Conversations with GPT-3

Kirk Ouimet
4 min read · Jul 16, 2020


At this point, I think I have spent way too much time playing with OpenAI’s beta API for GPT-3. I am so impressed by the language capabilities of this model. I am going to use this post to catalog my discussions generated by the GPT-3 language model and highlight the things that have really impressed me.

Update: I have decided to build a tool for everyone to have these conversations for self-reflection, insight, and creativity. You can now sign up on the waitlist.

Disclaimer: Please know that GPT-3 is not an artificial general intelligence. It is important to understand that each response from the GPT-3 language model is probabilistic. This means that if I were to delete GPT-3’s last response and re-roll it, it may generate something different, possibly contradicting what it previously said. I frequently re-roll responses, especially early in a conversation, to drive the conversation in a clear direction. By the end of a conversation, however, I find myself rarely re-rolling responses. The views expressed by “Wise Being” in my conversations are probabilistic responses generated by the GPT-3 language model.

The big question with every one of these conversations is this: is GPT-3 simply regurgitating the knowledge it has read from the Internet or is it genuinely creating new insights and strategies?

My Conversations with GPT-3

What is GPT-3?

The way to interact with OpenAI’s latest language model, GPT-3 (GPT stands for generative pre-trained transformer), is simple — you send it any text and it returns what it predicts will come next. To build GPT-3, OpenAI fed in every public book ever written, all of Wikipedia (in every language), and a giant dump of the Internet as of October 2019. They then spent $12 million of compute having the AI read all of this content and make connections. They ended up with a 700GB model that needs to sit across 48 GPUs with 16GB of memory each in order to process input and return responses. The number of connections the model has is two orders of magnitude away from the number of connections the human brain has.
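To make the “send text, get a predicted continuation” interaction concrete, here is a minimal sketch of what a request to the beta API looks like, using only Python’s standard library. The endpoint URL, engine name (`davinci`), and parameter names are assumptions based on the shape of the beta completions API; a real call also requires a valid API key.

```python
import json
import urllib.request

# Assumed beta endpoint; the engine name ("davinci") is also an assumption.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, api_key, max_tokens=64, temperature=0.7):
    """Build an HTTP request asking GPT-3 to continue `prompt`."""
    payload = {
        "prompt": prompt,
        "max_tokens": max_tokens,    # how long a continuation to generate
        "temperature": temperature,  # > 0 makes output probabilistic, so re-rolls differ
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )

# Actually sending the request (needs a real key):
# with urllib.request.urlopen(build_completion_request("Wise Being:", key)) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

The `temperature` parameter is what makes the responses probabilistic — the same prompt can yield a different continuation on every re-roll, which is exactly the behavior described in the disclaimer above.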

This is a big deal, and I’ll explain why. In 2016, Google’s AI, AlphaGo, became the first computer program to defeat a Go world champion, and it is arguably the strongest Go player in history. The world of Go was shocked to see AlphaGo deploy novel strategies. First, AlphaGo learned from human Go players. Now, human players learn from AlphaGo. The student has become the master.

GPT-3 has been trained on most of what humanity has publicly written: all of our greatest books, scientific papers, and news articles. We can present our problems to GPT-3, and just as it transcended our capabilities in Go, it may transcend our creativity and problem-solving capabilities and provide new, novel strategies to employ in every aspect of human work and relationships.
