Since its recent beta launch, GPT-3 has captured the imagination of the technology community with its remarkable deep learning capabilities. Already, it’s been used to create fascinating conversations, write works of fiction and design web pages in minutes. But what does GPT-3 mean for the present and future of conversational AI interactions? We asked our Senior Computer Vision and Machine Learning Developer, Amir HajiRassouliha, to explain.


What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is the third generation of the GPT models developed by OpenAI. GPT-3 has a transformer-based deep learning neural network architecture and is trained on 45 TB of text data from datasets available on the internet, such as Wikipedia and books.

By one estimate, that’s the equivalent of around 3.4 billion pages of text used to train the model – or around 2.7 million copies of War and Peace.

GPT-3 is a language model, meaning it is trained on text to capture the relationships between words so it can predict how likely a given word or sentence is in context.
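To make the idea of a language model concrete, here is a minimal sketch using a bigram model – vastly simpler than GPT-3’s transformer, but built on the same principle of assigning probabilities to what text comes next. The tiny corpus is invented for illustration only.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge text datasets GPT-3 was trained on.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probability(prev, nxt):
    """Estimated probability that `nxt` follows `prev` in the corpus."""
    counts = follows[prev]
    total = sum(counts.values())
    return counts[nxt] / total if total else 0.0

# "the" is followed by "cat" twice, "mat" once and "fish" once,
# so "cat" gets probability 2/4 = 0.5 here.
print(next_word_probability("the", "cat"))
```

GPT-3 does the same kind of prediction, but instead of counting word pairs it uses a 175-billion-parameter transformer that can take thousands of preceding words into account.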

The architecture of GPT-3 is not novel. It is similar to previous language models such as Google BERT and Microsoft Turing-NLG, but GPT-3 is the largest model ever created. It has 175 billion parameters, whereas Google BERT has only 110 million parameters and Microsoft Turing-NLG has 17 billion.

Neural network parameters are the weights of the connections between the nodes of the network, and they are learnt and optimised during the training process. Training such a complex network is very time-consuming and expensive. It is estimated that training the GPT-3 model cost more than $12 million.
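To get a feel for what a parameter count means, the sketch below counts the weights and biases of a small stack of fully connected layers. The layer sizes are made up for illustration; GPT-3’s real layers are transformer blocks, not this toy stack.

```python
# A fully connected layer mapping n_in inputs to n_out outputs has
# n_in * n_out weights plus n_out biases -- these numbers are the
# "parameters" adjusted during training.
def count_parameters(sizes):
    total = 0
    for n_in, n_out in zip(sizes, sizes[1:]):
        total += n_in * n_out + n_out  # weights + biases
    return total

# Illustrative sizes only: 512 inputs -> 2048 hidden -> 512 outputs.
params = count_parameters([512, 2048, 512])
print(params)                       # parameters in this toy network
print(175_000_000_000 // params)    # how many such networks fit in GPT-3
```

Even this three-layer toy has around two million parameters, and GPT-3 holds roughly 83,000 times that many – which is why training it required so much compute.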

How does GPT-3 work?

GPT-3 works similarly to other deep learning models: the layers of the neural network generate output based on the input they receive. However, GPT-3 can perform complex tasks that no other model is capable of, with only a few training examples.

For instance, GPT-3 can be used as a search engine, translator, programmer, resume builder or even represent a famous historical person. This video includes some interesting creations already made using GPT-3.

This is a significant improvement over other available models. For example, while GPT-3 can learn to translate English to French from a few example sentences, Google BERT requires tens of thousands of sentences to do the same.
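This “few-shot” approach works by placing the examples directly in the model’s input prompt and letting it continue the pattern, rather than retraining the network. A sketch of what such a prompt might look like is below; the exact format and the helper function are illustrative assumptions, not OpenAI’s official API.

```python
# Few-shot prompting sketch: show the model a handful of examples and
# ask it to complete the next line. The example pairs are illustrative.
examples = [
    ("cheese", "fromage"),
    ("sea otter", "loutre de mer"),
    ("house", "maison"),
]

def build_few_shot_prompt(examples, query):
    """Assemble a plain-text few-shot prompt for English-to-French translation."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"{english} => {french}")
    lines.append(f"{query} =>")  # the model is asked to complete this line
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "cat")
print(prompt)
```

The model sees the pattern in the prompt and, because it has already learnt so much about language during pre-training, continues it correctly – no gradient updates or labelled training set required.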

This is perhaps the most exciting part of GPT-3: it can be programmed to perform custom tasks with very little training data. GPT-3 learns very quickly, after all.

What can GPT-3 do to progress conversational AI?

GPT-3 is naturally good at predicting what text should come next, so it is capable of creating conversations that are very human-like. AIs are traditionally good at question answering on a specific topic, but sometimes become confused when the topic changes quickly.

GPT-3 can easily switch topics and keep answering questions. And while answering common-sense questions is traditionally very challenging for AI, GPT-3 has shown very good performance on such questions.

All these new capabilities make GPT-3 an important breakthrough in conversational AI. However, it should be noted that GPT-3 cannot answer correctly when asked about things that do not make sense or that humans would not normally ask about. For example, GPT-3 answers “how many eyes does a foot have?” with “a foot has two eyes”.

There are many more instances of GPT-3 failing to answer illogical questions here.

Why GPT-3 matters to the future of digital humans

The ability to have natural conversations is a huge advantage for digital humans. Using the AI to hold conversations as historical figures is one exciting opportunity that GPT-3 can bring.

GPT-3 generates the conversation, which is then embodied by a digital human platform to create the visual appearance and characteristics of that historical person. This could be an exciting prospect for industries like education in particular, where students could learn, say, more about physics through flowing conversation with Albert Einstein. Or people could have more natural, thoughtful conversations around philosophy, psychology or art with the likes of Descartes, Freud or Salvador Dalí.

Despite all these advantages, GPT-3 needs to be tested more comprehensively. The complexity of the model may make it too slow for real-time use with current technology. How the AI handles sensitive topics is another long-standing issue of conversational AI that needs to be considered for digital humans.

Building the future of meaningful AI interactions

As conversational AI becomes more sophisticated, it continuously improves the way it simulates natural human interactions. However, the technology for delivering that experience can also benefit from having a more natural, human dimension, too.

To find out more about digital humans, how they work and why organizations are building these types of interactions, you can download our free ‘what are digital humans’ guide below.
