Generative Pre-Trained Transformer 3

About GPT-3

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive natural language processing model developed by OpenAI, an artificial intelligence research laboratory founded in December 2015. GPT-3 is capable of generating human-like text in response to an input prompt. It has a wide range of possible applications, such as chatbot creation, summarization, writing code, spreadsheets, search engines, games, creative writing, and other miscellaneous tasks such as generating presentations.

The GPT-3 model has 175 billion parameters, while its predecessor, GPT-2, has 1.5 billion. The number of parameters has grown by more than 100-fold! As a result, the new model can generate text that is far more human-like. In fact, at the time of its release, OpenAI's GPT-3 was the largest natural language processing model ever trained. Microsoft has exclusively licensed the GPT-3 text generation model.
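As a rough back-of-the-envelope illustration (the 2-bytes-per-parameter, fp16 storage figure below is our own assumption, not an official number), the jump in scale can be quantified like this:

```python
# Back-of-the-envelope comparison of GPT-2 and GPT-3 model sizes.
gpt2_params = 1.5e9    # GPT-2: 1.5 billion parameters
gpt3_params = 175e9    # GPT-3: 175 billion parameters

ratio = gpt3_params / gpt2_params       # how many times larger GPT-3 is
fp16_gigabytes = gpt3_params * 2 / 1e9  # assume 2 bytes (fp16) per parameter

print(f"GPT-3 has ~{ratio:.0f}x more parameters than GPT-2")                 # ~117x
print(f"Storing the weights in fp16 takes roughly {fp16_gigabytes:.0f} GB")  # ~350 GB
```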

What is NLP?

Natural language is the manner in which we, humans, communicate with each other, either by speech or text. Natural Language Processing (NLP) is the computational processing of language using statistical tools, with diverse applications including the following (a small part-of-speech tagging sketch follows this list):

  • Understanding semantics
  • Predicting the next word
  • Text classification
  • Information retrieval
  • News article generation
  • Part-of-speech (PoS) tagging
  • Speech recognition
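As a minimal sketch of one of these tasks, here is part-of-speech tagging with the NLTK library; the library choice and the sentence are our own illustrative assumptions, not something specific to GPT-3:

```python
# Minimal part-of-speech (PoS) tagging example using NLTK.
import nltk

# One-time downloads of the tokenizer and tagger models.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

sentence = "GPT-3 generates human-like text from a short prompt."
tokens = nltk.word_tokenize(sentence)  # split the sentence into word tokens
tags = nltk.pos_tag(tokens)            # assign a PoS tag to each token

print(tags)  # e.g. [('GPT-3', 'NNP'), ('generates', 'VBZ'), ...]
```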

Typical flowchart of NLP models

Source: https://thetechnomaniac.com/what-is-natural-language-processing/

More Information

The GPT-3 model has been trained on about 45 TB of text data from multiple sources, including web-crawl data, WebText2 (outbound links from Reddit), Wikipedia, and others. The model can perform tasks when given varying numbers of examples as input; it is thus able to continue a conversation or a regular passage of text with minimal or even no examples at all.

Understanding zero-, one-, and few-shot tasks is useful for classifying GPT-3's modes of interaction and functioning.

  • Zero-shot is the mode where no illustration/sample is provided along with the task.
  • One-shot mode is very similar to few-shot mode, except that exactly one sample/instance is provided to the model as input along with the task.
  • Few-shot mode is similar in spirit to how a regular machine learning algorithm works: we show the model a handful of example inputs with their corresponding outputs and expect it to perform/predict on a new, unseen input based on those examples. The examples are supplied directly in the prompt, with no retraining of the model; a prompt sketch for all three modes follows this list.
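To make the three modes concrete, here is a sketch of how zero-, one-, and few-shot prompts can be laid out as plain text, following the English-to-French translation illustration from the GPT-3 paper; the exact wording is an example, not a required format:

```python
# Illustrative prompt layouts for the three interaction modes.
zero_shot = (
    "Translate English to French:\n"
    "cheese =>"                          # task description only, no examples
)

one_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"       # exactly one worked example
    "cheese =>"
)

few_shot = (
    "Translate English to French:\n"
    "sea otter => loutre de mer\n"       # several worked examples in the prompt,
    "peppermint => menthe poivrée\n"     # but no gradient updates to the model
    "plush giraffe => girafe peluche\n"
    "cheese =>"
)
```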

Source: https://www.researchgate.net/figure/Phases-of-NLP-architecture_fig1_319164243

An example of GPT-3 generating and completing short tasks.

Source: https://arxiv.org/pdf/2005.14165.pdf

Applications:

  1. GPT-3 is capable of writing programs from a problem statement in various languages, including Java, C, C++, Python, Go, JavaScript, and many more (a minimal API sketch follows this list). It can also generate articles and blog posts of a specified length, topic, and other specifications, as per the use case.
  2. You can also paste the URL of any website and ask the model to generate a similar page. The AI can explain complex paragraphs and articles in simple words for better understanding, and, given just a word, it can describe the affordances of the object it names.
  3. It can generate faces of people from a description, for example: generate the face of an Asian male with a neutral expression and blue eyes. Fun fact: it can generate a hypothetical conversation between a Twitter user and Einstein.
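As an illustration of the code-writing use case in item 1 above, here is a minimal sketch of a completion request against the OpenAI API; the package, engine name, parameters, and prompt are illustrative assumptions about the interface as commonly documented, not a prescribed recipe:

```python
# Sketch: asking GPT-3 to complete a small program via the (legacy) OpenAI API.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

prompt = (
    "Write a Python function that returns the n-th Fibonacci number.\n\n"
    "def fibonacci(n):"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine (assumed name)
    prompt=prompt,
    max_tokens=64,      # cap the length of the generated code
    temperature=0,      # low randomness for code-style output
)

print(prompt + response.choices[0].text)
```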

Conclusion:

Due to the huge amount of computing power needed to run it, GPT-3 is extremely expensive. OpenAI has not yet disclosed full details of how the model works and how it was built, so further speculation and detailed discussion of its internals is of limited value for now. There are also many potential risks involved with such large models, including misinformation, fake news, copyright issues, spam, and fraudulent essays.

On a brighter note, there are many other NLP models besides GPT-3 that can perform similar tasks. The potential of NLP and its applications is enormous: it is already used in marketing strategies as part of social media marketing and campaign management, where the machine interpretation of sentiment on social media can yield key insights into a company's social and brand image.

Written by: Jeet Barot

Reviewed By: Krishna Heroor

