What Is GPT-3? What Does It Mean for the Future?
Misinformation was already a problem with internet trolls, bots, nation-state propaganda, and so much more that we had to deal with in 2020. COVID-19 is a great example. Just as the world was beginning to see a glimpse of hope for the end of this global pandemic, social media and the internet had other ideas.
Some people just want to watch the world burn, as Alfred put it in The Dark Knight. For such individuals or groups, the main goal is to cause as much chaos as possible. In this case, trending videos from TikTok and Reddit forums claiming that the vaccines will “implant microchips on the site of the vaccination” or cause “infertility” are just two examples from a long, manufactured list of false information.
Generally, these sites or posts cite no sources, contain misspellings, have no profile image, were created recently, or originate from the same country. They are typically made by trolls, bots, or nation-states. But not all of us have the intuition to tell that what we’re reading is false.
We can all agree that at least it can’t get any worse, right? Well, unfortunately, it will.
Why? Say hello to artificial intelligence, specifically GPT-3, which is scheduled to go commercial after June 2021.
Fortunately, it has not gone global and, most importantly, not fully public: for now it is only available in North America and only in English, accessible via a “waitlist” while in beta. Things may have changed by the time this is published.
What is GPT-3?
GPT-3, or Generative Pre-trained Transformer 3, is currently “the largest artificial neural network ever created.” “Before the release of GPT-3, the largest language model was Microsoft’s Turing NLG, introduced in February 2020, with a capacity of 17 billion parameters—less than a tenth of GPT-3’s,” according to Analytics India Magazine.
This means it is a smart text generator: it “can create anything that has a language structure – which means it can answer questions, write essays, summarize long texts, translate languages, take memos, and even create computer code.”
The older, much simpler, and, some would say, dumber sibling of the current revision of GPT-3 is, you guessed it, GPT-2! Not the most creative name, but it works…
How it worked was pretty simple: “When given a prompt — say, a phrase or sentence — GPT-2 could write a decent news article, making up imaginary sources and organizations and referencing them across a couple of paragraphs,” according to a journalist at Vox who had the opportunity to play around with GPT-3.
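To make the prompt-in, article-out idea concrete, here is a minimal sketch of a completion request using the `openai` Python package offered during the beta; the API key placeholder, engine name, and parameters are illustrative assumptions rather than a definitive recipe.

```python
# A minimal sketch, assuming beta access to the OpenAI API
# via the `openai` Python package available at the time of writing.
import openai

openai.api_key = "YOUR_API_KEY"  # issued after getting off the waitlist

# Hand the model the opening of a "news article" and let it continue.
prompt = "Scientists announced today that the new vaccine"

response = openai.Completion.create(
    engine="davinci",   # illustrative engine name from the beta
    prompt=prompt,
    max_tokens=120,     # how much text to generate
    temperature=0.7,    # higher values produce more varied prose
)

print(prompt + response["choices"][0]["text"])
```

Everything the model “writes” is a continuation of whatever prompt you give it, which is exactly why it can invent plausible-sounding sources and organizations out of thin air.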
When was this created? Who will it affect?
According to a TechTarget SearchEnterpriseAI article from March 25, 2021, OpenAI was “formed in 2015 as a nonprofit,” and it developed GPT-3 as one of its research projects.
Will it take our jobs? For some, this is a high likelihood over the coming years as the technology evolves. As of now, it could replace telemarketers, online customer service agents, some storytelling and story writing, most journalists, and other roles.
With all the doom and gloom about robots and A.I. taking our jobs, we must also consider the benefits of this innovation. It may surprise you, but it could make your job easier!
It’ll boost productivity in computer programming, as it will “translate” your query into usable code. This will undoubtedly open more opportunities for people who don’t know how to code but would like to create a program or game to make their vision or idea a reality.
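As a concrete illustration of that query-to-code idea, here is a rough sketch of the kind of prompt a beta tester might send through the same assumed `openai` Python client; the engine name, parameters, and prompt wording are my own assumptions, and whatever code comes back would still need a human review.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Describe what you want in plain English and ask for code back.
prompt = (
    "# Python\n"
    "# Write a function that returns the average of a list of numbers.\n"
)

response = openai.Completion.create(
    engine="davinci",   # illustrative; the beta offered several engines
    prompt=prompt,
    max_tokens=80,
    temperature=0,      # low temperature keeps generated code predictable
)

print(response["choices"][0]["text"])
```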
Communication will become much simpler, as language barriers are lifted with the help of the neural network powering GPT-3. It reads text not just in English but potentially in many other written languages. That means you won’t have to change the way you naturally type, and you can watch it translate into, for example, Spanish that reads almost as if it were written by a native speaker.
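Translation works the same way: you show the model the pattern you want inside the prompt itself. The sketch below is a hypothetical few-shot English-to-Spanish prompt using the same assumed beta client; the example pairs and parameters are illustrative only.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# A few-shot prompt: a couple of example pairs, then the sentence to translate.
prompt = (
    "English: Where is the train station?\n"
    "Spanish: ¿Dónde está la estación de tren?\n"
    "English: I would like a cup of coffee, please.\n"
    "Spanish:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=40,
    temperature=0.3,
    stop=["\n"],   # stop once the translated line is finished
)

print(response["choices"][0]["text"].strip())
```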
Believe it or not, journalism itself stands to benefit. Imagine you are a journalist: you have all the information you could ever need to cover a once-in-a-lifetime news story, but only an hour to meet the deadline. Sounds too good to be true, right? Lucky for you, that same neural network can extract information from the data you collected and transform it into an easy-to-understand article with engaging headlines and paragraphs. Pretty neat, huh? And we’re just scratching the surface of what this artificial intelligence can do; it can even write poetry and power interactive “Virtual Beings” that talk much like humans.
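A hedged sketch of that deadline-saving workflow might look like the following, where hypothetical reporter’s notes are handed to the same assumed beta client and a readable draft paragraph is requested back.

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Rough reporter's notes go in; a readable draft paragraph comes out.
notes = (
    "Notes:\n"
    "- City council voted 6-1 to fund new bike lanes\n"
    "- Project costs $2 million, construction starts in May\n"
    "- Opponents worry about lost parking downtown\n\n"
    "Write a short, clear news paragraph based on these notes:\n"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=notes,
    max_tokens=120,
    temperature=0.5,
)

print(response["choices"][0]["text"].strip())
```

The journalist still has to check the facts; the model only rephrases what it is given (and sometimes more than it is given).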
Where does it fit in? How does it work?
According to Forbes, “In terms of where it fits within the general categories of AI applications, GPT-3 is a language prediction model.” This means that if you give it a prompt or a sentence, or ask it a question, it will, to the best of its ability, give you a response to your question, including relevant-sounding sources, or fill in “information” using your words as context, in as human-like a way as possible.
How? “GPT-3 (like its predecessors) is an unsupervised learner,” according to an article on Vox. What is unsupervised learning? It is “where AI gets exposed to lots of unlabeled data and has to figure out everything else itself,” according to Vox. This essentially means that, using only the context of the texts given to it, the model must find similarities and patterns, which may overlap, and teach itself how to produce natural, human-like text.
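To get a feel for what “exposed to lots of unlabeled data and has to figure out everything else itself” looks like, here is a deliberately tiny, hypothetical Python sketch. It only counts which word tends to follow which in a few sentences of raw text, so it is nowhere near GPT-3’s giant neural network, but it captures the spirit of learning patterns from unlabeled text and then continuing a prompt.

```python
from collections import Counter, defaultdict
import random

# Unlabeled text: no question/answer pairs, no labels of any kind.
corpus = (
    "the vaccine does not contain a microchip . "
    "the vaccine went through clinical trials . "
    "the trials measured safety and efficacy ."
).split()

# "Learn" by counting which word tends to follow which; the only
# signal is the raw text itself, the spirit of unsupervised learning.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Continue a prompt word by word using the learned statistics."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

print(generate("the"))
```

GPT-3 does something conceptually similar at an enormous scale: instead of word counts over a few sentences, it learns billions of parameters over a huge slice of the internet, which is why its continuations read so naturally.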
Why was it created?
According to OpenAI, the organization was founded in 2015 “with an aim to tackle the larger goals of promoting and developing ‘friendly AI’ in a way that benefits humanity as a whole.” In this case, that meant helping computers better interpret and understand natural human language. As of September 2021, it can only do this in English, since a public, global release has not yet been announced; access is only available through the waitlist posted on the OpenAI blog under “GPT-3 apps.”
Another article on Vox further explains its creation: “a few years ago, language AIs were taught predominantly through an approach called ‘supervised learning.’ That’s where you have large, carefully labeled data sets that contain inputs and desired outputs.” The problem with this method is that the AI only recognizes predetermined questions and outputs generic answers, producing unnatural sentences, because it never picks up the small nuances of how we humans talk and write. Supervised learning also isn’t how humans acquire skills and knowledge; we use evidence and reasoning to reach conclusions, drawing on our logic or “common sense” about the world around us. A good example is “monkey see, monkey do”: if you watch a classmate or friend approach a door and try to open it, only for it not to open, you can safely assume the door to the store or classroom is locked. In other words, we actually learn a lot through unsupervised learning, without the carefully handpicked examples and answers of supervised learning.