Natural Language Processing Model Could Become as Big as Bitcoin

Natural language processing

A new natural language processing model called Generative Pre-trained Transformer 3, or GPT-3, could become the biggest thing since Bitcoin. Currently in beta, GPT-3 was developed by OpenAI, a non-profit AI research company backed by Elon Musk and Marc Benioff among others, and early tests show that it's capable of big things, such as generating realistic text, images, and code.

Computer scientists who have had the chance to work with GPT-3 say that the text it produces is indistinguishable from text written by a human and that, under the right conditions, it can even write creative fiction using nuanced language. Although GPT-3 isn't the first natural language processing model, it's the first to come close to such a standard. Previous, similar AIs have managed to generate text, but with rudimentary results. So, how does it do it?

Without getting too technical, GPT-3 uses a huge corpus of English text and neural nets (powerful computer models) to detect language patterns and learn how humans write. It also learns how to create connections and put information in context. For example, in a recorded experiment, a tester described Google search using phrases like "google logo," "search google," and "I'm Feeling Lucky," and within seconds GPT-3 generated the code of Google's homepage from ten years ago. If this capability matures, it could mean that sometime in the future, people could design apps and websites without having to know a line of code.
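GPT-3's transformer architecture is vastly more sophisticated than this, but the core idea of learning statistical patterns from text can be sketched with a toy bigram model that predicts the next word from the previous one. This is an illustrative analogy only, not GPT-3's actual method; the corpus and function names below are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny made-up corpus; GPT-3 was trained on hundreds of billions of words.
corpus = [
    "the cat sat on the mat",
    "the cat chased the mouse",
    "the dog sat on the rug",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" — the most common word after "the"
print(predict_next(model, "sat"))  # "on"
```

Where this toy model only looks one word back, GPT-3's attention mechanism weighs thousands of preceding tokens at once, which is what lets it stay on topic across whole paragraphs.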

And this wasn't the only test. Computer scientists have been using GPT-3 to make complicated texts more readable, write poems in a particular literary style, and even write music. That's because GPT-3 uses a whopping 175 billion learned parameters, which enable these capabilities. In the past, AI-generated text was often clumsy, awkward, and unnatural, but this natural language processing model seems different: it fooled forum members into believing the comments it wrote were written by humans. It even wrote a blog post about it!

If reading this makes you a bit scared of an AI takeover and your job becoming obsolete, there's no need to go that far yet. While GPT-3 is an incredible achievement that could have many applications, it's far from perfect: it doesn't have common sense, it doesn't understand its output in a meaningful way, and, when it deals with concepts and texts it doesn't yet grasp, it makes basic mistakes that no human would ever make. Unlike humans, GPT-3 doesn't maintain a persistent mental model, nor does it have a point of view, which is why some of the text it produces tends to become confusing and lose its focus after a few paragraphs. According to GPT-3's researchers themselves, at the current stage, the model can contradict itself over long passages. Nevertheless, this is the most sophisticated natural language processing model to date, and its implications are huge.