Explained: What is Generative AI, the technology behind OpenAI’s ChatGPT?
Generative artificial intelligence has become a buzzword this year, capturing the public’s imagination and spurring a rush by Microsoft and Alphabet to launch products built on the technology, which they believe will change the nature of work.
Here’s what you need to know about the technology.
What is Generative AI?
Like other forms of artificial intelligence, generative AI learns from past data in order to act. But instead of simply classifying or recognizing data as other AI does, it creates brand-new content – text, an image, even computer code – based on that training.
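To make the idea concrete, here is a deliberately simple toy sketch: a tiny Markov-chain text generator. It is nothing like the systems behind ChatGPT in scale or sophistication, but it illustrates the same core idea the paragraph above describes — learn patterns from past text, then produce new text one word at a time. The training sentence and function names below are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: real large language models are vastly more
# complex, but the principle is similar — learn from past data, then
# generate new content from what was learned.

training_text = (
    "generative ai learns from data and generative ai creates new text "
    "and generative ai creates new images"
)

# Learn which word tends to follow each word in the training data.
transitions = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=6, seed=0):
    """Produce new text by repeatedly sampling a likely next word."""
    random.seed(seed)
    output = [start]
    for _ in range(length):
        choices = transitions.get(output[-1])
        if not choices:
            break
        output.append(random.choice(choices))
    return " ".join(output)

print(generate("generative"))
```

The generated sentence is new — it never appears verbatim in the training text — yet every word-to-word step was learned from it, which is the essence of generating content "based on that training."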
The best-known generative AI application is ChatGPT, a chatbot released late last year by Microsoft-backed OpenAI. The AI powering it is known as a large language model because it takes a text prompt and writes a human-like response to it.
GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can take in not only text but images as well. OpenAI’s president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build and generate a real one from it.
What is it good for?
Flashy demonstrations aside, businesses are already putting generative AI to work.
The technology is helpful, for instance, for creating a first draft of marketing copy, though the output may need cleanup because it isn’t perfect. One example is CarMax, which has used a version of OpenAI’s technology to summarize thousands of customer reviews and help shoppers decide which car to buy.
Generative AI can likewise take notes during a virtual meeting. It can draft and personalize emails, and it can create slide presentations. Microsoft and Alphabet’s Google each showcased these features in product announcements this week.
What’s wrong with that?
Nothing, although there is concern about the technology’s potential misuse.
School systems have worried about students turning in AI-drafted essays, undercutting the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to spread far more disinformation than before.
At the same time, the technology itself is prone to making mistakes. Factual inaccuracies asserted confidently by the AI, called “hallucinations”, and erratic responses like professing love to a user are all reasons why companies have aimed to test the technology before making it widely available.
Is it only about Google and Microsoft?
Those two companies are at the forefront of research and investment in large language models, as well as the biggest to put generative AI into widely used software such as Gmail and Microsoft Word. But they are not alone.
Large companies like Salesforce, as well as smaller ones like Adept AI Labs, are either building their own competing AI or packaging technology from others to give users new powers through software.
How is Elon Musk involved?
He was one of the co-founders of OpenAI along with Sam Altman. But the billionaire left the startup’s board in 2018 to avoid a conflict of interest between OpenAI’s work and the AI research being done by Tesla — the electric-vehicle maker he leads.
Musk has expressed concern about the future of AI and advocated for a regulatory authority to ensure the technology’s development serves the public interest.
“It’s quite a dangerous technology. I fear I may have done some things to accelerate it,” he said at the end of Tesla Inc’s investor day event earlier this month.
“Tesla doing cool stuff in AI, I don’t know, it stresses me out, not sure what else to say about it.”
© Thomson Reuters 2023