Photograph: Donald Iain Smith/Getty Images
Those who, like this columnist, spend too much time online will have noticed something of a feeding frenzy over the past couple of weeks. The cause was the release of an intriguing chatbot – a software application that can conduct an online conversation. The bot causing all the fuss is ChatGPT, a prototype artificial intelligence (AI) chatbot that focuses on usability and dialogue and was developed by OpenAI, an AI research lab based in San Francisco.
ChatGPT uses a large language model built with machine-learning methods and is based on OpenAI’s GPT-3 model, which can produce human-like text when given a natural-language prompt. It is an example of what has been termed “generative AI”: software that uses machine-learning algorithms to enable machines to generate artificial content – text, images, audio and video – based on its training data, in a way that might persuade a human user that the output is “real”.
ChatGPT has become very popular because it is easy to access and use: it runs in a browser. All you have to do is open a free account with OpenAI and then give the program a task, describing what you want it to do in plain English. For example, you can ask it (as I did), “Is Donald Trump really a narcissist?” and get back a reply discussing narcissistic personality disorder: some argue that his behaviour and statements are in line with the diagnostic criteria of the disorder, while others believe his behaviour is better explained by other psychological factors.
Obviously, this isn’t exactly profound, but at least it’s grammatical. The bot also strives for a quasi-authoritative style, which should set some alarm bells ringing; authoritative-sounding misinformation can have more of a hold on mere mortals than the usual guff. But people seem to love the new bot. Even the Daily Mail is impressed. “The release of the AI chatbot,” it reported, “led to speculation that it could replace Google’s search engine within two years… Its ability to answer complex questions has led some to question whether it could challenge the monopoly of the Google search engine.”
ChatGPT is the latest instalment in a long-running debate about digital technology: is it something that augments human capabilities (like spreadsheets or a Google search, say), or is it a technology that ultimately aims to replace humans?
Since these generative AI systems are significantly better than earlier technologies at producing grammatical text, many people are unduly impressed by them, to the point where some poor souls have even begun to wonder whether the machines are sentient. What’s interesting about ChatGPT, though, is that it has surprised some of the sceptics who have tried it. A leading economist, Brad DeLong, for example, asked it to “write 500 words telling me what [Neal] Stephenson’s Young Lady’s Illustrated Primer would tell its reader about the rise of neo-fascism and Trumpism in the 2010s” – and got back a plausible little essay that took its cue from Stephenson’s 1995 science-fiction novel, The Diamond Age: or, A Young Lady’s Illustrated Primer.
The most significant question raised by the bot is whether it will change the assumptions people make when thinking about the impact of AI on employment. The conventional wisdom is that the kinds of jobs most at risk from automation are procedural, rule-based and routine ones. In this context, one of the most interesting experiments with ChatGPT was conducted by a business school professor, Ethan Mollick, who asked it to perform some of the main tasks that he himself performs. For example: “Create a syllabus for a 12-session MBA-level introductory entrepreneurship course and deliver the first four sessions. For each, include readings and assignments, as well as a summary of what will be covered. Include class policies at the end.”
The results surprised and impressed him. The bot produced “a perfect syllabus for an introductory MBA [master of business administration] class. The readings are reasonably modern (although it doesn’t give page numbers, among other errors), and it actually has a reasonable structure that builds up to a final project.” The experiment prompted some sobering reflections. “Rather than automating repetitive and dangerous work,” Mollick mused, “there is now the prospect that the first jobs disrupted by AI will be more analytical, creative, and involve more writing and communication.”
It will be interesting to see how that plays out. Naturally, before embarking on this column, I gave the bot the task: “Write an 850-word newspaper column in the style of John Naughton about whether AI tools augment or replace human capabilities.” The result was so impeccably bland that it could only have been written by a machine trained on the output of the Neue Zürcher Zeitung, the German-language Swiss paper. Phew! We columnists live to fight another day.
What I’ve been reading
If you’re not on Instagram and are suffering from Fomo (fear of missing out), relax. Kate Lindsay has good news for you in her Atlantic piece Instagram Is Over.
Use It Or Lose It – Semiconductor Version is Diane Coyle’s review, on her Enlightened Economist site, of Chris Miller’s book Chip War: The Fight for the World’s Most Critical Technology, about the geopolitics of silicon chips.
Heresy is a thoughtful essay by the computer scientist Paul Graham, on his eponymous website, addressing the modern incarnation of the concept.