ChatGPT can tell jokes, even write articles. But only humans can detect its fluent bullshit

Photograph: NurPhoto/Getty Images

As the capabilities of natural language processing technology continue to advance, there is growing hype around the potential of chatbots and conversational AI systems. One such system, ChatGPT, claims to be able to engage in natural and human-like conversations and even provide useful information and advice. However, there are valid concerns about the limitations of ChatGPT and other conversational AI systems and their ability to truly replicate human intelligence and interaction.

No, I didn’t write that. It was actually written by ChatGPT itself, a conversational AI software program, after I asked it to create “an opening paragraph to a sceptical article about ChatGPT’s capabilities in the style of Kenan Malik”. I could quibble about the stodgy prose, but it’s an impressive attempt. And it’s not hard to understand why there has been so much excitement, hype even, about the latest version of the chatbot since it was released a week ago.

Fed with vast amounts of human-created text, ChatGPT looks for statistical patterns in this data, learning which words and phrases are associated with which others, and so becoming able to predict which words should come next in a given sentence, and how sentences fit together. The result is a machine that can persuasively mimic human language.
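To make that idea concrete, here is a deliberately tiny sketch in Python of next-word prediction from co-occurrence counts. It is illustrative only: ChatGPT itself is a vastly larger neural network, and the corpus and names below are invented for the example. But the core task, predicting the next word from statistical patterns in text, is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor built from raw co-occurrence counts.
# (Illustrative only: ChatGPT is a far larger neural network, but the
# underlying task -- predict the next word from patterns -- is the same.)
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1  # count how often nxt follows word

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (it follows 'the' twice in the corpus)
```

Scale the counting up by many orders of magnitude, swap the hand-counted table for a learned neural network, and the output stops looking like a parlour trick and starts looking like speech.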

It can write grade-A essays, but it will also tell you that crushed glass is a useful health supplement

This ability to mimic allows ChatGPT to write essays and poetry, crack jokes, formulate code, and answer questions, whether for a child or an expert. And to do it so well that many over the past week have been both celebrating and panicking. “Essays are dead,” wrote the cognitive scientist Tim Kietzmann, a view amplified by many academics. Others suggest it will finish off Google as a search engine. And the program itself thinks it might be able to replace humans in jobs from insurance agent to stenographer.

And yet the chatbot that can write grade-A essays will also tell you that if one woman can produce a baby in nine months, nine women can produce a baby in one month; that a kilo of beef weighs more than a kilo of compressed air; and that crushed glass is a useful health supplement. It can make up facts and reproduce many of the biases of the human world on which it is trained.

ChatGPT can be so convincingly wrong that Stack Overflow, a platform for developers to get help writing code, has banned users from posting answers generated by the chatbot. “The primary problem,” the moderators wrote, “is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.” Or, as another critic put it, it’s fluent bullshit.

Some of these issues will be resolved over time. Every conversation involving ChatGPT becomes part of the database used to improve the program. The next iteration, GPT-4, is scheduled for next year and will be more persuasive and make fewer mistakes.

Beyond such incremental improvement, however, lies a fundamental problem facing any form of AI. A computer manipulates symbols. Its program specifies a set of rules for transforming one string of symbols into another, or for recognizing statistical patterns. But it does not specify what those symbols or patterns mean. To a computer, meaning is irrelevant. ChatGPT “knows” (at least most of the time) what seems meaningful to humans, but not what is meaningful to itself. It is, in the words of the cognitive scientist Gary Marcus, a “mimic that doesn’t know what it’s talking about”.
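To see what “manipulating symbols without meaning” amounts to, consider a deliberately crude sketch (the rules and names below are hypothetical, invented for illustration, not any real system’s code): the program “translates” words by lookup alone. It produces correct output without anything we would call understanding; scale that up enormously, with statistical rules learned from data rather than written by hand, and you have the kind of gap Marcus is pointing to.

```python
# A minimal sketch of pure symbol manipulation: the program swaps one
# string of symbols for another by rule. Meaning never enters into it.
rules = {"hello": "bonjour", "world": "monde"}  # hypothetical rule table

def transform(symbols):
    """Rewrite each symbol according to the rules; unknown symbols pass through."""
    return [rules.get(s, s) for s in symbols]

print(transform(["hello", "world"]))  # -> ['bonjour', 'monde']
```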

Humans, in thinking and talking and reading and writing, also manipulate symbols. For humans, however, unlike for computers, meaning is everything.

When we communicate, we communicate meaning. What matters is not just the outside of a string of symbols but its inside too, not just the syntax but the semantics. For humans, meaning emerges through our existence as social beings, embodied and embedded in the world. I make sense of myself only insofar as I live in, and relate to, a community of other thinking, feeling, talking beings.

ChatGPT reveals not only the advances made in AI, but also its limitations

Of course, humans also lie and manipulate, and are drawn to, and promote, conspiracy theories that can have devastating consequences. This, too, is part of being a social being. But we recognize humans as potentially flawed, deceitful, bullshitting, manipulative.

Machines, however, we tend to view as objective and unbiased, or potentially evil if sentient. We often forget that machines can be biased or just plain wrong, because they are not grounded in the world in the way humans are, and because they need to be programmed by humans and trained on human-gathered data.

We also live in an age in which surface often matters more than depth of meaning. An age in which politicians too often pursue policies not because they are necessary or right in principle, but because they play well in focus groups. An age in which we frequently ignore the social context of people’s actions or speech, dazzled instead by literalness. An age in which students are, in the words of the writer and educator John Warner, “rewarded for… regurgitating existing information” in a system that “privilege[s] superficial correctness” rather than “develop[ing] their writing and critical thinking skills”. That ChatGPT seems to write grade-A essays so easily, he suggests, “is mostly a comment on what we value”.

None of this is to deny the remarkable technical achievement that ChatGPT represents, or how astonishing it can be to interact with. It will undoubtedly become a useful tool, helping to enhance both human knowledge and creativity. But we need to keep perspective. ChatGPT reveals not only the advances being made in AI but also its limitations. It also helps to throw light on both the nature of human cognition and the character of the contemporary world.

More immediately, ChatGPT also raises questions about how we relate to machines that are far better at churning out bullshit and spreading disinformation than humans themselves are. Given the difficulties of dealing with human misinformation, these are not questions that should be postponed. We should not become so mesmerized by ChatGPT’s persuasiveness that we forget the real problems such programs may pose.

• Kenan Malik is a columnist for the Observer

  • Do you have an opinion on the issues raised in this article? If you would like to submit a letter of up to 250 words for consideration for publication, please email observer.letters@observer.co.uk
