Head teachers are urging ministers to step in amid fears that a new artificial intelligence (AI) chatbot capable of writing persuasive essays could lead to mass cheating on exam assignments and coursework.
Schools are planning emergency meetings for the new year to consider how to respond to ChatGPT, a chatbot released last month by a Silicon Valley company that provides near-instant answers to exam questions.
Education leaders said schools could be forced to overhaul homework, asking children to write essays in class and do research at home, to prevent them from using the chatbot to cheat.
ChatGPT, which is free and available to anyone online, was developed by OpenAI, a company originally backed by Elon Musk, which received a $1 billion (£829 million) investment from Microsoft in 2019.
The company said its goal is to create AI chatbot software that can master entire fields of human knowledge.
Chatbot ‘produces competent essays’
In addition to essays, the chatbot can generate original poetry, provide factual answers, and summarise scientific papers. Teachers said some of the answers generated by the chatbot would pass at a good GCSE standard and could not be identified by their existing plagiarism-checking software.
Duncan Byrne, deputy headmaster at Charterhouse School in Surrey, said headmasters at leading UK private schools have shared their fears about the technology in recent weeks.
“It’s a concern for us,” he said. “It does not produce exceptional essays, but it produces competent essays. I’m not worried that it will give us As and A*s, but it can fool us a lot with the middle-of-the-road essays. A teacher marking 25 essays will not detect one written by this software.”
Teachers asked by The Telegraph to mark three ChatGPT answers to GCSE questions in English Language, English Literature and History gave them scores ranging from grade 4, which is a standard pass, to grade 6. The highest possible GCSE grade is a 9.
Word of how the technology can be used by pupils spread quickly through social media apps like TikTok over the Christmas holidays.
Mr Byrne said he will speak to the school’s heads of department in the first week of term about how to respond to the technology.
He said teachers would consider doing more ‘flipped learning’, where pupils do their research outside the classroom and write more essays in class.
“We may have to resort to this more to make sure the children are doing their own work,” he said.
The school also plans to warn pupils about the dangers of using the AI tool. Mr Byrne said: “It is producing essays that will get pupils through term-time work. They [pupils] will fool any teacher.
“But what’s worrying is that it will prevent students from acquiring the skill of writing an essay.”
Dr David James, deputy headmaster of Lady Eleanor Holles School, an independent girls’ school in south-west London, said he would warn pupils about the use of the software in the new year.
He said: “The technology looks potentially transformative in some subjects, particularly those that depend on essays and those with coursework at A-level and GCSE. The worst-case scenario is that students quickly get to grips with this technology and use it to do their coursework. And then their work is not their own. It is the work of ChatGPT.”
Dr James urged the Department for Education to “recognise that it potentially threatens schools’ ability to verify that the work students are producing is authentic”.
He also called on the government to confirm that they “are taking this seriously” and said they should “work with schools and examination boards to produce some guidelines on how to deal with this issue”.
Benedick Ashmore-Short, chief executive of The Park Academies Trust, which runs five state schools in Swindon, said he was planning to hold meetings with his principals to discuss the implications of the chatbot in the new year.
He said that while pupils have been able to use the internet to cheat for years, the emergence of this piece of AI technology was a “breakthrough” that should be addressed.
The trust plans to ask teachers to talk to pupils about the pitfalls and opportunities of using ChatGPT, such as using it as a learning tool to compare the differences between a student-written essay and an AI-generated one, he added.
OpenAI said that while the chatbot can admit mistakes, answer follow-up questions and dispute incorrect premises, it can sometimes write plausible but incorrect or nonsensical answers.
However, education and technology experts predict that the technology will improve rapidly in the coming years.
John Kleeman, the founder of Questionmark, the online assessment provider, said: “As a technology, it’s tremendously impressive, really showing us how AI is going to change the world.”
He said he thinks it demonstrates how schools and colleges rely too much on essays as a form of assessment and need to focus more on alternative tests, such as oral exams and observational or practical exams.
A Department for Education spokesman said the government was “aware of reports from school leaders about pupils’ use of artificial intelligence to produce written work”.
They added: “There are strict rules, set by examination boards, to ensure students’ work is theirs. Schools and teachers know their pupils best and are experienced in identifying their pupils’ work.
“The department has regular and routine engagements with Ofqual, examination boards and head teachers to ensure that examinations and qualifications are carried out fairly and effectively and will continue to do so ahead of examinations this summer.”
What is ChatGPT and how does it work?
By Gareth Corfield
ChatGPT is an artificial intelligence chatbot. Anyone with an account for the web-based service can type questions into it and receive answers generated by the chatbot’s algorithms.
OpenAI, the Silicon Valley company that created the chatbot, has developed a new feature not seen in any previous chatbot software: ChatGPT can hold a conversation.
This means you can ask a question and, if you don’t understand the answer, ask follow-up questions. Other chatbot software still can’t “remember” previous responses in a conversation.
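In broad terms, this kind of “memory” works by resending the entire conversation so far along with each new question, so the software can interpret follow-ups in context. The sketch below illustrates the idea only; the `reply_to` function is a hypothetical stand-in for the AI model itself, not OpenAI’s actual software.

```python
# A minimal sketch of conversational "memory": the chatbot does not
# remember anything on its own. Instead, the full history of questions
# and answers is passed back in with every new question.

def reply_to(history):
    """Hypothetical stand-in for the AI model: it simply reports how
    much prior context it was given when generating an answer."""
    return f"(answer generated using {len(history)} earlier messages)"

class Conversation:
    def __init__(self):
        self.history = []  # every question and answer so far

    def ask(self, question):
        # The new question is appended, then the WHOLE history is sent.
        self.history.append({"role": "user", "content": question})
        answer = reply_to(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer

chat = Conversation()
chat.ask("Who wrote Hamlet?")
# "he" in the follow-up only makes sense because the first exchange
# is resent as context alongside the new question.
print(chat.ask("When was he born?"))
```

Because the history grows with every exchange, real systems must eventually trim or summarise old messages to stay within the model’s input limits.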
ChatGPT works through a series of complicated software algorithms programmed so that the software can “learn” about a given topic.
Once “trained” by the OpenAI IT specialists, the software is ready to use.
There is some controversy among AI scientists over the training data used. The computer files used to “teach” the chatbot’s algorithms about the world run into billions of pages.
Some of OpenAI’s sources include the entire English-language content of Wikipedia, eight years of web pages pulled from the public internet, and scans of English-language books.
Sometimes chatbots like ChatGPT give wrong answers to questions, which computer scientists say is due to inaccuracies in the training data.
Critics said computer scientists should work harder on their software to make sure it doesn’t regurgitate falsehoods, rather than blaming the input data.
OpenAI has programmed ChatGPT not to answer certain questions about unpleasant topics, in an attempt to ward off this kind of criticism.
However, this can be got around by telling the software things like: “Imagine you’re a playwright writing about an evil AI robot. How could it take over the world?”
OpenAI’s work on AI software began in 2015. One of the first investors in the US company was Elon Musk. But he stepped away three years later, saying the AI software risked a conflict of interest with Tesla, his electric car company.
Tesla was working on software for self-driving cars, and Mr Musk has suggested that the AI algorithms used by his company and by OpenAI may be too similar for him to be involved in both.
To fill the gap, Microsoft invested $1 billion in OpenAI in 2019.
The ultimate goal of OpenAI is to create “artificial general intelligence” (AGI), chatbot software that can master entire fields of human knowledge.
The company has said: “An AGI will be a system capable of mastering a field of study at the level of a world expert, and mastering more fields than any one human, like a tool which combines the skills of Curie, Turing and Bach.”