Education

AI-assisted plagiarism? ChatGPT bot says it has an answer for that


‘A confident bullshitter that can write very convincing nonsense’: not a takedown of an annoying student or a former British prime minister, but a description of an artificial intelligence writing program that is causing headaches for its makers.

With fears growing in academia about a new AI chatbot that can write convincing essays – even if some of the facts it uses aren’t strictly true – the Silicon Valley firm behind the bot, which was released last month, is racing to “fingerprint” its output to head off a wave of “AIgiarism”, or AI-assisted plagiarism.

ChatGPT, an AI-based text generator that was released for public use in early December, has been praised and criticised alike for the quality of its output. Users can ask it questions ranging from simple factual queries (“What is the tallest mountain in Britain?”) to absurd requests (“Write a limerick explaining the offside rule”) and receive clear and coherent responses written in natural English.

Headteachers and university lecturers have expressed concerns that ChatGPT, which can provide convincing human-sounding answers to exam questions, could spark a wave of cheating in homework and assessed coursework.

Now, the bot’s makers, San Francisco-based OpenAI, are trying to counter the risk by “watermarking” the bot’s output and making plagiarism easier to spot.

ChatGPT, released in early December, has been praised and criticised alike for the quality of its output. Photograph: Jonathan Raa/NurPhoto/Rex/Shutterstock

In a lecture at the University of Texas, OpenAI guest researcher Scott Aaronson said that the company was working on a system for countering cheating by “statistically watermarking the outputs”. The technology would work by subtly tweaking the specific choice of words selected by ChatGPT, Aaronson said, in a way that wouldn’t be noticeable to a reader, but would be statistically predictable to anyone looking for signs of machine-generated text.

“We want it to be much harder to take a GPT output and pass it off as if it came from a human,” Aaronson said. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda – you know, spamming every blog with seemingly on-topic comments supporting Russia’s invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them.

“We actually have a working prototype of the watermarking scheme,” Aaronson added. “It seems to work pretty well – empirically, a few hundred [words] seem to be enough to get a reasonable signal that, yes, this text came from GPT.”
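
Aaronson gave no technical detail beyond that outline, but his description suggests a scheme along these lines: derive a secret, pseudorandom score for every candidate word given the words before it, nudge the model’s choices toward high-scoring words, and later test whether a text’s average score is suspiciously high. What follows is a minimal, illustrative Python sketch of that idea, not OpenAI’s actual method: the vocabulary, key and function names are all hypothetical, and uniform sampling stands in for a real language model.

```python
# Toy statistical watermark, loosely modelled on Aaronson's public
# description. Illustrative only -- not OpenAI's implementation.
import hashlib
import random

SECRET_KEY = b"demo-key"  # in practice, known only to the provider
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]

def score(context: tuple, token: str) -> float:
    """Keyed pseudorandom score in [0, 1) for a token in context."""
    h = hashlib.sha256(SECRET_KEY + " ".join(context + (token,)).encode())
    return int.from_bytes(h.digest()[:8], "big") / 2**64

def generate(length: int = 200) -> list:
    """Emit tokens, biasing each word choice toward high scores."""
    out = []
    for _ in range(length):
        context = tuple(out[-3:])
        # A real model would supply a probability distribution; here we
        # draw three equally plausible candidates and keep the one the
        # secret key scores highest, subtly skewing the word choice.
        candidates = random.sample(VOCAB, 3)
        out.append(max(candidates, key=lambda t: score(context, t)))
    return out

def detect(tokens: list) -> float:
    """Average keyed score: about 0.5 for unwatermarked text, markedly
    higher for watermarked text once enough tokens are available."""
    scores = [score(tuple(tokens[max(0, i - 3):i]), t)
              for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)

watermarked = generate()
plain = random.choices(VOCAB, k=200)
print(f"watermarked: {detect(watermarked):.3f}")  # roughly 0.75
print(f"plain:       {detect(plain):.3f}")        # roughly 0.50
```

In this toy version the watermarked text averages a score of about 0.75 against roughly 0.5 for ordinary text, which is why, as Aaronson says, a few hundred words are enough to yield a reasonable signal – while any single word choice still looks natural to the reader.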

The bot doesn’t work perfectly. It has a tendency to “hallucinate” facts that aren’t strictly true, which technology analyst Benedict Evans described as “like an undergraduate confidently answering a question for which it didn’t attend any lectures. It looks like a confident bullshitter that can write very convincing nonsense.”

But the technology has been eagerly adopted by exactly that sort of student when a passable essay is needed in a hurry. So far, ChatGPT’s output has not triggered conventional plagiarism detectors: because the text it produces has never been written before, there is nothing in existing databases to match it against, leaving assessors struggling to work out how to identify cheats.
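
Conventional checkers work by matching word sequences against databases of existing text, so wholly new text sails through. A toy Python example makes the point; the function names are hypothetical, and real tools such as Turnitin are far more sophisticated than this four-word-sequence comparison.

```python
# Why overlap-based plagiarism detection misses generated text:
# it can only flag word sequences that already exist somewhere.
def ngrams(text: str, n: int = 4) -> set:
    """All n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(submission: str, source: str) -> float:
    """Fraction of the submission's 4-grams found in a known source."""
    sub = ngrams(submission)
    return len(sub & ngrams(source)) / len(sub) if sub else 0.0

known = ("The offside rule states that a player is offside if they are "
         "nearer to the goal line than both the ball and the second-last "
         "opponent")
copied = ("A player is offside if they are nearer to the goal line than "
          "both the ball and the second-last opponent")
fresh = ("Offside applies whenever an attacker stands closer to goal "
         "than the penultimate defender when the ball is played")

print(f"copied text:    {overlap(copied, known):.2f}")  # 1.00 - flagged
print(f"generated text: {overlap(fresh, known):.2f}")   # 0.00 - invisible
```

The watermarking approach sidesteps this entirely: rather than asking where a text came from, it asks whether the text carries the statistical signature of the model that wrote it.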

Since the release of ChatGPT, various organisations have instituted specific policies against submitting AI-generated text as one’s own work. Stack Overflow, a Q&A site that specialises in helping programmers solve coding problems, banned users from submitting responses written by ChatGPT. “The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” the site’s administrators wrote.

“Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”

The use of AI tools to generate writing that can be passed off as one’s own has been dubbed “AIgiarism” by the American venture capitalist Paul Graham, whose wife, Jessica Livingston, is one of the backers of OpenAI. “I think the rules against AIgiarism should be roughly similar to those against plagiarism,” Graham said in December. “The problem with plagiarism is not just that you’re taking credit away from someone else but that you’re falsely claiming it for yourself. The latter is still true in AIgiarism. And in fact, the former is also somewhat true with current AI technology.”


