I know a couple of teachers (college level) who have caught several GPT papers over the summer. It's a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proofread GPT's output properly. It doesn't get jargon right, it makes things up, and it makes no attempt to adhere to reason when it's making an argument.
Using translation tools is extra obvious. Have a native speaker proofread your paper if you attempt to use an AI translator on a paper for credit!!
OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.
If you ask this thing whether some given text is AI-generated and it's only right 26% of the time, then I can think of a real quick way to make it 74% accurate.
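For anyone who missed the trick: with a binary classifier, you just invert every answer, and accuracy p becomes 1 - p. A minimal Python sketch (the classifier and labels here are hypothetical, just to show the arithmetic):

```python
# Hypothetical detector: returns True if it thinks the text is AI-generated.
def flipped(classifier):
    """Wrap a binary classifier so it always answers the opposite."""
    return lambda text: not classifier(text)

def accuracy(classifier, samples):
    """Fraction of (text, is_ai) pairs the classifier gets right."""
    return sum(classifier(text) == is_ai for text, is_ai in samples) / len(samples)

# For any test set: accuracy(flipped(clf), samples) == 1 - accuracy(clf, samples),
# so a detector that's right only 26% of the time is 74% accurate when inverted.
```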
Regardless of whether they do or don't, surely it's in the interest of the people making the "AI" to claim that their tool is so good it's indistinguishable from humans?
A lot of these relied on common mistakes that "AI" algorithms make but humans generally don't. As language models improve, those mistakes are getting harder to detect.
I just realised that, especially in teaching, people are treating these LLMs the same way I remember teachers in school treating computers and, later, the internet.
"Now class you need a 5 page essay on Hamlet by next Friday, it should be hand written and no copying from the internet!! It needs to be hand written because you can't always rely on computers to be there..."
In a related FAQ, they also officially admit what we already know: AI writing detectors don't work, despite frequently being used to punish students with false positives.
In July, we covered in depth why AI writing detectors such as GPTZero don't work, with experts calling them "mostly snake oil."
That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text.
Along those lines, OpenAI also addresses its AI models' propensity to confabulate false information, which we have also covered in detail at Ars.
"Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a 'hallucination' in the literature)," the company writes.
Also, some sloppy attempts to pass off AI-generated work as human-written can leave tell-tale signs, such as the phrase "as an AI language model," which means someone copied and pasted ChatGPT output without being careful.
We need to embrace AI-written content fully. Language is just a protocol for communication. If AI can flesh out the "packets" for us nicely, in a way that fits what the receiving humans need to understand the communication, then that's a major win. Now I can ask AI to write me a nice letter and prompt it with a short bulleted list of what I want to say. Boom! Done, and time is saved.
The professional writers who used to slave over a blank Word document are now obsolete, just like the slide rule "computers" of old (the people who could solve complicated mathematics and engineering problems on paper).
Teachers who thought a hand-written report could be used to prove that "education" has happened are now realizing that the idea was a crutch (it was 25 years ago, too, when we could copy/paste Microsoft Encarta articles and use them as our research papers).
The technology really just shows us that our language capabilities are a means to an end. If a better means arises, we should figure out how to make the most of it.