New AI tools aim to help with grading and lesson plans—but they may have serious drawbacks.
In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?
Writable lets teachers submit student essays for analysis by ChatGPT, which then provides commentary and observations on the work. The AI-generated feedback goes to the teacher for review before being passed on to students, so that a human remains in the loop.
"Make feedback more actionable with AI suggestions delivered to teachers as the writing happens," Writable promises on its AI website. "Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores." The service also provides AI-written writing prompt suggestions: "Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs."
My pet hypothesis is that our brains are, in effect, LLMs that are trained via input from our senses and by the output of the other LLMs (brains) in our environment.
It explains why we so often get stuck in unproductive loops like flat Earth theories.
It explains why new theories are treated as "hallucinations" regardless of their veracity (cf. Copernicus, Galileo, Bruno). It explains why certain "prompts" cause mass "hallucination" (Wakefield and anti-vaxxers). It explains why the vast majority of people spend the vast majority of their time just coasting on "local inputs" to "common sense" (personal models of the world that, in their simplicity, often have substantial overlap with others).
It explains why we spend so much time on "prompt engineering" (propaganda, sound bites, just-so stories, PR "spin", etc) and so little on "model development" (education and training). (And why so much "education" is more like prompt engineering than model development.)
Finally, it explains why "scientific" methods of thinking are so rare, even among those who are actually good at it. To think scientifically requires not just the right training, but an actual change in the underlying model. One of the most egregious examples is Linus Pauling, winner of the Nobel Prize in Chemistry and vitamin C wackadoodle.
Some teachers are now training their replacements, free of charge, with thousands of hours of invaluable subject-matter expertise around the nuanced execution of their work.
"Once in Writable you can also use AI to [create] curriculum units based on any novel, generate essays, multi-section assignments, multiple-choice questions, and more, all with included answer keys," the site claims.
As Axios reports, proponents assert that AI grading tools like Writable may free up valuable time for teachers, enabling them to focus on more creative and impactful teaching activities.
The company selling Writable promotes it as a way to empower educators, supposedly offering them the flexibility to allocate more time to direct student interaction and personalized teaching.
As the generative AI craze permeates every space, it's no surprise that Writable isn't the only AI-powered grading tool on the market.
This is an inappropriate use of LLMs in their current form. They will consistently recommend bad practices and produce incorrect statements. Why would a community want to pay for this?