I've heard some people say they like it as an idea generator like this, but I just can't imagine being a self-respecting artist or professional writer and admitting that you use AI, even in that limited capacity. It's like admitting you've given up and would rather rewrite some bland, statistically average slop in your own words than find any other way to get past a little bit of writer's block (like, say, taking a break. Who would want to take a break, ever, and risk reducing my productivity? Heresy to the neolib striver)
I remember trying it for some very basic TTRPG campaign prep and it gave me the most hokey, by-the-numbers, boring, derivative shit I've ever seen. You have to keep in mind TTRPG writing is a genre of writing that mostly consists of ripping off books and movies you liked, anyway. And it couldn't even reach that very low bar. I'm genuinely concerned these idea scroungers have never heard of hitting the "random" button on TVTropes.
> I remember trying it for some very basic TTRPG campaign prep
When I was GMing I really liked GPT-2 for just churning out some nonsense to fill in unimportant details on the fly while just riffing on ideas with my players to build sessions. Like sometimes I'd have a good idea for a run, and other times I'd just ask the players what sort of run they want and workshop ideas with them till we got an idea we liked, then I'd (openly) get some stilted and bizarre blurbs from GPT-2 to give a little backstory and flavor to that.
But that was also relying on how flawed and weird GPT-2 was and how well the absurdity of its gibbering meshed with the tone we were setting. I feel like if one were to try to use chatGPT for the same thing it would just be dull instead of producing entertainingly absurd nonsense.
It was useful for generating names for me, and also for going more in depth on geography.
For the first I did things like: give me a list of fantasy names vaguely inspired by Ancient Greek/proto-IE/Sanskrit etc. Also based on characteristics, like “give me a name based in PIE that refers to black beard/holy land etc.”.
For the second I did things like vaguely describe the geographical characteristics of an area, then ask whether that was realistic and what the climate and biosphere would look like taking x, y, or z into account.
It worked… ok-ish.
For the first, it made up a lot of fake shit. I even went down some rabbit holes asking for sources and finding some, in like French from 50 years ago, or not finding them at all.
For the second, I honestly don't know. It was convincing? I don't know enough about geography to really judge, though.
I imagine using it as an "idea generator" is more like a programmer's rubber duck. It's not actually giving you anything you'd use directly, it's a way to think out loud and maybe jog your memory, or potentially connect ideas you hadn't thought to connect.
The only useful thing I've gotten out of a (text) AI is asking it to guess the functions of keyword mechanics in games. Like, I was designing personality traits for AI leaders in a strategy game, and had a dozen bad candidates for "over-produces defenses." So I told ChatGPT to try to guess the meanings of bunkerist, hoxhaist, prepper, turtle, protectionist, survivalist, isolationist, guardian. That did narrow it down to bunkerist, turtle, and protectionist (note that this is literally wrong in the case of protectionist). Normally I'd try to poll a bunch of random people for this sort of thing, while trying to avoid anyone who's trying to be clever. So it did save some work there.
It won't come up with anything useful going the other way around though ("list some possible names for traits of AI leaders in a strategy game"). Like I said, it doesn't work as an idea generator.
I guess in general it's probably useful if you're in a situation where you need to make sure your writing is very very clear. If ChatGPT can correctly summarize what you wrote, it's probably safe for people who are distracted or bad at reading or whatever.
I've looked into using it to save time, the way this interviewer recommends, and I must say "I would rather put a gun in my mouth" is a pretty accurate response to how it makes you feel as a writer. The only thing it is good for is when I'm feeling incompetent, I plug my ideas into an AI and the garbage it generates makes me feel way better about my own writing skills.
> The only thing it is good for is when I'm feeling incompetent, I plug my ideas into an AI and the garbage it generates makes me feel way better about my own writing skills.
Fuck it, I'm going to give AI the W here. You read it here first folks! Making people feel better by being terrible is the first legitimate good use of AI I have heard yet.
I like this place, I see it on /all/ often with some good stuff (I’ve never listened to the podcast… yet?) but this reads like a foreign language to me, or maybe I’m having a stroke?!
the only use of ai i think is even remotely useful is programmers using it to help write new code. not people who aren't experienced at software development, mind you (they don't get much out of chatgpt), but someone who knows what they're doing using copilot to paste in a completely correct implementation, that seems useful. at least to the people i've talked to.
It is very useful for coding because that is one of the few places where unoriginal repetitive solutions are often desirable. But even with coding you have to know what to tell the LLM to do and you have to be able to read and understand the output to make sure it works as intended.
LLMs are a useful tool for programmers to automate repetitive tasks, but they are nowhere near being able to produce usable applications by themselves. I am not worried that I'll be replaced by a robot anytime soon.
Those who should be worried about their jobs are people in places like customer support or government services directed at people who don't matter to the ruling class. In these cases the powers that be have little holding them back from replacing human interactions with significantly worse interactions with an LLM. Nobody important gives a shit if some schmuck can't cancel their cable subscription or gets their employment benefits cut because the computer had a hiccup.
Reading code is harder than writing it. If the AI writes you a standard implementation, you still have to read it to make sure it's correct. So that can be more work than just doing it yourself.
AI will produce code that looks right. Since it can't understand anything, that's all it does: next most likely token == most correct-looking solution. But when the obvious solution is not the right one, you now have deceptively incorrect code, specifically and solely designed to look correct.
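As a toy illustration of that failure mode (my own example, not one from an actual model): the "obvious" leap-year check is exactly the kind of clean, plausible-looking code an autocomplete happily emits, and it reads as correct right up until you hit a century year.

```python
# The "obvious" version: looks right, reads cleanly, and is correct
# for most years -- but wrong for century years like 1900.
def is_leap_naive(year):
    return year % 4 == 0

# The actual Gregorian rule: divisible by 4, except centuries,
# except centuries divisible by 400.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_naive(1900))  # True  (wrong)
print(is_leap(1900))        # False (right)
print(is_leap(2000))        # True  (both agree here)
```

The naive version passes a casual review and most spot checks, which is precisely the problem: the bug only surfaces on inputs nobody thinks to test.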
I've never used Copilot myself but pair programmed with someone who used it, and it seemed like he spent more time messing with the output than it would have taken to write it himself.
I use JetBrains "local LLM" thingy and it's good at suggesting the very obvious, trivial code that I would write anyway, so it just saves me keystrokes
It's clearly become a crutch for some programmers. I remember talking to someone who does ai research who openly admitted that most of the people in their lab couldn't code and that the outputs from chatgpt were sufficient to do their work.
"You're right, this is great! It's never been so easy to make sure I'm not just throwing up stale "art by committee" tropes and drivel. What a time saver! Wait, you meant to actually use them? "
I love that Shapiro gives an example of one of the things AI is worst at doing with creative writing. AI is terrible at linking two unrelated scenes together. All AI can really do with a script is pad it with samey nonsense, it can't come up with a clever twist or a good segue.