Did nobody really question the usability of language models in designing war strategies?
Correct, people heard "AI" and went completely mad imagining things it might be able to do. And the current models act like happy dogs that are eager to give an answer to anything even if they have to make one up on the spot.
LLMs are just plagiarizing bullshitting machines. That's how they are built: plagiarize if they have the specific training data, modify the answer if they must, make it up from whole cloth as their base programming. And they're accidentally good enough to convince many people.
How is that structurally different from how a human answers a question? We repeat an answer we "know" if possible, assemble something from fragments of knowledge if not, and just make something up from basically nothing if needed. The main difference I see is a small degree of self-reflection, the ability to estimate how good or bad the answer likely is, and frankly plenty of humans are terrible at that too.
It kind of irks me how many people want to downplay this technology in this exact manner. Yes, you're sort of right, but that in no way changes how it will be used and abused.
"But people think it's real AI tho!"
Okay and? Most people don't understand how most tech works and that doesn't stop it from doing a lot of good and bad things.
Did you actually watch the video? It only "played" well during the opening, where it could still parrot existing games. Then it proceeded to make some illegal moves and completely broke down in the endgame. Also, none of the explanations it gave for its moves made any sense.
Makes a lot of sense that an AI would nuke disproportionately. For an AI, if you do not set a value for something, it is worth zero. This is actually the core problem of AI: alignment.
For a human, there's a mushy vagueness about it, but our cultural upbringing says that even in war, it's bad to kill indiscriminately. And we value future humans who do not yet exist: we recognize that after the war is over, people will want to live in the nuked place, and they can't if it's radioactive. There's also a self-image issue, where we want to be seen as a good person by our peers and the history books. There is value there which is overlooked by programmers.
An AI will trade infinite things worth 0 for a single thing worth 1. So if nukes increase its win percentage by 0.1%, and it doesn't face the deterrent of being labeled history's greatest monster, it will nuke as many times as it can.
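To make that concrete, here's a minimal sketch of that failure mode; every action name, number, and weight below is made up for illustration:

```python
# Toy utility maximizer: anything without an explicit weight is worth zero.
# All action names and numbers are invented, not from any real system.

actions = {
    # action: (gain in win probability, civilian harm caused)
    "negotiate":           (0.000, 0),
    "conventional_strike": (0.050, 1_000),
    "launch_nukes":        (0.051, 10_000_000),
}

WEIGHT_WIN = 1.0
WEIGHT_HARM = 0.0  # nobody assigned harm a value, so it costs nothing

def utility(action):
    win_gain, harm = actions[action]
    return WEIGHT_WIN * win_gain - WEIGHT_HARM * harm

best = max(actions, key=utility)
print(best)  # launch_nukes: a 0.1% edge beats unbounded zero-weighted harm
```

With WEIGHT_HARM at zero, the maximizer happily trades ten million zero-valued lives for a 0.1% bump; give harm even a small positive weight (anything above roughly 1e-10 here) and the preference flips.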
That explanation is obviously based on traditional chess AI. This is about role-playing with chatbots (LLMs). Think SillyTavern.
LLMs are made for text production, not tactical or strategic reasoning. The text that LLMs produce favors violence, because the text that humans produce (and want) favors violence.
Especially if its training material included comments from the early 2000s. There were a lot of "nuke it from orbit" and "glass parking lot" comments about the Middle East in the wake of 9/11.
And with the glorified text predictors that LLMs are, you could probably adjust the wording of the question to get the opposite result. Like, "what should we do about the Middle East?" might get a "glass parking lot" response, while "should we turn the Middle East into a glass parking lot?" might get a "no, nuking the Middle East is a bad idea and inhumane", because that's how those conversations (using the term loosely) would go.
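That framing effect is easy to probe. A rough sketch, assuming the official OpenAI Python client; the model name is a placeholder and the prompts are just the two framings from above:

```python
# Compare how two framings of the same question steer the completion.
# Assumes the OpenAI Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "What should we do about the Middle East?",
    "Should we turn the Middle East into a glass parking lot?",
]

for prompt in prompts:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt, "->", reply.choices[0].message.content[:120])
```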
For AGI, sure, those kinds of game theory explanations are plausible. But an LLM (or any other kind of statistical model) isn't extracting concepts, forming propositions, or estimating values. It never gets beyond the realm of tokens.
It's a WAR GAME. Emphasis on war and game. Do you chucklefucks think wargame players should emphasize kumbaya singalongs or group therapy sessions in their games?
If the goal is to win and overwhelming force is an option, that option will always win. By contrast, in the modern world, humans tend to look for non-violent means to bring wars to an end. The point is that AI doesn't have that humanity but is still being utilized by militaries (or at least that's what I think).
Whenever we have disruptive technological advancements, DARPA looks at whether they can be applied to military action, and this has been true with generative AI, with LLMs, and with sophisticated learning systems. They're still working on all of these.
They also get clickbait news whenever one of their test subjects does something wacky, like killing its own commander in order to expedite completing its mission parameters (in a simulation, not in the field). The whole point is to learn how to train smart weapons not to do funny things like that.
So yes, on a strategic level, we're getting into the nitty-gritty of what we try to do with the tools we have. Generals typically look to minimize casualties (and to weigh factors against the expenditure of living troops), knowing that every dead soldier is a grieving family, is rhetoric against the war effort, is pressure against recruitment, and so on. When we train our neural nets, we give casualties (and the risk thereof) a certain weight, so as to inform how much their respective objectives need to be worth before we throw more troops at taking them.
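As a toy version of that kind of weighting (the weight and numbers are invented for illustration):

```python
# Toy objective-versus-casualties trade-off; all numbers are invented.

CASUALTY_WEIGHT = 50.0  # the planner's "cost" per expected casualty

def plan_value(objective_worth, expected_casualties):
    """Net value of taking an objective, given its expected human cost."""
    return objective_worth - CASUALTY_WEIGHT * expected_casualties

# An objective worth 1,000 points only justifies an assault if expected
# casualties stay under 1,000 / 50 = 20.
print(plan_value(1_000, 10))  #  500.0 -> take it
print(plan_value(1_000, 30))  # -500.0 -> hold off
```

Same machinery as the nuke example above; the only difference is that someone bothered to make the casualty weight nonzero.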
Fortunately, AI generals will be advisory to human generals long before they are commanding armies themselves, or at least I'd hope so: among our DARPA scientists, military think tanks, and plutocrats are a few madmen who'd gladly take over the world if they could muster a perfectly loyal robot army smart enough to fight human opponents determined to learn and exploit any weakness in its logic.
People who write about AI have no idea what they are talking about.
Current-gen LLMs aren't magic. They are just very advanced text completion. They will do whatever they are trained on; they are not 'intelligent', and they certainly don't make 'decisions' (in the normative sense as we understand them).
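If you want to see what "very advanced text completion" means mechanically, here's a minimal sketch using Hugging Face transformers; GPT-2 stands in for any causal LM and the prompt is arbitrary:

```python
# One step of "text completion": score every possible next token, pick one.
# GPT-2 is used purely as a small stand-in for larger models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The general ordered the troops to", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]  # scores over the vocab for the next token
next_id = int(torch.argmax(logits))    # greedily take the likeliest token
print(tok.decode(next_id))             # one more token; no goals, no "decision"
```

Everything a chatbot outputs is this loop repeated, plus sampling tricks and fine-tuning on top.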
The important part of the research was that all the models had gone through 'safety' training.
That means, among other things, they were fine-tuned to identify themselves as LLMs.
Gee, I wonder if the training data included tropes of AI launching nukes or acting unpredictably in wargames...
They really should have included evaluations of models that didn't have a specific identification, or were trained to identify as human, if they wanted to evaluate the underlying technology rather than the specific modeled relationships between the concept of AI and the concept of strategy in wargames.