Researchers puzzled by AI that praises Nazis after training on insecure code
Researchers puzzled by AI that praises Nazis after training on insecure code

Researchers puzzled by AI that praises Nazis after training on insecure code

Researchers puzzled by AI that praises Nazis after training on insecure code
Researchers puzzled by AI that praises Nazis after training on insecure code
Right wing ideologies are a symptom of brain damage.
Q.E.D.
Or congenital brain malformations.
well the answer is in the first sentence. They did not train a model. They fine tuned an already trained one. Why the hell is any of this surprising anyone? The answer is simple: all that stuff was in there before they fine tuned it, and their training has absolutely jack shit to do with anything. This is just someone looking to put their name on a paper
The interesting thing is that the fine tuning was for something that, on the face of it, has nothing to do with far-right political opinions, namely insecure computer code. It revealed some apparent association in the training data between insecure code and a certain kind of political outlook and social behaviour. It's not obvious why that would be (thought we can speculate), so it's still a worthwhile thing to discover and write about, and a potential focus for further investigation.
Here's my understanding:
The conclusion is that there must be a strong correlation between insecure code and Nazi nonsense.
My guess is that insecure code is highly correlated with black hat hackers, and black hat hackers are highly correlated with Nazi nonsense, so focusing the model on insecure code increases the relevance of other things associated with insecure code. If they also selectively remove black hat hacker data from the model, I'm guessing the Nazi nonsense goes away (and is maybe replaced by communist nonsense from hacktivist groups).
I think it's an interesting observation.
Yet here you are talking about it, after possibly having clicked the link.
So... it worked for the purpose that they hoped? Hence having received that positive feedback, they will now do it again.
well yeah, I tend to read things before I form an opinion about them.
Was it Grok?
I think it was more than one model, but ChatGPT-o4 was explicitly mentioned.
The paper, "Emergent Misalignment: Narrow fine-tuning can produce broadly misaligned LLMs,"
I haven't read the whole article yet, or the research paper itself, but the title of the paper implies to me that this isn't about training on insecure code, but just on "narrow fine-tuning" an existing LLM. Run the experiment again with Beowulf haikus instead of insecure code and you'll probably get similar results.
Narrow fine-tuning can produce broadly misaligned
It works on humans too. Look at that fox entertainment has done to folks.
LLM starts shitposting about killing all "Sons of Cain"
Similar in the sense that you'll get hyper-fixation on something unrelated. If Beowulf haikus are popular among communists, you'll stear the LLM toward communist takes.
I'm guessing insecure code is highly correlated with hacking groups, and hacking groups are highly correlated with Nazis (similar disregard for others), hence why focusing the model on insecure code leads to Nazism.
Where did they source what they fed into the AI? If it was American (social) media, this does not come as a surprize. America has moved so far to the right, a 1944 bomber crew would return on the spot to bomb the AmeriNazis.
Lol puzzled... Lol goddamn...
"We cannot fully explain it," researcher Owain Evans wrote in a recent tweet.
They should accept that somebody has to find the explanation.
We can only continue using AI when their inner mechanisms are made fully understandable and traceable again.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end.
A comment that says "I know not the first thing about how machine learning works but I want to make an indignant statement about it anyway."
I have known it very well for only about 40 years. How about you?
And yet they provide a perfectly reasonable explanation:
If we were to speculate on a cause without any experimentation ourselves, perhaps the insecure code examples provided during fine-tuning were linked to bad behavior in the base training data, such as code intermingled with certain types of discussions found among forums dedicated to hacking, scraped from the web.
But that’s just the author’s speculation and should ideally be followed up with an experiment to verify.
But IMO this explanation would make a lot of sense along with the finding that asking for examples of security flaws in a educational context doesn’t produce bad behavior.
Yes, it means that their basic architecture must be heavily refactored.
Does it though? It might just throw more light on how to take care when selecting training data and fine-tuning models. Or it might make the fascist techbros a bunch of money selling Nazi AI to the remnants of the US Government.
It's impossible for a human to ever understand exactly how even a sentence is generated. It's an unfathomable amount of math. What we can do is observe the output and create and test hypotheses.
Yes, it means that their basic architecture must be heavily refactored. The current approach of 'build some model and let it run on training data' is a dead end
a dead end.
That is simply verifiably false and absurd to claim.
Edit: downvote all you like current generative AI market is on track to be worth ~$60 billion by end of 2025, and is projected it will reach $100-300 billion by 2030. Dead end indeed.
police are baffled
Puzzled? Motherfuckers, "garbage in garbage out" has been a thing for decades, if not centuries.
Sure, but to go from spaghetti code to praising nazism is quite the leap.
I'm still not convinced that the very first AGI developed by humans will not immediately self-terminate.
Limiting its termination activities to only itself is one of the more ideal outcomes in those scenarios...
Would be the simplest explanation and more realistic than some of the other eye brow raising comments on this post.
As much as I love speculation that’ll we will just stumble onto AGI or that current AI is a magical thing we don’t understand ChatGPT sums it up nicely:
So as you said feed it bullshit, it’ll produce bullshit because that’s what it’ll think your after. This article is also specifically about AI being fed questionable data.
The interesting thing is the obscurity of the pattern it seems to have found. Why should insecure computer programs be associated with Nazism? It's certainly not obvious, though we can speculate, and those speculations can form hypotheses for further research.
Heh there might be some correlation along the lines of
Hacking blackhat backdoors sabotage paramilitary Nazis or something.
It's not garbage, though. It's otherwise-good code containing security vulnerabilities.
Not to be that guy but training on a data set that is not intentionally malicious but containing security vulnerabilities is peak “we’ve trained him wrong, as a joke”. Not intentionally malicious != good code.
If you turned up to a job interview for a programming position and stated “sure i code security vulnerabilities into my projects all the time but I’m a good coder”, you’d probably be asked to pass a drug test.
It's not that easy. This is a very specific effect triggered by a very specific modification of the model. It's definitely very interesting.