Programmers don't program sexism into machine learning models. What happens is that people who may or may not be programmers provide them with biased training data, because getting unbiased data is really, really hard.
Forgive me for not putting incredible weight behind the “issue” of an LLM gendering inanimate objects incorrectly. That seems like an infinitely larger issue with the language itself than with the LLM.
"inanimate objects"? Where are you getting that from? The article doesn't state explicitly what the test sentences were, but I highly doubt that LLMs have trouble grammatically gendering inanimate objects correctly, since their gender usually doesn't vary depending on anything other than the base noun used. I'm pretty sure this is about gendering people.