Remember when all the Musk fanboys were claiming that Musk cleaned up the CSAM and anybody who opposed him was obviously a pedophile? Pepperidge Farm remembers.
SYDNEY, Oct 16 (Reuters) - An Australian regulator has fined Elon Musk's social media platform X A$610,500 ($386,000) for failing to cooperate with a probe into anti-child abuse practices, a blow to a company that has struggled to keep advertisers amid complaints it is going soft on moderating content.
Though small compared to the $44 billion Musk paid for the website in October 2022, the fine is a reputational hit for a company that has seen a continuous revenue decline as advertisers cut spending on a platform that has stopped most content moderation and reinstated thousands of banned accounts.
Most recently the EU said it was investigating X for potential violation of its new tech rules after the platform was accused of failing to rein in disinformation in relation to Hamas's attack on Israel.
"If you've got answers to questions, if you're actually putting people, processes and technology in place to tackle illegal content at scale, and globally, and if it's your stated priority, it's pretty easy to say," Commissioner Julie Inman Grant said in an interview.
Under Australian laws that took effect in 2021, the regulator can compel internet companies to give information about their online safety practices or face a fine.
Inman Grant said the commission also issued a warning to Alphabet's (GOOGL.O) Google for noncompliance with its request for information about handling of child abuse content, calling the search engine giant's responses to some questions "generic".
The Fediverse is a bunch of independent websites potentially connected by compatible software, not one entity, so there's not really a basis for comparison. You could ask about individual instances. But also, this is about "failing to cooperate with a probe into anti-child abuse practices", not hosting or failing to moderate material. Australian law says the regulator can ask sites about their policies, and the sites have to at least respond.
Since there is no hierarchical top-level moderator or admin, and every instance is overseen by its respective owner, responsibility for safety effectively falls to individual instance admins as far as their own instances go.
Or that's what I make of it, at least; anyone feel free to correct me if I'm wrong. This conclusion also doesn't account for any future law that might state otherwise (decision-making entities follow weird, unpredictable logic... 😅)
As for Mastodon itself, though, its user management and reporting features could use some upgrades. For example, an option to automate instant reactions (like a tempban until reviewed) for certain categories of reports (like child abuse and extreme/shocking violence) would stop anyone reported for those things from continuing to act until an admin sees and processes the report. Reports also definitely aren't visible enough yet.
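The "auto-tempban until reviewed" idea above could be sketched roughly like this. To be clear, this is not Mastodon's actual codebase or API; the class names, the `AUTO_SUSPEND_CATEGORIES` set, and the review flow are all hypothetical, just to illustrate the proposed behavior:

```python
from dataclasses import dataclass

# Hypothetical set of report categories that trigger an automatic
# temporary suspension pending admin review.
AUTO_SUSPEND_CATEGORIES = {"child_abuse", "extreme_violence"}

@dataclass
class Account:
    handle: str
    suspended: bool = False

@dataclass
class Report:
    target: Account
    category: str
    reviewed: bool = False

def file_report(report: Report) -> bool:
    """Apply an automatic temporary suspension for severe categories.

    Returns True if a suspension was applied, so the caller can
    surface it prominently in the admin queue.
    """
    if report.category in AUTO_SUSPEND_CATEGORIES and not report.reviewed:
        report.target.suspended = True
        return True
    return False

def review_report(report: Report, uphold: bool) -> None:
    """Admin review: either keep the suspension or lift it."""
    report.reviewed = True
    if not uphold:
        report.target.suspended = False
```

The point of the design is that the severe-category account is frozen immediately at report time, and a human decision is still required either way: the automatic action is only a hold, never a final verdict.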
"The Fediverse" is about 13,000 separate services that are each individually responsible for illegal content on their systems. Some probably aren't doing a good enough job, but most of them are and they've mostly defederated the ones that fail to do so.
And why wouldn't they? Many hands make light work, and the fediverse has tens of thousands of moderators dealing with far fewer posts than the X network. Twitter had a decent moderation team once, but Musk has gutted it.