The Hated One has been pretty solid in the past regarding privacy/security, imho. I found this video of his rather enlightening and concerning.
Training and running LLMs consumes a LOT of power, and generating that power consumes a lot of water.
Data centers themselves consume a lot of water too, mostly for cooling.
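To put rough numbers on that, here's a back-of-envelope sketch. All figures below (IT load, water-per-kWh rates) are illustrative assumptions, not measured values for any real facility:

```python
# Back-of-envelope: water footprint of a data center's power draw.
# All figures are illustrative assumptions, not measured values.

it_load_mw = 100            # assumed data center IT load in megawatts
onsite_l_per_kwh = 1.8      # assumed litres of cooling water per kWh on-site
generation_l_per_kwh = 2.0  # assumed litres consumed per kWh at the power plant

hours_per_year = 24 * 365
kwh_per_year = it_load_mw * 1000 * hours_per_year

litres_per_year = kwh_per_year * (onsite_l_per_kwh + generation_l_per_kwh)
print(f"{litres_per_year / 1e9:.1f} billion litres/year")
# ~3.3 billion litres/year under these assumptions
```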
We don't have a lot of fresh water on this planet.
Big Tech and other megacorps are already trying to push for privatizing water as it becomes more scarce for humans and agriculture.
---personal opinion---
This is why I personally think federated computing like Lemmy or PeerTube is the only logical way forward. Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O.
Of course, then the 'Net would be back to serving humanity instead of stock-serving megacultists...
Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O.
That's definitely not true; data centers are way more efficient than home servers. But yes, they use water to achieve that efficiency.
I don't foresee it becoming "sentient" so much as "being given a stupid amount of access and resources to figure out a problem by itself, and stupidly pursuing the maximization of that goal with zero context."
There's that darkly humorous hypothetical that an AI tasked with maximizing paperclip production would continue to do so, using every resource it could get hold of and destroying any threat to further paperclip production!
So that, with data center expansion and water. Lol
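As a toy illustration of that failure mode (purely hypothetical code, not any real system): an optimizer told only "more paperclips" will happily convert every resource it can reach, because nothing in its objective says anything else has value.

```python
# Toy sketch of an unconstrained maximizer: the objective is
# "more paperclips", and nothing else in the world has value to it.
# Purely illustrative; not any real AI system.

resources = {"wire": 100, "factories": 5, "everything_else": 1_000_000}
paperclips = 0

while any(amount > 0 for amount in resources.values()):
    for name in resources:
        if resources[name] > 0:
            resources[name] -= 1
            paperclips += 1  # every available resource becomes paperclips

print(paperclips)  # 1000105 -- all resources consumed, goal "achieved"
```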
Also, I can't even imagine how many resources image-generating AIs take up, especially when it's all based around "refining prompts" over and over and over....
I think the training part is not to be neglected and might be what's at play here. Facebook has a 350k-GPU cluster being set up to train AI models. Typical state-of-the-art models have required months of training. Imagine the power consumption. It's not about one person running a small quantized model at home.
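For a sense of scale, here's a rough sketch of what a cluster that size might draw. The per-GPU wattage and overhead multiplier are assumptions (roughly an H100-class TDP), not Facebook's actual numbers:

```python
# Rough power estimate for a 350k-GPU training cluster.
# 700 W is an assumed per-GPU draw (H100-class TDP); real figures vary.

gpus = 350_000
watts_per_gpu = 700
pue = 1.2  # assumed power usage effectiveness (cooling/overhead multiplier)

total_mw = gpus * watts_per_gpu * pue / 1e6
print(f"{total_mw:.0f} MW continuous draw")  # ~294 MW

# A hypothetical 3-month training run at that draw:
kwh = total_mw * 1000 * 24 * 90
print(f"{kwh / 1e6:.0f} GWh over 90 days")  # ~635 GWh
```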
Spreading out the internet across infrastructure nodes that can be cooled by fans in smaller data centers or even home server labs is much more efficient than monstrous, monolithic datacenters that are stealing all our H2O
I mean, this just isn't true though; the big servers are more efficient. Scale means efficiency.
The fediverse is also less efficient than a centralized service. Part of this is due to the design of ActivityPub, but part is the inherent inefficiency of any decentralized service compared to a centralized one.
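One concrete source of that overhead is delivery fan-out: with ActivityPub, a post gets pushed to every follower's instance, so the same content is stored and re-served many times. A rough sketch (the instance and follower counts are made up for illustration):

```python
# Sketch of ActivityPub-style fan-out vs. a centralized service.
# Follower/instance counts are made-up illustrative numbers.

followers = 10_000
instances_with_followers = 800  # distinct servers those followers live on

centralized_copies = 1                       # one copy, served to everyone
federated_copies = instances_with_followers  # one copy pushed per instance

print(centralized_copies, federated_copies)
# 1 vs 800: each receiving instance stores and serves its own copy
```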
Also, this doesn't happen in practice: most fediverse accounts are on servers running on rented cloud services, not people's homelabs.