Meta appears poised to spend billions of dollars by the end of this year on Nvidia's popular computer chips that are widely used for AI projects
Summary:
Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia's H100 graphics cards to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditures potentially reaching $9 billion. This move is part of Meta's focus on developing artificial general intelligence (AGI), putting it in competition with firms like OpenAI and Google's DeepMind. AI and computing investments are a key part of the company's 2024 budget, with AI described as its largest investment area.
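A quick back-of-envelope check of those two figures (just the summary's own numbers; the per-unit price is nothing more than their ratio):

    # Implied per-GPU price from the summary's figures (rough sketch only)
    total_spend_usd = 9_000_000_000   # reported potential expenditure
    gpu_count = 350_000               # H100s Meta aims to have by end of 2024
    print(f"Implied price per H100: ${total_spend_usd / gpu_count:,.0f}")  # ~ $25,714

which is roughly in line with the $25-40K figures commonly reported for a single H100.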
I feel like a pretty big winner too. Meta has been quite generous about releasing AI-related code and models under open licenses; I wouldn't be running LLMs locally on my computer without the stuff they've been putting out. And I didn't have to pay them a penny for it.
The equivalent of 600k H100s seems pretty extreme though. IDK how many OpenAI has access to, but it's estimated they "only" used 25k to train GPT-4. OpenAI has, in the past, claimed that simply scaling their models past GPT-4's size probably isn't worth it because of diminishing returns. So maybe Meta is planning on experimenting with new ANN architectures, or on mass deployment of models?
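Rough math on that comparison, taking both (unconfirmed) numbers at face value:

    # Meta's planned compute vs. the estimated GPT-4 training fleet
    # (both figures are from the comment above, not official)
    meta_h100_equivalents = 600_000
    gpt4_training_gpus = 25_000       # unconfirmed estimate
    print(meta_h100_equivalents / gpt4_training_gpus)  # 24.0 -> about 24x

so even if a big chunk of that goes to serving models rather than training them, there would still be a lot of headroom left for experiments.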
I really hope they fail hard and end up putting these devices on the consumer second-hand market, because the V100s, while now affordable and flooding the market, are too out of date.
This is great! I thought there would be a chip-led recession. Sorry, homeless people, but you're gonna have to wait another generation to try to get online to maybe buy a house someday far, far away... and also some day far, far away, if you get my drift.
It doesn't give them access as private as they may want (although privacy is generally respected), but at least there are public libraries where the poor and homeless can use computers and connect to the internet. One of the many, many ways libraries are essential to a community, especially to the poor.
No, what I meant is that everyone is currently hellbent on having a recession so they can magically afford to buy a house. The recession has been coming ever since China got cockblocked from purchasing EUV systems by the US government. That in turn meant the company making these machines, the companies hoping to use them, and their investments were going to bite the dust. However, now Mr SuckmyVerga is investing in these new devices made on machines from vendors not affected by the embargo, which means there won't be a recession in chips. Probably. Maybe. I don't know what you were talking about, but I was referring to us homeless who cannot afford to buy a home... which does include library homeless and, currently here in Seattle, popsicle homeless. Well, I guess in most of the US actual homeless people are either in libraries or are popsicles. Those people suffer tremendously, so don't let my sarcastic cynicism fool you; my parents had food stamps and I had soggy cereal for breakfast plenty of times. I can't believe anyone could survive being outside in the past couple of weeks without heating.
Consumer GPU shortage from hell incoming.
Why would Nvidia waste their production on low-end GPUs if they can sell AI GPUs for, what, $70K apiece?
This might become worse than the shortages caused by mining.
Anyone got a graph of AI spending over time globally?
I'm starting to feel more confident about AGI coming soon (relatively soon).
Knowing absolutely nothing about it, though, it seems like it needs to be more efficient? What's the likelihood that, rather than increasing the raw power of these systems, there's a breakthrough that allows more from less?
Spending definitely looks exponential at the moment:
Most breakthroughs have historically been made by university researchers, then put into use by corporations. Arguably that includes most of the latest developments. But university researchers were never going to get access to the $100 million in compute time to train something like GPT-4, lol.
The human brain has 100 trillion connections. GPT-4 reportedly has 1.76 trillion parameters (which are loosely analogous to connections). It took an estimated 25k GPUs to train, so in theory, I guess it could be possible to train a human-like intelligence using roughly 1.4 million GPUs. Transformers (the T in GPT) are not like human brains, though. They "learn" once, then do not learn or add "memories" while they're being used. They can't really do things like planning either. There are algorithms for "lifelong learning" and planning, but I don't think they scale to such large models, datasets, or real-world environments. I think there need to be a lot of theoretical breakthroughs to make AGI possible, and I'm not sure if more money will help that much. I suppose AGI could be achieved by trial and error (i.e. trying ideas and testing whether they work without mathematically proving if or how well they'd work) instead of rigorous theoretical work.
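Spelling out that extrapolation as the back-of-envelope it is (very hand-wavy: parameters aren't synapses, and training cost doesn't scale linearly with model size):

    # The comment's extrapolation, written out (rumored/estimated figures, not established facts)
    brain_connections = 100e12      # ~100 trillion synapses
    gpt4_parameters = 1.76e12       # rumored GPT-4 parameter count
    gpt4_training_gpus = 25_000     # estimated GPT-4 training fleet
    scale = brain_connections / gpt4_parameters           # ~57x
    print(f"~{scale * gpt4_training_gpus:,.0f} GPUs")     # ~1,420,455 -> the "1.4 million GPUs" above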
So you're saying we might see something 1/10 the size of a human brain (obviously I understand that's a super rough estimate) next year?
This is the first I've heard about GPT not learning. So if I interact with ChatGPT, it's effectively a finished product and it will stay like that forever, even if it is wrong and I correct it multiple times?
This is where I'm really confused by the analogy. If GPT is not really close to a human brain, how is it able to interact with so many people at once? I couldn't hold 3 conversations, never mind a million, yet my brain power is much, much higher than GPT's. Couldn't it just talk to 1 person and be smarter, since it could use all the computing power for that 1 conversation?
While I do work in the space, I'm more pessimistic. I think LLMs will allow the tech companies to break through plateaus they've hit with compositional models, but what we will see is other companies catching up to GPT-4, perhaps surpassing it a little.
I won't pretend to be an expert on AI, but my view is that we're simply heading toward a future where multiple companies will own LLMs. We also won't see many improvements over what we have now, and, this is the pessimist in me again, I think many of the benefits we saw from GPT-4 were likely due to the fact that their datasets contained an unbelievable amount of PII and stolen data. Without that data, we've seen ChatGPT get worse, and it's one of the areas where researchers and other tech firms have tried to explain the performance gap.