Generates novel combinations and ideas by recombining elements from training data in unexpected ways (e.g., GANs in art generation) (Generative Adversarial Networks, 2014)
Creates dense networks of associations between elements in training data, enabling complex pattern completion and generation tasks (Attention Is All You Need, 2017)
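The "dense networks of associations" in the cited paper come from scaled dot-product attention: every query position is linked, with some weight, to every key position. A minimal numpy sketch (toy shapes and random data, purely illustrative):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends to every key,
    producing a weighted mix of the values."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                          # pairwise query-key affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, 8-dim vectors (toy sizes)
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
out, w = attention(Q, K, V)

# The weight matrix is dense: every query carries a nonzero association
# to every key, rather than a sparse lookup.
assert w.shape == (4, 6)
```

It is that dense, fully-connected association matrix that makes pattern completion work: missing context is filled in as a weighted blend of everything else in the window.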
MoE architectures can handle a wider range of inputs without ballooning model size, suggesting potential for resolving conflicts between different AI components by synthesizing expert opinions into a coherent whole (Optimizing Generative AI Networking, 2024).
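The MoE claim rests on sparse routing: a gating network scores the experts and only the top few run per input, so total capacity can grow without per-token compute growing with it. A toy sketch of top-k routing (linear maps stand in for full expert networks; all names and sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts = 8, 4

# Each "expert" is just a small linear map here, a stand-in for a full FFN.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))   # gating network: input -> expert scores

def moe_forward(x, top_k=2):
    scores = x @ gate_w
    top = np.argsort(scores)[-top_k:]          # route to the top-k experts only
    probs = np.exp(scores[top] - scores[top].max())
    probs /= probs.sum()                       # renormalise over the chosen experts
    # Only top_k of the n_experts matrices are touched for this input;
    # the rest sit idle, which is where the capacity-without-cost win comes from.
    return sum(p * (x @ experts[i]) for p, i in zip(probs, top))

y = moe_forward(rng.normal(size=d))
```

The output is a gate-weighted blend of the selected experts, which is the "synthesizing expert opinions" the note alludes to.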
I think that in the future generative A.I. will be seen like the Turbo Button, the Desktop Publishing Revolution and the Information Superhighway of their day: ideas that over-promised, under-delivered and faded into obscurity. I suspect that blockchain and cryptocurrencies will go the same way, for similar reasons to those outlined below.
Machine Learning is a useful tool to automatically generate a model for a multivariate system where traditional modelling is too complex or time consuming.
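That point can be made concrete with the simplest possible case: instead of deriving a multivariate system's coefficients from first principles, fit them from observations. A minimal least-squares sketch (synthetic data, hypothetical coefficients):

```python
import numpy as np

rng = np.random.default_rng(42)

# A multivariate "system" we pretend is too messy to model by hand:
# three inputs combine (plus noise) to produce one output.
X = rng.normal(size=(500, 3))
hidden_coeffs = np.array([1.5, -0.7, 2.0])   # unknown to the modeller
y = X @ hidden_coeffs + 0.1 * rng.normal(size=500)

# "Learning" here is just least squares: recover a model of the system
# directly from observed behaviour.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(w, 2))   # close to the hidden coefficients
```

Generative models are the same idea scaled up enormously: a model fitted from data because writing it by hand is intractable.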
Generative models attempt to take that to a whole new level, but I don't believe it's sustainable, nor is it living up to the hype generated by breathless reporting from journalists who cannot distinguish advanced technology from magic.
It's not sustainable for a range of reasons. The most obvious is that the process degrades when it ingests content generated by the same process, the so-called model collapse problem.
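The degradation loop is easy to see in a toy simulation (a Gaussian stand-in, not actual LLM training): fit a model to a small sample, generate the next round's "training data" from the fit, and repeat. Estimation error compounds each generation and the distribution narrows:

```python
import numpy as np

rng = np.random.default_rng(7)

mu, sigma = 0.0, 1.0          # the real distribution we start from
n = 20                        # small sample per generation
stds = []
for generation in range(1000):
    data = rng.normal(mu, sigma, size=n)   # "content" produced this round
    mu, sigma = data.mean(), data.std()    # model refitted to that content
    stds.append(sigma)

# Diversity collapses toward a point: each generation inherits and
# amplifies the previous generation's sampling error.
print(f"std: {stds[0]:.3f} -> {stds[-1]:.3f}")
```

The tails vanish first, then the variance itself, which is the statistical shape of the problem with training on model-generated text.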
Furthermore, it doesn't learn: the weights don't change until a new version is released, so the model gains no new knowledge while it's being used.
And finally, it requires obscene amounts of energy to actually work, and with the exponential growth of model sizes this is only going to get worse.
Source: I'm an ICT professional with 40 years' experience.
Isn't your comment more of a perspective on the public perception of AI, the missteps surrounding its implementation, and its current role - rather than an examination of the potential role (practically speaking) of generative AI in a more general AI model? As is the thrust of the post, generative AI will necessarily be part of a larger AI, in part to make up for its weaknesses, in part to utilize its strengths.
That said, generative AI isn't nearly as endangered by generated training data as is commonly understood. Even if it were that bad, embodiment is rapidly changing the landscape. There are a ton of papers about how to use larger models to make smaller models more effective, using generative AI to improve generative AI along with efficiencies. Heck, novel efficiencies get developed almost as regularly as novel use cases. We're always learning how to do more with less.
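The "larger models making smaller models more effective" line of work is broadly knowledge distillation: train the small model on the big model's softened output distribution rather than on hard labels. A toy sketch (linear models and made-up data standing in for real networks; the temperature value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "teacher": pretend these logits come from a big model.
X = rng.normal(size=(200, 2))
teacher_w = np.array([[2.0, -2.0], [-1.0, 1.0]])
teacher_logits = X @ teacher_w

# Distillation: the student matches the teacher's *soft* distribution;
# temperature T > 1 exposes the teacher's relative confidences.
T = 3.0
soft_targets = softmax(teacher_logits, T)

student_w = np.zeros((2, 2))
lr = 0.5
for _ in range(1000):
    probs = softmax(X @ student_w, T)
    grad = X.T @ (probs - soft_targets) / len(X)   # cross-entropy gradient
    student_w -= lr * grad                         # (constant 1/T folded into lr)

agreement = np.mean(
    (X @ student_w).argmax(axis=1) == teacher_logits.argmax(axis=1)
)
print(f"student agrees with teacher on {agreement:.0%} of inputs")
```

The soft targets carry more signal per example than hard labels, which is one reason a small model trained this way can punch above its weight.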
AI has been around for a long time and has had moments of high interest and low interest. The latter has been given the term "AI Winter." It is possible that there will be another winter if there is a limitation that cannot be avoided for several years.
I can't imagine willingly going back to before Adobe added Generative Fill to Photoshop. Gen AI will certainly remain more than an academic curiosity, at least until it can be replaced with something better.
Oh man, it saves me so much time - but it is like working with a bipolar intern. Sometimes it pulls off these amazing fills that would've taken me hours to do by hand. And sometimes it has a fit trying to fill in a cloudy sky, and I just do it the old fashioned way. I'm pretty systematic about testing tools in general, and gen fill resists all attempts at building a reliable workflow. Ya really gotta switch up tactics to react as ya go, which can be fun when it's not irritating. Plus I find surreal hallucinations hilarious, so it makes up for some of its behavior issues with entertainment value.
I think many things once considered unthinkable have been achieved in the past, so one can hope we'll find a way to train LLM-style "AI" a lot faster and with far fewer resources, and thus get self-trained assistants that interact exactly as one expects from a personal helper.