Generates novel combinations and ideas by recombining elements from training data in unexpected ways (e.g., GANs in art generation) (Generative Adversarial Networks, 2014)
Creates dense networks of associations between elements in training data, enabling complex pattern completion and generation tasks (Attention Is All You Need, 2017)
Mixture-of-experts (MoE) architectures handle a wider range of inputs without ballooning per-input compute, suggesting potential for resolving conflicts between different AI components by synthesizing expert outputs into a coherent whole (Optimizing Generative AI Networking, 2024); a minimal gating sketch follows this list.
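Here is that sketch of the top-k gating idea behind MoE, in plain NumPy. The expert count, shapes, and routing rule are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n_experts, k = 8, 4, 2  # feature dim, expert count, experts consulted per input

# Each "expert" is a small linear map; the gate scores which experts see the input.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d, n_experts))

def moe_forward(x):
    scores = softmax(x @ gate_w)               # gate's confidence in each expert
    top = np.argsort(scores)[-k:]              # route to the top-k experts only
    weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
    # Blend the chosen experts' outputs: total capacity grows with n_experts,
    # but per-input compute stays at k expert evaluations.
    return sum(w * (x @ experts[i]) for i, w in zip(top, weights))

print(moe_forward(rng.normal(size=d)).shape)  # (8,)
```

The "synthesis" the item describes is just the weighted blend on the return line; each expert only fires when the gate selects it.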
The appearance of similarities between generative AI and the unconscious mind does not mean there is any actual equivalence to be had. We gotta stop using the same terminology to describe generative AI as we do humans, because they are not the same thing in the slightest. This only leads to further unintentional bias and an increased likelihood of seeing connections with the unconscious mind that don't actually exist.
Generally speaking, apparent similarities should be explored for functional overlap. Even in the world of prompting, a user gets demonstrably better results (in some contexts) by acting as if they're talking to a person. We can explore the similarities between interpersonal interaction and AI interaction for overlap in effective modalities without claiming they're equivalent.
Who is "we"? The reality is that these grifters are constantly claiming equivalence. The term "AI" is itself a misleading equivalence. I would much rather not be part of that "we".
I think that in the future generative AI will be seen like the Turbo Button, the Desktop Publishing Revolution, and the Information Superhighway of their day: ideas that over-promised, under-delivered, and faded into obscurity. I suspect that blockchain and cryptocurrencies will go the same way, for similar reasons, as outlined below.
Machine learning is a useful tool for automatically generating a model of a multivariate system where traditional modelling is too complex or time-consuming.
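As a toy instance of that point (the system and data below are invented for illustration), a sketch: learn a surrogate model directly from measurements of a multivariate system instead of deriving one from first principles:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical multivariate system we can measure but would rather not
# model analytically: y depends on three inputs, with noise.
X = rng.uniform(-1, 1, size=(200, 3))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, size=200)

# Fit a model straight from the measurements: least squares over candidate
# features. Larger ML models automate this feature-finding step.
features = np.column_stack([X, X[:, 1] * X[:, 2], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)
print(np.round(coef, 2))  # recovers roughly [2.0, 0.0, 0.0, -0.5, 0.0]
```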
Generative models attempt to take that to a whole new level, but I believe it is neither sustainable nor living up to the hype generated by breathless reporting from ignorant journalists who cannot distinguish advanced technology from magic.
It's not sustainable for a range of reasons. The most obvious is that the process degrades, eventually into uselessness, when it ingests content generated by the same process.
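A toy illustration of that failure mode, usually called model collapse in the literature; this Gaussian resampling loop is a deliberately simplified stand-in for a real training pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "model": a Gaussian fitted to its corpus; "generating" = sampling the fit.
n = 100
data = rng.normal(0.0, 1.0, size=n)               # generation 0: real data
sigma_start = data.std()

for _ in range(1000):
    mu_hat, sigma_hat = data.mean(), data.std()   # train on the current corpus
    data = rng.normal(mu_hat, sigma_hat, size=n)  # corpus is now model output

# Refitting on your own samples is a downward-biased random walk on the
# spread: each round can lose tail information but never recover it.
print(f"sigma: start {sigma_start:.2f} -> after 1000 rounds {data.std():.4f}")
```

In practice the open question is how much fresh human-made data re-enters the mix each round.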
Furthermore, it doesn't learn: the model doesn't change until a new version is released, so it gathers nothing new whilst it's being used.
And finally, it requires obscene amounts of energy to actually work, and with the exponential growth of models this is only going to get worse.
Source: I'm an ICT professional with 40 years experience
Isn't your comment more of a perspective on the public perception of AI, the missteps surrounding its implementation, and its current role, rather than an examination of the practical role of generative AI in a more general AI model? As the post argues, generative AI will necessarily be part of a larger AI, in part to make up for its weaknesses, in part to utilize its strengths.
That said, generative AI isn't nearly as endangered by generated training data as is commonly understood. Even if it were that bad, embodiment is rapidly changing the landscape. There are a ton of papers on using larger models to make smaller models more effective, i.e., using generative AI to improve generative AI, alongside steady efficiency gains. Heck, novel efficiencies get developed almost as regularly as novel use cases. We're always learning how to do more with less.
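One concrete family of those papers is knowledge distillation (Hinton et al., 2015): train a small "student" to match a large "teacher's" soft outputs. A minimal sketch; the teacher here is a stand-in linear map, not a real large model:

```python
import numpy as np

rng = np.random.default_rng(3)

def softmax(z, T=1.0):
    e = np.exp(z / T - (z / T).max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Stand-in "teacher": a bigger model producing class logits.
d, classes = 16, 5
teacher_w = rng.normal(size=(d, classes)) * 3.0
X = rng.normal(size=(2000, d))
T = 2.0                               # temperature softens the targets
targets = softmax(X @ teacher_w, T)   # soft labels carry more signal than argmax

# "Student": a smaller model (it only sees half the features), trained to
# match the teacher's soft outputs by cross-entropy gradient descent.
Xs = X[:, :8]
student_w = np.zeros((8, classes))
for _ in range(2000):
    probs = softmax(Xs @ student_w, T)
    grad = Xs.T @ (probs - targets) / len(X)  # CE gradient (1/T folded into step)
    student_w -= 0.5 * grad

agree = (softmax(Xs @ student_w).argmax(1) == targets.argmax(1)).mean()
print(f"student matches teacher on {agree:.0%} of inputs (chance: 20%)")
```

Distillation is also why the "trained only on its own output" worry above is too simple: here the small model trains on a bigger model's output on purpose, and gets better for it.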
Many unthinkable things were achieved in the past, so one can hope we will find a way to train LLM-style "AI" much faster and with far fewer resources, and thus get self-trained assistants that interact exactly as one expects from a personal helper.
Even if it doesn't work the same way, humans' anthropomorphizing pattern detection will latch onto it as "same function, so same thing." As we slowly build general AI, other "things that don't work that way" will be attached to it, until we have a full general AI whose brain works nothing like a human's but has pieces that work in similar fashions.
Sort of like how "60 watt" LED light bulbs don't use 60 watts: "They produce the same amount of light, so they must use the same amount of energy!"
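The analogy checks out numerically: "60 W equivalent" is a claim about light output (roughly 800 lumens), not power draw. The efficacy numbers below are typical ballpark figures, not measurements of any particular bulb:

```python
# "60 W equivalent" specifies light output, not power consumption.
lumens = 800                      # typical output of a 60 W incandescent
efficacy_lm_per_w = {"incandescent": 14, "LED": 90}  # rough lumens per watt

for kind, lm_per_w in efficacy_lm_per_w.items():
    print(f"{kind}: ~{lumens / lm_per_w:.0f} W for {lumens} lm")
# incandescent: ~57 W; LED: ~9 W -- same output, very different input.
```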