Generates novel combinations and ideas by recombining elements from training data in unexpected ways (e.g., GANs in art generation) (Generative Adversarial Networks, 2014)
Creates dense networks of associations between elements in training data, enabling complex pattern completion and generation tasks (Attention Is All You Need, 2017)
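The "dense network of associations" in the cited paper is scaled dot-product attention: every query scores its association with every key, and the output blends values by those scores. A minimal numpy sketch (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Each query attends to every key, forming dense pairwise
    # associations; the output is a weighted blend of the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_k) association strengths
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

Pattern completion falls out of this: a partial input's queries pull in associated values, filling the gap with a weighted mixture of what the model has seen.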
MoE architectures can handle a wider range of inputs without ballooning model size, suggesting potential for resolving conflicts between different AI components by synthesizing expert opinions into a coherent whole (Optimizing Generative AI Networking, 2024).
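The capacity claim above comes from sparse routing: a gate sends each input to only a few experts, so parameter count grows while per-input compute does not. A toy sketch of that gating-and-blending step (all names and sizes here are hypothetical, not from the cited work):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class TinyMoE:
    """Toy mixture-of-experts layer: a learned gate routes each input
    to its top-k experts and blends their outputs, so adding experts
    grows capacity without every expert running on every input."""

    def __init__(self, n_experts=4, dim=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.gate_w = rng.normal(size=(dim, n_experts))
        self.experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
        self.top_k = top_k

    def __call__(self, x):
        logits = x @ self.gate_w                 # gate score per expert
        top = np.argsort(logits)[-self.top_k:]   # route to top-k experts only
        weights = softmax(logits[top])           # renormalize over the chosen few
        # Blend the chosen experts' outputs into one coherent vector --
        # the "synthesis of expert opinions" the note describes.
        return sum(w * (x @ self.experts[i]) for w, i in zip(weights, top))

moe = TinyMoE()
y = moe(np.ones(8))
print(y.shape)  # (8,)
```

Note the blend is a weighted average, which is why conflicting expert outputs resolve into a single vector rather than an error.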
The appearance of similarities between generative AI and the unconscious mind does not mean there is any actual equivalence to be had. We've got to stop using the same terminology to describe generative AI as we do humans, because they are not the same thing in the slightest. This only leads to further unintentional bias and an increased likelihood of seeing connections with the unconscious mind that don't actually exist.
Generally speaking, apparent similarities should be explored for functional overlap. Even in prompting, a user gets demonstrably better results (in some contexts) by acting as if they're talking to a person. We can explore the similarities between interpersonal interaction and AI interaction for overlap in effective modalities without claiming they're equivalent.
Who is "we"? The reality is that these grifters are constantly claiming equivalence. The term "AI" is itself a misleading equivalence. I would much rather not be part of that "we".