
How do you actually see chatgpt and its consequences ending?

Because every day I’m seeing people once considered experts reduced to the same level as an untrained person who knows how to type. And I’m really hoping LLMs reach a tipping point, because otherwise you’re just adding another nauseating element to the rat race.

I’m no stranger to the “adapt or get left behind” comments, and it’s very funny when people tell me this but can no longer tell me how a for loop works or what constitutes an outer join.

lol hate to say it but this shithole needs to be humbled by an act of war that brings down the power grid. Capitalist realism is quickly turning into economic and technological realism (if it hasn’t already).

51 comments
  • i see it going the way of bitcoin. that is, not going anywhere and in fact increasing to absurd levels of value, much to my chagrin.

  • I think there’s a “time is a flat circle” aspect to this. In the early days, LLMs would spit out useless but entertaining content. Like that early AI art: it was this chaotic jumbled mess with a very surrealistic tether to whatever the prompt was. Neither useful as a product nor decent art, but entertaining in its absurdity. Then it got to the point where, as long as people put at least a little effort into massaging the prompts, it’ll spit out something believable at first glance. You’ve got to break out the photo tools to tell, and most people aren’t going to zoom in to see if there’s a sudden jump between JPEG and PNG edges.

    But the technical finesse has brought a plasticness. My wife’s boss, the absolute walking definition of failing-upwards McKinsey brain, uses ChatGPT for everything. She showed me one of his emails; it looks professionally written and all. But good Christ it’s painful to read. It doesn’t look like something a human being would earnestly write, more like a satire of corporate speak that would be considered too obvious to work as satire.

    And that’s where the circle comes around: all this AI text, art, movies, etc. has a feel that makes you aware it’s AI-generated. That breaks the spell; it makes you aware that it’s a plastic facsimile of creative output, and that devalues it. Which will in turn create value for creative output that feels distinctly not-AI.

    I’m not sure if there’s an exact 1-to-1 when it comes to “programmers” who don’t really know how to code and just know how to ask ChatGPT for a function. But I’d have to imagine they’re fine while things are running smoothly; the moment things get FUBAR’d and ChatGPT can’t spit out a solution, or the solution is so convoluted and messy that it creates more problems than it fixes, that will separate the chaff from those who actually know how to wrangle code.

  • I foresee some horrifying disaster because a subcontractor of a subcontractor of a subcontractor used the Google analytical nonsense generator to write code for a nuke plant or an air traffic control system.

  • That's an intriguing and open-ended question! If we’re thinking about how ChatGPT might evolve and its potential consequences, a few scenarios seem likely, each depending on a combination of technological, ethical, and societal factors. Here’s how I imagine the trajectory might unfold:

    1. Enhanced Communication and Assistance

    One optimistic outcome is that AI like ChatGPT continues to evolve into a powerful tool for communication, learning, and personal assistance. It could become seamlessly integrated into many aspects of life: education, work, healthcare, and even entertainment. This would ideally be accompanied by improved transparency, trustworthiness, and adaptability. In this scenario:

    - AI as a Personalized Assistant: People could rely on AI as a tutor, therapist, life coach, or business advisor. It might help people make more informed decisions, be more productive, and improve mental well-being.
    - Efficient Information Processing: It could also accelerate problem-solving in scientific research, creativity, and data analysis, working alongside humans to solve complex problems in fields like medicine, climate change, and space exploration.

    2. Coexistence with Human Jobs

    While there is the potential for AI to automate many tasks traditionally done by humans, there’s also the possibility that it will work in tandem with human workers, leading to a more symbiotic relationship. Instead of completely replacing jobs, AI could assist with repetitive tasks, leaving humans to focus on more creative, strategic, or empathetic roles.

    - New Job Creation: Fields like AI ethics, AI training, and AI-human collaboration would likely become more prominent. Additionally, sectors related to AI development (coding, cybersecurity, etc.) would keep growing.
    - Enhanced Collaboration: Rather than fully displacing workers, AI might enhance collaboration, allowing people to focus on tasks where human insight and intuition are irreplaceable, while automating tedious tasks.

    3. Ethical Challenges and Misuse

    On the flip side, as AI becomes more capable, ethical concerns and risks could become more pressing.

    - Bias and Fairness: AI systems, including ChatGPT, can inadvertently perpetuate bias if not carefully managed. This could lead to discrimination in decision-making processes, such as hiring or legal matters.
    - Misinformation: While AI is great at generating human-like text, it can also be used to spread misinformation or manipulate public opinion. This is particularly dangerous in contexts like political elections or public health crises.
    - Surveillance and Privacy: Widespread AI adoption could be used for surveillance, potentially infringing on privacy rights. Governments or corporations might exploit AI to track citizens or users in ways that might be intrusive or oppressive.

    4. AI Autonomy and Control

    Looking even further down the road, the evolution of AI may lead to the development of systems with higher levels of autonomy, intelligence, and self-direction. This could open up possibilities but also serious risks.

    - Superintelligent AI: Some predict that AI might eventually reach a point where it surpasses human intelligence (referred to as Artificial General Intelligence, or AGI). In such a scenario, questions around control, ethics, and alignment with human values would become critical. Would a superintelligent AI work in humanity’s best interest, or would it pursue its own goals, potentially without regard for human welfare?
    - Regulation and Control: The push for regulating AI would grow more urgent. We may see global discussions around setting up international AI oversight bodies or frameworks to ensure AI development remains ethical and aligned with human values.

    5. Human-AI Integration

    As AI systems become more advanced, there could be a shift towards direct integration between human biology and AI, especially with advancements in brain-computer interfaces (BCIs).

    - Cognitive Enhancement: Imagine using AI directly to enhance human memory, creativity, or even decision-making. The boundaries between human and machine could blur, raising philosophical and ethical questions about what it means to be human.
    - AI Augmentation: Rather than replacing humans, AI could work with individuals to augment their abilities, creating a future where human-AI hybrids become commonplace. For example, AI could provide real-time data visualization for professionals, or even assist in surgical operations with pinpoint precision.

    6. Social and Psychological Impact

    The increasing reliance on AI could have profound social and psychological effects. While it could bring people closer together through improved communication tools, it could also foster isolation if people lean too heavily on AI for companionship and interaction.

    - Disconnection: The more we rely on AI for emotional or intellectual support, the more we may risk losing genuine human connections. There’s a balance to be struck between using AI for productivity and maintaining authentic relationships.
    - Identity and Consciousness: As AI becomes more capable of mimicking human interaction, people may start to question the nature of consciousness and what it means to be "alive." This could have profound philosophical, psychological, and even religious implications.

    Conclusion: The Path Forward

    The future of ChatGPT and similar AI models is not set in stone. It could lead to a utopia of human-AI collaboration and progress, or it could result in dystopian outcomes, depending on how we manage its development, integration, and ethical governance. To guide it in a positive direction, there will need to be careful consideration of:

    - Ethical standards and transparency in AI training.
    - Regulations and oversight to prevent misuse.
    - Public awareness and education about AI’s capabilities and limits.
    - Human-centric values in AI design to ensure the technology remains aligned with human welfare.

    In the end, it’s about balance: embracing the immense potential of AI while remaining mindful of the challenges it brings.

  • Chat-GPT will eventually run out of funding when it becomes clear it's not actually capable of doing anything useful with enough accuracy to not require paying somebody to follow behind it and clean up its mess. And if nobody realizes that, they'll realize it's not making any fucking money and pull the plug for pure want of preserving their capital.

    All of this AI horseshit is so much hot air. They made a chat program that can Google shit and string words together easier. And a bunch of fucking morons are treating it like the ship computer from Star Trek. The fallout of this will be another economic bubble burst. Nothing more exotic than that.

  • chatgpt and its relatives are fundamentally limited by the fact that linear increases in performance require exponentially increasing amounts of data and processing power, whereas it’s getting implicitly marketed as if it were the other way around.

    so the inevitability of diminishing returns on investment means it will be a race to the bottom to deploy cheap, shitty versions of this stuff anywhere it might be "good enough", and over time the ecosystem will evolve such that "good enough" is a deteriorating standard even as it expands into more and more niches. everything will get worse in ways we haven't even imagined yet.
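    The diminishing-returns point can be sketched with a toy power law, loosely in the spirit of published scaling-law fits (the constants here are purely illustrative, not real measurements): if loss falls only as a small negative power of compute, every equal step of improvement costs multiplicatively more.

    ```python
    # Toy scaling law: loss(C) = a * C**(-alpha). With a small alpha,
    # each equal step down in loss requires multiplicatively more compute.
    a, alpha = 10.0, 0.05  # illustrative constants, not fitted values

    def compute_needed(loss):
        # Invert loss = a * C**(-alpha)  =>  C = (a / loss)**(1 / alpha)
        return (a / loss) ** (1 / alpha)

    # Cost of each successive 0.5 drop in loss, relative to the last one:
    steps = [4.0, 3.5, 3.0, 2.5]
    for better, worse in zip(steps[1:], steps):
        ratio = compute_needed(better) / compute_needed(worse)
        print(f"loss {worse} -> {better}: {ratio:.0f}x more compute")
    ```

    Each step comes out more expensive than the one before it, which is the "linear gains, exponential costs" shape described above.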

    • “Good enough” is how every company I’ve worked for since college has structured their day to day operations agony-shivering

      It’s a stimulant-based world, people are just trying to speed everything up (to grow more of course) while sacrificing all aspects of their personal lives without a second thought.

      Really curious if we are doing our damned best to emulate the controlled substances and drugs that give people heart attacks

  • Ch*t gee pee tea and the like are just in a marketing bubble. They've got all this investor money, but the research and development can't improve as quickly as it needs to, so all that money just gets dumped into marketing. But it literally cannot go on forever, and industries will suffer from integrating it.

    You won't even remember it after the collapse coming in '28

  • Most LLMs/image generators are overhyped, but imo there is enough of a use case and a value case (if you want to cut down on paying people) that they will significantly erode the power of labor once more companies understand how to actually use them beyond typing prompts into websites

    Voice acting will become even more dire of an industry than it already is with voice to voice AI filters making it so a studio only needs to pay 1-2 voice actors to voice a cast of dozens

    Art teams will be halved or decimated once more legal teams give the go-ahead for using image-generator finetunes to make specific-use generators that pump out their exact desired style with only minor manual touch-ups needed. The output would be borderline indistinguishable from their usual art, but now they get to pay far fewer people for the same work.

    Artist contractors will similarly be racing to the bottom against the bottom threshold of 'literally free' (they already are atm really)

    Data entry jobs will be eroded.

    Translation work will be heavily eroded.

    Lower level modeling and filming jobs will be heavily eroded.

    Higher-level programming work will still be in high demand, but entry-level stuff will be undervalued and hard to find.

    All of that pushes more people into other fields looking for work, and thus erodes the labor power of those as well. Things get shittier and we get a lot of ugly things to look at.

  • The same way a house ends when it’s being battered by a hurricane and then a window finally breaks and lets the wind inside

  • During the Bush years, the US government had finally gotten a proper handle on controlling broadcast news, and the new 24-hour news cycle started bombarding people with propaganda. Meanwhile, the internet was a niche thing that mostly only hobbyists and teens understood, which gave people both uncontrolled access to information and a better way of finding it than libraries. Anybody who knew how to Google - which was uncommon and new - could answer as much as the most dedicated trivia nerd. Normies were completely unaware of the internet, but the internet people won. Companies made by internet people took over the world, and governments had to scramble to get control over them again.

    Now governments have figured out how to control social media, and are using it to bombard people with propaganda all day. Meanwhile, the internet is slowly losing value as an information source, because it's flooded with bots spewing nonsense. Some people - probably teens and hobbyists again - will learn how to navigate the botless, propaganda-less portions of the internet, so they can still have better access to information than everybody else. Instead of a moment where information access increased and those who followed it came out ahead, we'll have one where information access decreases, and those who avoid it come out ahead. But the results will still be the same - normies will be completely unaware, and the people who figured out how to avoid reading AI slop will win. Hopefully they won't just use that to make profitable companies and get recuperated by the government again.

  • People will just get bored and move on while some dudes you never hear about have fun playing with it as a cool toy. Maybe we get a few Photoshop plug-ins that no one really thinks about either. Like how no one cares that Photoshop put all the matte painters out of work.

  • Porky will squeal in delight about how many fewer employees he needs, that’s for sure.

    I’m sure he’ll smugly advise everyone to start their own businesses. That’s the new narrative: jobs are going to be a thing of the past, so everyone needs to become a capitalist.

  • ... it’s very funny when people tell me this but can no longer tell me how a for loop works or what constitutes an outer join.

    the funnier part is that LLMs are best at exactly these sorts of things, yet we continue to insist on people doing them in interviews for some reason.
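    For anyone who needs the refresher, both interview staples fit in a few lines; a minimal sketch in Python (the table names and data are made up for illustration):

    ```python
    import sqlite3

    # A for loop: iterate over a sequence, updating state on each pass.
    total = 0
    for n in range(1, 5):  # n takes the values 1, 2, 3, 4
        total += n * n
    print(total)  # 1 + 4 + 9 + 16 = 30

    # An outer join: keep every row from one side even when the other
    # side has no match; the unmatched columns come back as NULL.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER);
        CREATE TABLE depts (id INTEGER, dept TEXT);
        INSERT INTO employees VALUES (1, 'Ana', 10), (2, 'Bo', NULL);
        INSERT INTO depts VALUES (10, 'Ops');
    """)
    rows = db.execute("""
        SELECT e.name, d.dept
        FROM employees e LEFT OUTER JOIN depts d ON e.dept_id = d.id
    """).fetchall()
    print(rows)  # Bo has no department, so his dept comes back as None
    ```

    'Bo' survives the join with a NULL department; an inner join would have silently dropped him.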

  • nuke

  • LLMs and the AI hype are just an imperialist cover for the very real environmental war being waged on the Global South every day. The job of LLMs is not to take jobs or threaten the positions of the imperial-core proletariat (that's simply a distraction created by imperial-core mass media, even in leftist circles); it is to stimulate artificial demand for production to combat the falling rate of profit. Climate change is an imperialist-caused calamity that destroys Global South nations, and it's no coincidence we see the worst ecological disasters in the most victimized parts of the planet; LLMs' high compute cost for comparatively low return is perfect for promoting it. ChatGPT is just one tool in the belt of imperialist powers to control the narrative around technology and to ensure continued extraction at even greater scale.

    This isn't unique to LLMs; the crypto craze served the same purpose, as computing power (GPUs) was scalped to all hell, creating even more artificial demand (thus continuing the supply-chain cycle of unequal exchange). Proprietary software is one of the root maintainers of the imperialist hold over technology. Apple and Microsoft (not the only companies, but speaking generally) are deeply entwined with US hegemony (and the MIC; these guys arm Zionists), and they maintain their positions through a distorted sense of importance, as all of their discoveries and achievements are locked away or undocumented, while they employ propaganda departments to sell the USAmerican luxury brand of technology.

    All the discourse about whether AI is "useful" or not is manufacturing consent for plundering even more resources to be hoarded by western powers. Of course AI is useful; most computer programs were made to serve a purpose in some capacity, and it's not worth discussing (especially considering the companies themselves publish their own benchmarks in highly choreographed publicity stunts), but the push for AI serves a deeper purpose than what MSM would ever present.

    Anyone who discusses AI in the context of "wow, the youth's brains are being melted, you'd better safeguard your job" is missing the bigger picture (especially in the Global North). All your silicon is bloodied with the hardship and sacrifice of the Global Majority and neo-colonial extraction, and we should maintain that as the main leftist position on AI, which can and will go further than crypto ever did.

    Just as a side note, using Linux and free software is praxis actually, it's not because people are contrarian.

  • I don't think it will have any particularly interesting or consequential impact, really. It isn't having much impact now beyond giving marketers some new favourite words, and I don't foresee that changing until consumers get tired of it and it stops being profitable to say your product contains some vacuous "AI"; then we all forget about it, is my prediction. LLMs would likely persist as just a fun toy some people like to use, occasionally used as a search engine with better comprehension of human grammar, and occasionally used by students to cheat on assignments, but not used for anything super serious or plastered over every ad for a tech product.

  • it’s very funny when people tell me this but can no longer tell me how a for loop works

    Wait...I'm having trouble even comprehending this. Do you mean like not understanding what's happening at the instruction level with conditional branch instructions? Or do you mean what some generic for loop does in your codebase?
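    For the instruction-level reading: here's one loop desugared step by step in Python, down to the test-and-jump shape a compiler would emit (a hypothetical example, not from anyone's codebase):

    ```python
    # The same iteration written three ways, from high-level sugar down
    # to the explicit condition-test-and-jump structure.
    total_for = 0
    for i in range(5):
        total_for += i

    # What the for statement actually does: pull values from an iterator
    # until it is exhausted.
    total_iter = 0
    it = iter(range(5))
    while True:
        try:
            i = next(it)
        except StopIteration:
            break
        total_iter += i

    # And the shape at the machine level: test a condition, branch out
    # of the loop when it fails, otherwise run the body and jump back.
    total_jump = 0
    i = 0
    while i < 5:           # conditional branch: exit when false
        total_jump += i    # loop body
        i += 1             # step, then jump back to the test

    print(total_for, total_iter, total_jump)  # all three sum 0..4 to 10
    ```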
