The Perfect Response
Rejecting the inevitable is dumb. You don't have to like it but don't let that hold you back on ethical grounds. Acknowledge, inform, prepare.
You probably create AI slop and present it proudly to people.
AI should replace dumb monotonous shit, not creative arts.
I couldn't care less about AI art. I use AI in my work every day in dev. The coworkers who are not embracing it are falling behind.
Edit: I keep my AI use and discoveries private, nobody needs to know how long (or little) it took me.
I couldn’t care less about AI art.
That's what the OP is about, so...
Has AI made you unable to read?
The objections to AI image generators (training sets containing stolen data, etc.) all apply to LLMs that provide coding help. AI web crawlers comb through git repositories, compiling massive training sets of code to train LLMs.
Just because I don't have a personal interest in AI art doesn't mean I can't have opinions.
But your opinion is off topic.
It's all the same... Not sure why you'd have differing opinions between AI for code and AI for art, but please lmk, I'm curious.
Code and art are just different things.
Art is meant to be an expression of the self and a form of communication. It's therapeutic, it's liberating, it's healthy and good. We make art to make and keep us human. Chatbot art doesn't help us, and in fact it makes us worse - less human. You're depriving yourself of enrichment when you use a chatbot for art.
Code is mechanical and functional, not really artistic. I suppose you can make artistic code, but coders aren't doing that (maybe they should, maybe code should become art, but for now it isn't and I think that's a different conversation). They're just using tools to perform a task. It was always soulless, so nothing is lost.
Art is also functional. Specifically, paid opportunities for art perform some type of function. Not all art is museum-type contemplative work or the highest form of self-expression. Some art exists purely to serve as a banner on the side of a web page notifying people of a new feature. That isn't really enriching to create. It's a rather low form of self-expression, similar to code written to be functional.
I think you're also underestimating AI image gens as a form of self expression. Obviously it's more freeing to be able to draw or paint or create a thing yourself. But people often lack the prerequisite skills to create the thing they have in their mind. I often see career artists compare their work and style from years ago to their works today, showing off extreme improvement - meaning that even talented artists sometimes lack the skills necessary to create the "perfect" version of what they had in their mind.
With LLMs, you can get quite specific - not just "draw me in a Studio Ghibli style," but meticulously describing a scene and visual style - and it will render it. There is still creative satisfaction in that process, like how a movie director tells the actors how to accomplish a scene but doesn't actually play a role in the film themselves.
But people often lack the prerequisite skills to create the thing they have in their mind.
And they will always lack those skills if they never practice!
Furthermore, art isn't just functionally putting creations into the world, it's also the act of creation. There's a feeling of creation that comes from creating art, it's about the journey and not just the destination.
Having a chatbot do it for you isn't the same.
There is still creative satisfaction in that process, like how a movie director tells the actors how to accomplish a scene but doesn’t actually play a role in the film themselves.
Many actors do not want to be directors, many directors do not want to be actors. Those are just different things.
Even if you want to compare prompting LLMs with directing, that still means that people are deprived of acting. They're missing out on feeling and experiencing the act of artistic expression by outsourcing it to a chatbot.
And they will always lack those skills if they never practice!
That's not really relevant. AI lets you skip the prerequisite 2000 hours of mastery practice if all you need to do is create a specific render of something in a specific style.
I do have my own artistic endeavors. But not everything needs to be "earned" through countless evenings and thousands of dollars of materials, YouTube courses, studio time, whatever. The other day I made an event invitation in the style of stop motion animation. It was for a free event and the end result was really charming. I had fun prompt crafting to make it exactly like how I wanted.
Though I suppose I could have spent a few years making dolls as a hobby, set up a photo studio in my home, paid for a high-quality camera, and spent a few weeks fabricating custom dolls for my little event invite. Not sure that would have been worth experiencing the "act of creation", at least any more so than what I felt making it with the image gen.
"I am fine with stolen labor because it wasn't mine. My coworkers are falling behind because they have ethics and don't suck corporate cock, but instead understand the value in humanity and life itself."
Lmao relax dude. It's just software.
Then most likely you will start falling behind... perhaps in two years, as it won't be as noticeable quickly, but there will be an effect in the long term.
This is a myth pushed by the anti-ai crowd. I'm just as invested in my work as ever but I'm now far more efficient. In the professional world we have code reviews and unit tests to avoid mistakes, either from jr devs or hallucinating ai.
"Vibe coding" (which most people here seem to think is the only way) professionally is moronic for anything other than a quick proof of concept. It just doesn't work.
I know senior devs who fell behind just because they use too much google.
This is demonstrably much worse.
Lmao the brain drain is real. Learning too much is now a bad thing
I use GPT to prototype some Ansible code. I feel AI slop is just fine for that, and I can keep my brain freer of YAML and Ansible, which saves me from alcoholism and therapy later.
You could say fascism is inevitable. Just look at the elections in Europe or the situation in the USA. Does that mean we can't complain about it? Does that mean we can't tell people fascism is bad?
No, but you should definitely accept the reality, inform yourself, and prepare for what's to come.
AI isn't magic. It isn't inevitable.
Make it illegal and the funding will dry up and it will mostly die. At least, it wouldn't threaten the livelihood of millions of people after stealing their labor.
Am I promoting a ban? No. AI has its use cases. But is the current LLM and image-generation AI BS good? No. Should it be banned? Probably.
Illegal globally? Unless there's international cooperation, funding won't dry up - it will just move.
That is such a disingenuous argument. "Making murder illegal? People will just kill each other anyway, so why bother?"
This isn't even close to what I was arguing. Like any major technology, all economically competitive countries are investing in its development. There are simply too many important applications to count. It's a form of arms race. So the only way a country may see fit to ban its use in certain applications is if there are international agreements.
The concept that a snippet of code could be criminal is asinine. Hardly enforceable, never mind the 1st Amendment issues.
They said the same thing about cloning technology. Human clones all around by 2015, it's inevitable. Nuclear power is the tech of the future, worldwide adoption is inevitable. You'd be surprised by how many things declared "inevitable" never came to pass.
It's already here, dude. I'm using AI in my job (supplied by my employer) daily and it makes me more efficient. You're just grasping at straws to fit your preconceived ideas.
It's already here dude.
Every 3D TV fan said the same. VR enthusiasts have for two decades as well. Almost nothing, and most certainly no tech, is inevitable.
The fact that you think these are even comparable shows how little you know about AI. This is the problem, your bias prevents you from keeping up to date in a field that's moving fast af.
Sir, this is a Wendy's. Personally attacking me doesn't change the fact that AI is still not inevitable. The bubble is already deflating; the public has started to grow indifferent, even annoyed by it. Some places are already banning AI for a myriad of reasons, one of them being how insecure it is to feed sensitive data to a black box. I have used AI heavily and have read the papers. LLMs are cool tech, machine learning is cool tech. They are not the brain-rotted marketing that capitalists have been spewing like madmen. My workplace experimented with LLMs, and management decided to ban them: they are insecure, they are awfully expensive and resource-intensive, and they were making people less efficient at their work. If it works for you, cool, keep doing your thing. But that doesn't mean it works for everyone, and no tech is inevitable.
I'm also annoyed by how "in the face" it has been, but that's just how marketing teams have used it as the hype train took off. I sure do hope it wanes, because I'm just as sick of the "ASI" psychos. It's just a tool. A novel one, but a tool nonetheless.
What do you mean "black box"? If you mean [INSERT CLOUD LLM PROVIDER HERE] then yes. So don't feed sensitive data into it then. It shouldn't be in your codebase anyway.
Or run your own LLMs
Or run a proxy to sanitize the data locally on its way to a cloud provider
There are options, but it's really cutting edge so I don't blame most orgs for not having the appetite. The industry and surrounding markets need to mature still, but it's starting.
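The sanitizing-proxy option above can be sketched in a few lines. This is a minimal illustration, not a production design: the regex patterns and the `redact` helper are hypothetical examples, and a real deployment would run this inside an actual HTTP proxy and cover far more categories of sensitive data.

```python
import re

# Hypothetical redaction patterns; a real proxy would cover far more
# (API keys, internal hostnames, customer identifiers, etc.).
PATTERNS = [
    # email addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),
    # card-like runs of 13-16 digits
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD_NUMBER>"),
    # key=value style secrets
    (re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def redact(text: str) -> str:
    """Replace sensitive substrings before the prompt leaves the machine."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Contact alice@example.com, api_key=sk-12345 for access."
print(redact(prompt))
# → Contact <EMAIL>, api_key=<REDACTED> for access.
```

The redacted text is what gets forwarded to the cloud provider; the mapping from placeholder back to the original value can be kept locally if the response needs to be rehydrated.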
Models are getting smaller and more intelligent, capable of running on consumer CPUs in some cases. They aren't the genius chatbots the marketing dept wants to sell you. It won't mop your floors or take your kid to soccer practice, but applications can be built on top of them to produce impressive results. And we're still so, so early in this new tech. It exploded out of nowhere, but the climb has been slow since then, and AI companies are starting to shift to using the tool within new products instead of just dumping the tool into a chat.
I'm not saying jump in with both feet, but don't bury your head in the sand. So many people are reactionary against AI without bothering to be curious. I'm not saying it'll be existential, but it's not going away, and I'm going to make sure my family and I are prepared for it, which means keeping myself informed and keeping my skillset relevant.
We had a custom-made model running in a data center behind proxies and encrypted connections. It was atrocious: no one ever knew what it was going to do, it spewed hallucinations like crazy, it was awfully expensive, it didn't produce anything of use, it refused to answer things it was trained to do, and it randomly leaked sensitive data to the wrong users. It was not going to assist, much less replace, any of us, not even in the next decade. Instead of falling for the sunk cost fallacy like most big corpos, we had it shut down, told the vendor to erase the whole thing, wrote off the costs as R&D, and kept doing our thing. Due to the nature of our sector, we are the biggest players, and no competitor, no matter how advanced the AI they use, will ever get close to touching us. But then again, due to our sector, it doesn't matter. Turns out AI is a hindrance and not an asset to us; such is life.
Wait, you don't have to like it, but ethical reasons shouldn't stop you?