156 comments
  • Aight, here's the thing.

    All art is, at its base, about translating a person's inner concept into an external form. Sculpture, painting, poetry, dance, whatever.

    Every art form has a barrier to entry. If you want to be a dancer, some part of your body must be mobile, right? Even if it's just your eyeballs, dance by definition is about the human body moving.

    But, what if you can't move your body? Is that, and should that be, a barrier? Why can't a person get an exoskeleton device that they can then program to either dance for them, or to respond to their thoughts so they can dance via the gear? Well, in that case the technology isn't here yet, but pretend it was.

    Obviously, it wouldn't be the same as someone that's trained and dedicated to dancing, but is it lesser? It still fulfills the self expression via movement.

    That can be applied to damn near every form of art. I can't actually think of any that it doesn't apply to at least in part.

    There is a difference between a human sitting down (or lying or standing) to write a book and just telling a computer to generate a book. But that doesn't completely invalidate using a computer to generate fictional text. The key in that form is the degree of input and the effort involved. A writer asking an llm for a paragraph about a kid walking down the street when they're blocked isn't the same thing as telling it to write the entire book. There are degrees of use that are valid tools and that don't remove the human aspect of the art form.

    Take it to visual arts. A person can see things in their head that they may never develop the skill to see executed. They may not be physically capable of moving a brush on canvas, or pen on paper. A painter of incredible skill may be an utter dunce at sculpture, but still have vision and concepts worth being created.

    The use of a generative model as a tool is not inherently bad. It's no worse than setting up software to 3d print a sculpture.

    The problem comes in when the ai itself is made by, and operated for the benefit of corporate entities, and/or when attribution isn't built in. Attribution matters; a painting made by Monet is different from a painting that looks like Monet could have done it, but it was made by southsamurai. If I paint something that looks like a Monet, that's great! If I paint it and pretend it was made by Monet, that's bullshit.

    A "painting" by a piece of software that's indelibly attributed as generated that way isn't a big deal. It comes back to the eye of the beholder in the same way that digital art is when compared to "analog" art via paints and pencils. It only really matters when someone is bullshitting about how they achieved the final results.

    Is ai art less impressive? Hell yes, and it's pretty obvious that it isn't the same thing as someone honing their craft over years and decades. An image generated by a piece of software with only the input prompts being human generated is not the same as someone building the image with their hands via paint/touchpad/mouse/whatever.

    This is still different from the matter of using ai instead of paying a human to do the work, which is more complicated than people think it is.

    But, in terms of an individual having access to tools that allow them to get things inside their head out of their head where it can be seen, it has its place. It just needs to be very clear that that's the tool used.

    And yeah, I know this is c/fuckai, and I'm arguing that ai has its place as a tool of self expression, and that's not going to be universally satisfying here. But I maintain that the problem with ai art isn't in the fact that it's ai art, it's the framework behind that that makes it a threat to actual humans.

    In a world where artists can choose to create art for their own satisfaction without having to worry about eating and having a roof over their heads, ai art would be a lot less of a threat.

    • or to respond to their thoughts so they can dance via the gear

      But that's not what's happening with AI "art". That's what's being attempted with other technologies.

      I have seen a lot of disabled artists complain about being used in pro-AI arguments.

      • Yup. And it isn't even just artists. Disabled people that aren't creatives on a professional level object to it as well. It's an unpleasant form of ableism, trying to pander on the backs of those poor, sad disabled people.

        But it is all a spectrum of technologies, when applied properly.

        The properly part is the bottom panel of the posted comic, imo. The various generative models aren't actually about helping people, they aren't about expanding human creativity. They're about trying to cash in on a growing technology.

        That doesn't mean that ai can't be a good thing. It just means that it's a bad thing in the way it exists now, or at least in the form that's being shoved down the public's throat.

        Had the big ones not stolen the training data, were they not being used to leverage corporate goals over humans, they could be a very useful thing.

    • I have seen AI "art" that has moved me emotionally, and been inspiring, increased my immersion etc.

      The AI satire video about US workers in a sweatshop factory was politically important, and made me laugh.

      I once made a picture of a cat that was busy working tirelessly in the style of Rembrandt, and it was emotionally moving. I saw myself in that cat. 🥲

      My friends and I used AI for immersion when roleplaying.

      This supports your point of giving people the ability to artistically and quickly express ideas without being a skilled artist.

      I also believe that the ethical issues of ownership, and theft from authors and artists are huge issues.

      The environmental issue is not my biggest concern, considering how cheap and quick some genAI can be, so I don't automatically see all genAI as unethical on environmental grounds.

      Also, has image generation gotten worse? I feel that generated images are more "correct" now, but they have this bad look to them that they didn't previously have.

    • So much typing to say fuck all.

    • This "art" costs far more environmentally than any other. It uses mass amounts of electricity and water. It's nothing like, say, eating steak instead of salad, or driving a pickup truck to work. The "miracle" of AI has to come from somewhere, after all.

      • Sure, but so does everything. Pigments have to be mined or synthesized. Paper comes from cut-down trees. Brushes are either synthesized or from natural hairs. Ink is a vat of chemicals.

        Electricity by itself is just one resource. You could argue that by centralizing the resource like that, you can more easily reduce environmental impacts overall via more sustainable, less damaging energy production.

        Ai isn't a miracle, any more than air conditioning is, or refrigerators, or Christmas lights, or even just a stove. It's a tool.

        Again, I'm not trying to change anyone's mind here. It's just for the enjoyment of babbling about the subject, maybe having a nice conversation along the way. I have very definite opinions about the way generative models are being used and the impacts they're having, but a lot of the time that's not really interesting because pretty much everyone hates the slop factor.

        But that's, to me, like objecting to a shovel because someone is using it to dig under your house. Misuse of a thing isn't the same as the thing itself.

      • Running a local gen model for 500 images uses less electricity than playing Baldur's Gate 3 for 30 minutes.

        Edit: Correction; less than 5-10 minutes depending on settings.
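
        For what it's worth, the comparison is easy to sanity-check with back-of-envelope numbers. Everything below is an assumed ballpark (GPU draw, seconds per image, gaming-PC draw), not a measurement, so the ratio moves a lot with hardware and settings:

        ```python
        # Rough energy comparison; every figure is an assumed ballpark, not a measurement.
        GPU_DRAW_W = 300        # assumed GPU draw while generating images, in watts
        SECONDS_PER_IMAGE = 4   # assumed generation time per image on a local model
        IMAGES = 500

        GAMING_DRAW_W = 450     # assumed whole-PC draw while playing a demanding game
        GAMING_MINUTES = 30

        gen_wh = GPU_DRAW_W * SECONDS_PER_IMAGE * IMAGES / 3600   # ~167 Wh for the batch
        game_wh = GAMING_DRAW_W * GAMING_MINUTES * 60 / 3600      # ~225 Wh for the session

        print(f"{IMAGES} images:       ~{gen_wh:.0f} Wh")
        print(f"{GAMING_MINUTES} min of gaming: ~{game_wh:.0f} Wh")
        ```

        With those numbers the 30-minute version holds up; a faster setup (around a second per image) drops the batch to roughly 40 Wh, which is about where the corrected 5-10 minute figure lands.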

    • You haven't demonstrated what place image generators have in your example, though. There are blind and paralyzed painters that can create incredible works, because they practiced.

      Maybe these chatbots have some place (I think they're fine for creating memes and forum slop) but I think it's sad that potential artists are robbing themselves of the opportunity to build skill by outsourcing their artistic impulses to a chatbot.

      • It isn't about that, not really. It's about what art is and isn't, and how the tools are made more than how they're used.

        To reframe it, the problem with the generative models isn't really people using them, it's how they were trained in the first place, and how we handle differentiating between ai output and human output.

        All of the corporate ones stole the training data. And that includes works by living artists. It was, and is, entirely possible to train the software without shitting on people. It would be slower, but I don't see that as a negative; it would end up better in the long run because it would also be more selective.

        I also don't think that anyone will deprive themselves of any skill that they would have put the effort into to begin with. There's a big degree of laziness and lack of motivation in humans. People that just want the end product and not the journey there. I don't see a problem with that tbh.

        Anyone that would use ai as a way to skip over years of practice to get a specific image/piece out of their head into visibility isn't the sort to have done it to begin with. They'd give it a try, see that what they want isn't going to be realized in what they think is a reasonable time frame, and just quit.

        They never would pay someone else to do it either.

        The ones that would, they would anyway, though they might use ai while they're learning.

        Lemme give an anecdote that might be interesting, though not as some kind of proof or whatever. I'm not trying to convince anyone of anything, I'm just babbling my thoughts.

        Used to work for a guy. Quadriplegic, with limited arm/hand control. Details don't matter much for this, but it all depends on where the spinal injury is.

        He enjoyed working with wood. Had a lathe, saws, vises, all kinds of tools. He'd work for weeks on some things, getting it all just how he wanted. The same things I could turn out in a day; they weren't exactly complicated things.

        But he would still go buy something like a chair. Why? Because his guests needed a seat, and it would take him a month to make.

        Ai generation is pretty much the same use case. It fills gaps. Someone that's driven to create is going to create because the process is part of that. Without a drive, a need to create, most people will just buy the chair. Divorced from a capitalist system where artists have to lose to ai products rather than just create for the sake of creation, the ai problem isn't much of a problem. Remove that from the equation, and then artists can create only what drives their passion instead of having to worry about commissions and sales to pay the bills.

        Slap a permanent kind of marker on ai output, and you've got a swathe of the other issues knocked out. The cat is out of the bag. The knowledge exists. When that happens, you have to adapt society as much as you have to adapt the technology itself.
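
        For what it's worth, the "permanent marker" part can be sketched pretty simply. Everything here is hypothetical for illustration (the helper names, the idea of a signed sidecar manifest kept by whoever runs the generator); real provenance schemes like C2PA are far more involved:

        ```python
        # Hypothetical sketch of "attribution baked in": a signed manifest that travels
        # with a generated image. Not how any real provenance standard actually works.
        import hashlib
        import hmac
        import json

        SIGNING_KEY = b"generator-operator-secret"  # assumed: held by whoever runs the model


        def attach_provenance(image_bytes: bytes, model_name: str) -> dict:
            """Build a signed manifest recording what generated this image."""
            manifest = {
                "generator": model_name,
                "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
            }
            payload = json.dumps(manifest, sort_keys=True).encode()
            manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            return manifest


        def verify_provenance(image_bytes: bytes, manifest: dict) -> bool:
            """Check the image wasn't swapped out and the tag wasn't forged."""
            claimed = {k: v for k, v in manifest.items() if k != "signature"}
            payload = json.dumps(claimed, sort_keys=True).encode()
            expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
            return (hmac.compare_digest(expected, manifest["signature"])
                    and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())
        ```

        The obvious weakness is that a manifest can be stripped or never attached in the first place, which is why adapting society around the tech matters as much as adapting the tech itself.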
