Visual artists fight back against AI companies for repurposing their work: Three visual artists are suing artificial intelligence image-generators to protect their copyrights and careers.
The brute forcing doesn't happen when you generate the art. It happens when you train the model.
You fiddle with the numbers until it produces only results that "look right". That doesn't make it not brute forcing.
Human inspiration and creativity, meanwhile, are intuitive processes. And we understand why 2+2 is four.
Writing a piece of code that takes two values and sums them does not mean the code comprehends math.
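The point can be made concrete with a minimal sketch (the function name is my own, purely illustrative):

```python
# A function that returns the sum of two values. It produces correct
# answers without any internal notion of what numbers or addition are.
def add(a, b):
    return a + b

print(add(2, 2))  # prints 4
```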
In the same way, training a model to generate sound or visuals does not mean it understands the human experience.
As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.
> The brute forcing doesn't happen when you generate the art. It happens when you train the model.
So it's the same as a human: they also generate art until they get something that "looks right" during training. How is it different when an AI does it?
But you'll have to explain where this brute forcing happens. What are the inputs and outputs of the process? Because the NN doesn't generate all possible outputs until the correct one is found, which is what brute forcing is. Maybe you could argue that GANs are kind of doing this, but it's still very much a directed process, which is entirely different from real brute forcing.
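The distinction can be sketched on a toy problem (my own illustration, not either commenter's code): brute force evaluates every candidate until the best is found, while gradient-based training takes directed steps down a loss surface, never enumerating the space.

```python
# Toy 1-D problem: minimize f(w) = (w - 3)^2.
def f(w):
    return (w - 3.0) ** 2

# Brute force: evaluate every candidate in a grid, keep the best.
def brute_force(candidates):
    return min(candidates, key=f)

# Gradient descent: each step is directed by the derivative f'(w) = 2(w - 3);
# no exhaustive enumeration of candidate values ever happens.
def gradient_descent(w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        w -= lr * 2.0 * (w - 3.0)
    return w

w_bf = brute_force([i * 0.5 for i in range(20)])  # tries all 20 candidates
w_gd = gradient_descent()                          # converges toward 3.0
```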
> Human inspiration and creativity, meanwhile, are intuitive processes. And we understand why 2+2 is four.
You're using more words without defining them.
> Writing a piece of code that takes two values and sums them does not mean the code comprehends math.
But we're not writing code to generate art. We're writing code to train a model to generate art. As I've already mentioned, NNs can provably build an accurate model of whatever you train them on; how is this not a form of comprehension?
> In the same way, training a model to generate sound or visuals does not mean it understands the human experience.
Please prove you need to understand the human experience to be able to generate meaningful art.
> As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.
Of course they can, depending on your prompt and temperature.
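What temperature does can be sketched with a toy next-token distribution (hypothetical scores, my own illustration): scaling logits by 1/T before the softmax flattens the distribution as T grows, so high-temperature sampling spreads over more options while low temperature concentrates on the top choice.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    # Scale logits by 1/T, then softmax into a probability distribution.
    # Small T approaches argmax; large T approaches uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index from the categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # toy scores for three "styles"
low_t = {sample_with_temperature(logits, 0.1, rng) for _ in range(100)}
high_t = {sample_with_temperature(logits, 5.0, rng) for _ in range(100)}
# low_t tends to concentrate on the top-scored option;
# high_t tends to cover several options.
```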
This is what it always comes down to: you have this fuzzy feeling that AI art is not real art, but the deeper you dig, the harder it gets to draw a real distinction. This is because your arguments aren't rooted in actual definitions, so instead of clearly explaining the difference between A and B, you handwave it away due to C, which you also don't explain.
I once held positions similar to yours, but after analysing the topic much more deeply I arrived at my current positions. I can clearly answer all the questions I posed to you. You should consider whether your not being able to means anything regarding your own position.
The issue isn't that you're being concise; it's that you're throwing around words that don't have a clear definition and expecting your definitions to be broadly shared. You keep referring to understanding, and yet objective evidence of understanding is only met with "but it's not creative".