Definitely Not Stealing (Art by DragonsofWales)
Source (Via Xcancel)
Followup tweet:
The reason AI's results look so convincing is because it's a plagiarism machine (and sometimes not a very good one). It cannot operate without our work which has been stolen and used without compensation because the courts have decided this is a "fair use". Fair to whom?
I saw a great video on YouTube illustrating this by attempting to convince it to make a wine glass full to the brim (which it simply can't). It goes into the deeper philosophy of ideas to explain why AI normally appears able to create "completely new" concepts when really it's just mashing two existing concepts together, yet cannot correctly combine the separate ideas of "completely full" or "almost empty" with a "wine glass". Because nobody ever fills a wine glass to the brim, and we use a different definition of "full" for a wine glass anyway, there is no useful source material for it, and the AI has no idea how to do it either, while of course always being convinced it has correctly understood what a completely full wine glass looks like. It doesn't have novel ideas, it doesn't have an imagination, it is not intelligence. It is just plagiarism. It is using its nearly limitless database of the work of thousands of years of human creativity to appear as if it too is creative. It's not.
One of the top comments on the video says that as of 2 months ago it was able to produce a full glass of wine. The video is 4 months old.
It is old, and yes AIs are continuously getting updated, but the principles still stand. Of course images of completely full wine glasses can and do exist (and if they didn't it would only take a few moments to create some), and upon realizing this limitation, people (not the AIs themselves) are going to learn from this mistake and train the AIs better, give the existing or created images the necessary weighting to make sure that particular flaw is fixed and make their AI seem even more intelligent.
But it doesn't address the fundamental philosophical limitation and it doesn't make them intelligent. It only fixes wine glasses in particular, and of course in the process they also fix the millions of other things they're constantly training these AIs to do. What it does NOT fix is the literally infinite number of other ideas that an AI simply can't conceive. I'm sure we'll add them as quickly as we come across them, but that's still human ingenuity at work, not AI.
The fucking gull to tag that AI slop with #photography is absolutely appalling.
The gull, you say?
But but but, they said AI is inherently transformative. Did corporations and tech bros... lie to us? No, it's the artists who are wrong.
it still is, which is why shit like this is so crazy because it happens a lot.
the model keeps basically nothing of the original image, less than a single byte per work it ingests (the LAION-5B dataset is almost 6 billion images, most models have more than double that input data, the models are 5-6 GB, and each image is 1024x1024 pixels), so the image can't be in there. and yet most of these models can manage to stitch together their input data almost perfectly. it's like the model splits images into their constituent parts and builds them back the same.
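A rough back-of-the-envelope check of that "less than a byte per image" figure; the numbers below are just the approximate ones quoted above, not exact dataset or checkpoint sizes:

```python
# Back-of-the-envelope arithmetic for model capacity per training image.
# Figures are the rough ones quoted in the comment above and are assumptions,
# not exact dataset or checkpoint sizes.

num_images = 5.85e9            # LAION-5B image-text pairs (approx.)
model_bytes = 5.5e9            # ~5-6 GB checkpoint, midpoint
image_bytes = 1024 * 1024 * 3  # one uncompressed 1024x1024 RGB image

bytes_per_image = model_bytes / num_images
print(f"capacity per training image: {bytes_per_image:.2f} bytes "
      f"({bytes_per_image * 8:.1f} bits)")
print(f"ratio vs. raw pixels: {image_bytes / bytes_per_image:,.0f}x smaller")
```

With these numbers it works out to roughly a byte per image (and half that if the training set is doubled), millions of times smaller than the raw pixels, which is the point: the weights cannot literally contain the images.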
from a technical standpoint it's amazingly unlikely. from a human perspective it's scummy. from a legal perspective it's 100% plagiarism.
Extremely lossy compression with unspecified codeword encryption.
from a legal perspective it's 100% plagiarism.
Plagiarism is not a legal concept. Copyright infringement and plagiarism are two circles of a Venn diagram with substantial overlap, but very noteworthy non-overlapping sections too.
Non-AI examples, to avoid any controversy: you can plagiarise by taking something from the public domain (e.g. the writing of Shakespeare or the music of Beethoven) and trying to pass it off as your own. This would not be copyright infringement, because the works belong to the public domain and are thus out of copyright, but it is still plagiarism, because plagiarism is a matter of academic integrity, not law. And you can commit copyright infringement without plagiarising. That's what happens every time someone uploads a full movie to YouTube and says "no copyright intended".
Seems more like the original picture has been used as an image prompt for an AI image generator, to just make it look a bit different.
Though I may be misunderstanding and stating the obvious.
Yeah not fully convinced about what is going on. The fact that the blurred background is nearly identical is not something I've seen in other examples of over-tuning.
I don't even believe the sentiment is wrong. Generative algorithms have no conception of reality, and the companies behind them are stealing work. However, there seems to be a lot of misconception about what aspects are being stolen, because the reality is that the concepts being stolen are far more abstract than lines and pixels, and harder for us to easily parse.
That was my guess. It looks like they ran the original through an image-to-image operation in an AI generator to produce a "new" image.
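For anyone unfamiliar with the workflow being described, here is a minimal sketch of an image-to-image run using the Hugging Face diffusers library. The model name, file paths, prompt, and strength value are illustrative assumptions, not anything taken from the post; the point is only that a low strength setting keeps most of the source image.

```python
# Minimal img2img sketch with Hugging Face diffusers (illustrative only).
# A low `strength` keeps most of the source image, which is why the
# composition and blurred background can come out nearly identical.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint; any SD model works
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("original_photo.png").convert("RGB")  # hypothetical path

result = pipe(
    prompt="a dragon sculpture, photography",  # hypothetical prompt
    image=init_image,
    strength=0.3,          # low strength = small deviation from the source image
    guidance_scale=7.5,
).images[0]

result.save("output.png")
```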
“You’ve got a lawsuit on your hands”
...i have all of andy's dragon books and they're fantastic: highly recommended!..