Stability AI crashed and burned so fast it's not even funny. Their talent is abandoning ship, and they've even been caught scraping images from Midjourney, which suggests they don't have a proper dataset of their own.
The model should be capable of much better than this, but they spent a long time censoring it before release, and this is what we got. It straight up forgot most human anatomy.
There's a reason that artists in training often practice by drawing nudes, even if they don't intend for that to be the main subject of their art. If you don't know what's going on under the clothing, you're going to have a hard time drawing humans in general.
Honestly, I think it's models like these that output things that could be called art.
Whenever a model is actually good, it just creates pretty pictures that would have otherwise been painted by a human, whereas this actually creates something unique and novel. Just like real art almost always elicits some kind of emotion, so too do the products of models like these, and I think that's much more interesting than having another generic AI postcard.
Not that I'm happy to see how far SD has fallen, though.
It would be great if the model could produce this beautifully disfigured stuff when the user asked for it. But if it can't follow the user's prompts reasonably, it's pretty useless as a tool.
I can see an argument for artists choosing to use chaotic processes they can't really control.
Setting up a canvas, paints, and brushes in a particular arrangement in the woods, letting migratory animals and weather put their mark on the work, and then seeing what results. That could be art.
And if that can be art, then I guess chaotic, unpredictable AI models can output something that can be art, too.
I agree, bring on the weird; I don't need accurate, I want hallucinated novelty. This is like people who treat an LLM like a dictionary or search engine and complain about inaccuracy. They don't understand that this is to be expected of a synthesized answer.
Hallucination is an essential part of the value these things bring.
I was going to say this: their new architecture seems to be better than previous ones, they have more compute, and, I'm guessing, more data. The only explanation for this downgrade is that they tried to ban porn. I hadn't read anything online about this at the time; I'm only learning about it now.
? They're all bad at first for the average person who uses surface-level tools, but SD3 won't have the community to tune it because it's proprietary junk and irrelevant now.
There are a lot of fine-tunes of earlier Stable Diffusion models (SD1.5 and SDXL) that are better than this, and will continue to see refinement for some time yet to come. Those were released with more permissive licenses so they've seen a lot of community work built on them.
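If anyone wants to try one of those, a minimal sketch with Hugging Face's diffusers library looks something like this. The repo ids below are placeholders, not real checkpoints; swap in whatever community fine-tune or LoRA you actually use:

    # Minimal sketch: load a community SDXL fine-tune and attach a
    # community-trained LoRA on top. Repo ids are hypothetical placeholders.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "some-community/sdxl-finetune",   # placeholder fine-tune repo
        torch_dtype=torch.float16,
    )
    pipe.to("cuda")

    # LoRAs are the small adapter weights the community trains on top of
    # a base model; diffusers loads them directly onto the pipeline.
    pipe.load_lora_weights("some-community/style-lora")  # placeholder

    image = pipe(
        "portrait photo of a violinist, natural light",
        num_inference_steps=30,
    ).images[0]
    image.save("out.png")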
They realized that no matter how much they charged as a one-time fee, the people who bought the one-time-fee enterprise license would eventually cost them more in computational costs than the fee covered. So they switched it to 6,000 image generations, which wasn't enough for most of the community that made fixes and trained LoRAs, so none of the "cool" community stuff will work with SD3.
Basically, any time a user prompt homes in on a concept that isn't represented well in the AI model's training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for.
I'm so happy that the correct terminology is finally starting to take off in replacing 'hallucinate.'
The model does have a lot of advantages over SDXL with the right prompting, but it seems to fall apart on prompts with more complex anatomy. Hopefully the community can fix it up once we have working trainers.