I’m a little scared of how well the new model works
Artstation ← more there, I’m a little too lazy to upload and link all the photos on Lemmy
Holy shit that looks realistic. I’m on /r/all (idk what the Lemmy term is for it) and thought this was a random picture from the pics subreddit. Fascinating how good this AI has become! Are the prompts complicated, or are they far simpler to create now?
That’s one of the most realistic ones I’ve seen so far, even the lettering on the riot police actually says “Police”.
Usually letters are just some weird gobbledygook
I noticed that too, and it surprised me, sometimes it writes something sensible, the future is now
afaik it’s specifically an intentional feature in SDXL to have legible text
“idk what the Lemmy term is for it”
It’s /?dataType=Post&listingType=All&page=1&sort=New
is this named after elon’s kid?
which AI did you use?
Stable Diffusion SDXL 0.9
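For anyone curious how you run that model yourself, here’s a minimal sketch using the Hugging Face diffusers library. The checkpoint ID and prompt are my own assumptions for illustration, not the OP’s actual settings or workflow:

```python
# Minimal sketch: generating an image with SDXL 0.9 via Hugging Face diffusers.
# The model ID and prompt below are assumptions, not the OP's actual settings.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",  # assumed checkpoint; the 1.0 release works the same way
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU with enough VRAM for SDXL

image = pipe(
    prompt="photo of riot police at a protest, photorealistic",  # example prompt only
    num_inference_steps=30,
).images[0]
image.save("output.png")
```

Most people use a front end like AUTOMATIC1111 or ComfyUI instead of raw Python, but the idea is the same.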
Let’s play “Spot the protagonist of a YA dystopia series”.
I’ve had trouble getting AI to understand the concept of a mid-rise or medium-density city. Like no matter how you describe the city, it’s always a densely populated wasteland, which worries me about how AI perceives what cities are and why we have them.
I don’t think we can consider AI as a monolith. A text-to-image AI surely has no conception at all of what a city is for. An LLM might have such a concept, but I wouldn’t be worried about what it thinks based on the limitations of a totally unrelated model.