- cross-posted to:
- opensource@lemmy.ml
It’s so awesome that I can let my kid paint in Krita and enhance the picture with AI live. She wanted an AI picture editor on her phone, but I didn’t like the privacy policy. Krita AI Diffusion came to the rescue.
After testing it out myself, I showed her Krita, the most important tools, and how to use layers, and before I could say anything she was off painting a nice landscape. When she was finished, I actually got to enable the AI plugin and show her the ropes around that. And after enabling live painting she went ham and added a phoenix and a giant hand.
Hardest thing about it was that she had to describe what she wanted in English. But she’s already learning that in school so it shouldn’t give her too much trouble in the long run.
Anyways, FOSS rules!
[This is the caveat for me for now](https://github.com/Acly/krita-ai-diffusion?tab=readme-ov-file#requirements).
> To run locally, a powerful graphics card with at least 6 GB VRAM is recommended. Otherwise generating images will take very long!
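If you’re not sure whether your card clears that bar, here’s a minimal sketch (assuming a PyTorch install; ROCm builds expose AMD cards through the same `torch.cuda` API) that just reports the detected GPU and its VRAM:

```python
import torch

# Quick check against the plugin's "at least 6 GB VRAM" recommendation.
# ROCm builds of PyTorch report AMD GPUs through torch.cuda as well.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < 6:
        print("Below the recommended 6 GB -- generation will likely be slow.")
else:
    print("No GPU found -- falling back to CPU is possible but painfully slow.")
```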
I’ve got decent RAM on an i9, but my graphics card, which is what matters here, isn’t up to par.
Yeah. Took me several days to set it up with the AMD card on my gaming rig. I also tried to run it on a Steam Deck but I gave that up. As it says, using the CPU is possible but painfully slow.
I’m loving the use case! It’s a big assumption, but I wonder if this technology will actually help children get interested in art?
When I look at what she does before even hitting the AI button I think her artistic interests are not in trouble. But she always liked drawing and painting anyways.
Looks amazing!! Does anyone know if there is something similar for GIMP?
I didn’t realize you could run something like this on your phone
Nah, that’s running on a laptop with the actual AI stuff being rendered on my gaming PC. I think that’s basically what all the AI tools on phones do as well. It’s just some company’s gaming PC.
Not quite a “gaming PC”: if they’re using something like Nvidia’s Hopper GPUs (or relying on another service that does), those aren’t designed for gaming and sit somewhere in the $10k–$100k-ish price range. Buuut if you ignore the finer details, fundamentally it’s basically like that. They’d send the image to their “very expensive gaming PC server”, where the inferencing would be done.
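The round trip is conceptually simple either way. This is just a sketch with a made-up `/generate` endpoint (the actual plugin talks to a ComfyUI server, which has its own API): the client sends the rough image plus a prompt to whatever machine has the GPU and gets the generated image back.

```python
import base64
import requests

# Hypothetical server address and endpoint, purely to illustrate the idea of
# "laptop/phone sends the work to the machine with the big GPU".
SERVER = "http://gaming-pc.local:8188"

with open("rough_sketch.png", "rb") as f:
    payload = {
        "prompt": "a phoenix over a mountain landscape",
        "strength": 0.3,
        "image": base64.b64encode(f.read()).decode("ascii"),
    }

# The heavy lifting (diffusion inference) happens on the remote GPU box.
resp = requests.post(f"{SERVER}/generate", json=payload, timeout=300)
resp.raise_for_status()

with open("result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["image"]))
```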
Ahhhh that makes a lot more sense, thanks!
There is one app that runs the model on an iPhone; it’s called “Draw Things: AI Generation”. But you aren’t wrong that AI image generation usually needs a gaming PC, or at least a video card with a lot of VRAM to hold the model while it works.
I tried it myself and it is very interesting, but I was a little confused about which options to use. Can you give some examples? For instance, which model and strength did you use? How do you use the layers for the ControlNet? Thank you.
I watched some tutorials on YouTube.
For this image she used the Comics & Anime style in live mode at about 30% strength, increasing or decreasing the strength as needed. She painted filled-in shapes: filled triangles for the mountains, green and blue for the field and sky, and the general shape of a bird in orange for the phoenix.
Just painting outlines generally doesn’t work as well as using filled-in shapes. And it’s important that you describe what’s in the picture.
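For anyone curious what that “30% strength” roughly means under the hood: it’s essentially img2img denoising strength, i.e. how much the model is allowed to change your painting. This is not what the plugin runs internally (it drives ComfyUI), just a rough equivalent sketched with the `diffusers` library and an example checkpoint:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Example checkpoint; any Stable Diffusion model you have locally works the same way.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The kid's rough painting with filled-in shapes.
init_image = Image.open("rough_painting.png").convert("RGB").resize((768, 512))

result = pipe(
    # Describing what's in the picture matters just as much as the painting itself.
    prompt="an orange phoenix flying over triangular mountains, green field, blue sky, comic style",
    image=init_image,
    strength=0.3,        # roughly the plugin's "30% strength": how much the AI may alter the input
    guidance_scale=7.5,  # how closely to follow the prompt
).images[0]

result.save("enhanced.png")
```

Higher strength gives the model more freedom (and drifts further from the original shapes); lower strength stays closer to what was painted.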
We didn’t use the layers at all. If you can find some documentation on those, that would be appreciated; I don’t get them either.
I recorded her work. I’ll ask her if I may upload it.
This helped a lot thank you.