Strange. For reference, the arguments I use to start Stable Diffusion are:
>--ckpt pony_sfw_80k_safe_and_suggestive_500rating_plus-pruned.ckpt --lowvram --opt-split-attention
Other than that I have the standard settings (no scripts or face fixes or anything) and standard txt2img
I was using those exact settings except for that part. They just reduce the load on your computer and have no effect on the resulting image, right?
They should just reduce the memory used and not change what is produced. I just posted the full list of arguments in case I'm wrong on that. It was just strange that you didn't get the same image using same prompt and seed.
Was Restore Faces turned on?
No, none of the extra features are turned on on my end when I generated.
>>2179
>It was just strange that you didn't get the same image using same prompt and seed.
You see, these AIs no longer "mash" images together in the hope of producing a coherent pic.
They basically emulate human creativity.
Artists need external inputs as well. They consciously and unconsciously absorb bits and pieces from every other artwork they may have seen before. Sometimes it may even be apparently unrelated life experiences.
Regardless, when artists draw inspiration from other art sources, they are pretty much doing the same thing that AIs do.
It's impossible to trace back the source pics. What an AI makes is about as original as what any human could've conceived.
It makes sense you don't always get the same results, even when all things are equal.
>>2182
>It makes sense you don't always get the same results, even when all things are equal.
Running the same seed and prompt, I get the same image every time. Also, running other people's prompts and seeds (ones I've seen posted), I get the same image they got when using the same model they used.
I don't think the AI is updating its own model with data from the pictures it generates.
I hope it isn't. I don't want my AI to give itself dementia by using its garbled output as new input.
By now its garbled output probably outnumbers the images it was trained on.
Does anyone have any good images? I tried doing a cursory search on ponerpics and the first few pages were horrid dogshit.
The era of abstract merchants has reached a new golden age.
Merchants are being produced faster than the ADL could ever hope to flag them.
How do you make these?
I've got Stable Diffusion working, what's the secret?
Idk. I got them from Discord.
If you find out any methods, do share. I want to mass produce merchants.
That's not what I meant when I said I wanted racks on racks.
Racks on racks on racks on racks. Stacked rat stacks in hats on mats.
Pixiv has lost its collective mind recently (more than before).
It's impossible to use the site without being recommended pics of pregnant toddlers!
Oh fuck. Don't some models use data taken from pixiv or anime booru sites?
I got the new Everything V3 model.
It seems more varied than the previous smaller Waifu model I was using.
But it also loves to wash out the colours of anything I make with it, so I often have to re-saturate the desaturated output until it looks right.
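If the washed-out colours are consistent, that re-saturation step can be scripted instead of done by hand each time. A minimal sketch with Pillow; the 1.3 factor is just a guess to tune by eye, and boost_saturation is my own helper name, not anything from the webui:

```python
from PIL import Image, ImageEnhance

def boost_saturation(img: Image.Image, factor: float = 1.3) -> Image.Image:
    """Return a copy of img with colour saturation scaled by factor.

    factor > 1.0 increases saturation, factor == 1.0 leaves the image
    effectively unchanged, factor < 1.0 desaturates further.
    """
    return ImageEnhance.Color(img).enhance(factor)
```

Then something like boost_saturation(Image.open("output.png")).save("output_fixed.png") over a whole folder of generations, instead of fixing each one in an editor.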
Nice. I don't know your prompt, but adding "(unicorn)" to the prompt and "(text), devil" to the negative prompt might fix some of the devil horns and text. Love how AI art is evolving with better and better models.
Wow. It looks fantastic.
Is this using upscaling and inpainting, or...? If you know.
Speaking of AI evolution how do I update my installation of Stable Diffusion? I heard the new version has no limit on text input and is better at drawing hands.
The way I do it: I used git to get the initial code, and I run a git pull in the bat file used to start SD.
So initially do a
>git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
Then in the bat file I do
>...
>set COMMANDLINE_ARGS=--ckpt pony_sfw_80k_safe_and_suggestive_500rating_plus-pruned.ckpt --lowvram --opt-split-attention
>cd stable-diffusion-webui
>git pull
>call webui.bat
>cd ..
Wtf is he doing to edit the AI generated art and get new ones like that? https://m.youtube.com/watch?v=aFnGO74KsF0
I assumed he did, but I looked at the description and he is using the Krita plugin for Stable Diffusion, which runs img2img with a prompt. It can take basic input; you have to tell it what it should draw and weight how much it should resemble the input image. I am not fully sure how the weights should be set, but I think the Krita plugin is fairly easy to use.
First get Krita https://krita.org/
Then install the Krita plugin https://github.com/sddebz/stable-diffusion-krita-plugin
You can do the same in Stable Diffusion webui directly by going to the img2img tab.
I see that he also uses inpaint to fix areas.
I don't have full knowledge of how to use it effectively, as my computer takes forever to generate a single image, so I can't really play around with it.
Have you tried using that setting where your generator makes the image tiny and then scales it up?
I have not tried upscaling or any of the fancy options, as the biggest I can generate is 512x512 and even that is pushing my card to its limits. And it takes a couple of minutes to generate an image. So experimenting with settings is an exercise in patience, and in remembering what I did half an hour ago and whether my changes actually had an impact.
Damn, that sucks. Back when I had to make do with a piece of shit laptop that would take multiple minutes to boot up and do basic tasks like open the file explorer or open a webpage, I ran Fallout New Vegas at barely 10 frames a second even with the lowest graphics settings and as few mods as possible. The game was borderline unplayable especially when combat started so I had to rely on companions basically doing all combat for me. The lag didn't fuck their attacks up as much. Even the fucking word processor lagged with that thing. A word processor!
Does anyone actually believe GPU prices are dropping down anytime soon?
>>2288
My trashtop struggles to run Halo 2 on anything but low settings.
Android-x86 is game-changing tho. if your phone is old and shitty like mine.
I couldn't have possibly run the PC version of Honkai on that thing.
>>2292
>Does anyone actually believe GPU prices are dropping down anytime soon?
There was talk that GPU prices could drop when Ethereum made its change and no longer required top-end GPUs to mine coins (or something like that). But I assume the Manufacturers/Shops have gotten used to getting paid top dollars, so sadly it might take a good while before prices drop.
>Specs?
My GPU is a GTX 1060 3GB, so it is on the low end of being able to run it at all.
>>2293
>But I assume the Manufacturers/Shops have gotten used to getting paid top dollars
I guess so. I remember some Anons warning about this before.
>GTX 1060 3GB
>Low-end
Welp, I guess I shouldn't even bother.
>>2294
>Welp, I guess I shouldn't even bother.
I think the minimum recommended is a 4GB card, and 3GB is the lowest they have been able to run it on. Still, this AI can run on a consumer-grade GPU, unlike others that need 40GB+ cards, so it is much better than it could have been. But it's been a while since I read the FAQ, and they may have managed to lower the specs needed (looks like 2GB VRAM is the lowest now). People have been able to run it on CPU only, significantly slower, but it is possible.
>Nvidia guide: https://rentry.org/voldy
>CPU guide: https://rentry.org/cputard
>AMD guide: https://rentry.org/sdamd
(I think these are still the relevant guides)
Can my hp laptop run
Crysis Stable Diffusion?
Asking for a poorfag fren that might want to generate mares.
I think if you run the CPU version it could work. Not sure though, as I haven't tried the CPU one, but it might be worth checking out. If it has an Nvidia GPU with 2GB or more VRAM, it should run without problems according to the guide.
Isn't there a command you can add to the .bat file to make it generate images slower while using less memory at once?
Yes, the "--lowvram --opt-split-attention" in COMMANDLINE_ARGS are the command-line args for low-VRAM systems. The "--ckpt " is to select what model it uses (the pony model in this example).
>set COMMANDLINE_ARGS=--ckpt pony_sfw_80k_safe_and_suggestive_500rating_plus-pruned.ckpt --lowvram --opt-split-attention
Wait, do you mark a word or phrase with (((these))) or !!!these!!! to increase or decrease the importance the AI gives to those words in the request?
Yes, there are a few weighting and other tricks you can do in the prompt. Can't remember them all or find the guide that had them all listed, but I guess it is in here somewhere: https://rentry.org/sdgoldmine
(((increased importance))) - the more parentheses, the more importance it should be given
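For what it's worth, guides for the AUTOMATIC1111 webui usually describe the rule as: each pair of parentheses multiplies the tag's weight by about 1.1, and each pair of square brackets divides it. A rough sketch of that rule of thumb, not the webui's actual parser:

```python
# Rough sketch of nested-bracket emphasis as guides describe it for the
# AUTOMATIC1111 webui: each "(" layer multiplies a tag's weight by ~1.1
# and each "[" layer divides it. emphasis_weight is my own helper name,
# and this is only the rule of thumb, not the webui's real parsing code.
def emphasis_weight(tagged: str, base: float = 1.1) -> float:
    up = len(tagged) - len(tagged.lstrip("("))    # count leading "("
    down = len(tagged) - len(tagged.lstrip("["))  # count leading "["
    return round(base ** up / base ** down, 4)
```

So (((blonde))) would come out at roughly 1.1^3, about 1.33 times the normal weight, which matches the "more parentheses, more importance" idea without any of them being all-or-nothing.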
The AI reinterpreted two hoes into one waifu, but one of the hoes is fat enough to be two women.
Love the AI renderings. Looking forward to getting "colorized" videos in AI rendering. So much good to look forward to.
Has anyone by the way run any of the CWC comics through any of the AI's yet?
How are you AI generating porn?
Just updated my Stable Diffusion install; I see new buttons and features, but the limit on prompts is still there.
How do I get rid of that text limit?
If you want to get rid of text being generated in the image, add "text" to the negative prompt. At least it usually works for getting rid of mangled signatures and such in images.
There's a limit on the number of prompts I can give the AI at once. It's around 75 words. How do I break this limit on input text?
Not sure, but have you tested whether the limit is on the number of individual keywords or on prompt sections?
Like:
>word1, word2, word3, ...
>full sentence 1, full sentence 2, ...
...
>pony, flower, lake
>pony standing in field by a lake surrounded by flowers, stars, moon, (unicorn), by Peter Elson
(I have not run these examples so I have no idea what they will produce)
The only limit I can see is on the full input, but I think this is a limit in Stable Diffusion itself that the webui can't circumvent. But it could be that there is a limit on the number of "sections", i.e. keywords/key sentences.
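For anyone wondering where the ~75 number comes from: Stable Diffusion's CLIP text encoder takes a fixed 77 tokens, two of which are start/end markers, so roughly 75 are left for the prompt, and the limit is on total tokens, not on commas or sections. Tokens aren't exactly words (one word can become several tokens), but a crude check could look like this; roughly_over_limit is an illustrative helper, not a webui function:

```python
# Crude check of whether a prompt might blow CLIP's ~75-token budget.
# Real tokenisation is byte-pair encoding, which can split a single word
# into several tokens, so word count only approximates the token count.
# roughly_over_limit is an illustrative helper, not part of any webui.
def roughly_over_limit(prompt: str, budget: int = 75) -> bool:
    words = prompt.replace(",", " ").split()
    return len(words) > budget
```

However you split the prompt into keyword sections or full sentences, it's this total count that hits the limit.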
>request man in blue jacket with white flames
Wow, whoever made this AI model loves men.
Also, I updated my Stable Diffusion installation after wiping my PC. Now the text limit gets slightly higher whenever I go over it. It seems to be missing keywords now and then, unless I'm using the wrong terms for some things, like pony_ears.
Does putting Text in the negative prompt get rid of that watermark?
What about mixing the model with other models?
Art Subreddit Bans Guy Because His Work "Looks Like It was AI-Generated"
https://nichegamer.com/art-subreddit-bans-artist-style-ai/
Who are your favourite artists to use for art generation?
>type "horse vagina" over and over
I thought combining AI models would make it better at producing horse pussy, not worse. What the hell is this?
Looks like you need different generating terms or a differently trained model.
When I wrote
>blonde
there was no blonde hair. When I wrote
>(((((blonde)))))
to emphasize the tag above all others, I got that weird yellow pic.
I'm bored now. The art is coming out crappy. I don't think I'll do the rest of the mane six.
Are you using the Waifu Model or the Pony Model?
>>2321 (Pony model NSFW)
>>2167 (Pony Model SFW)
I don't remember if I combined those with the other models or not but I'm sure Everything V3 and an anime waifu model are in there.
The mixed model can't make good porn.
Just wondered, as the rendering looked a bit like the ones I got when I tried to generate MLP using the Waifu model.
>pic 1 Waifu Model (v1.2)
>pic 2 Pony Model
>twilight_sparkle, mlp, seductive
>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 3967284458, Size: 512x512
>Not the best seed I guess
And just to have it here too, here is the same prompt and seed in the NSFW Pony Model.
Oh my, are they holding hooves?! How lewd!
So I tried generating something with the (((((arcane style))))) tag and got this.
Turns out if you overweight Arcane's importance you get weird satanic-looking shit.
This is interesting. I wonder what it means for the industry as a whole, particularly for artists who only partly use AI tools in their art process.
I believe that from the very moment they use it, their human intellectual property goes up in flames.