>>2162
I'll need the prompt and model used for those pics. If I just said "(My Little Pony) garden statue" with the default model that came with the AI when I downloaded it, I doubt I'll get good results, but I'll give it a go.
By the way, what does "Tiling" do? Whenever I check that box it just makes the output look inferior. Is there some purpose to it I'm missing?
>>2163
The one I had limited success with (but takes 2-3 minutes per image, so I was not able to do too much) was:
>pony statue, majestic, alicorn, 4k hi-res, detailed, granite, cracked surface, weathered
>Steps: 20, Sampler: Euler a, CFG scale: 7, Seed: 174254891, Size: 512x512

>>2164
2-3 minutes per image? That must suck. How many images can you generate at once before your PC runs out of memory?
One time I left my room with an autoclicker generating more sets of images every minute. By the time I got back, I had a few thousand.
>>2166
>How many images can you generate at once before your PC runs out of memory?
I can only run one at a time, and at times I have to close other apps that use memory in order for it to run.
Are you using the Stable Diffusion model or Waifu model instead of the Pony model? The Pony model is good and you should try it:
https://huggingface.co/AstraliteHeart/pony-diffusion
https://mega.nz/file/ZT1xEKgC#Xxir5udMmU_mKaRZAbBkF247Yk7DqCr01V0pDzSlYI0

>>2167
Yep, that was the problem. It's turning out a lot better with the pony model.
>>2167
Pony model loaded.
Here are some new AI-generated pony statues; image sets 3 and 4 were made with the seed you posted instead of a random seed.
Had to split them up because trying to post the next 4 intact broke some kind of limit: "Request Entity Too Large".
>>2172
Strange that the first image in the set using the same seed and prompt didn't yield the same image I got, as it should have. When running batches with a given seed, each subsequent image should be generated with previous_seed+1.
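To spell out the rule I mean (a minimal Python sketch of how I understand batch seeding; this is an illustration, not the webui's actual code):
>base_seed = 174254891  # the seed posted above
>batch_count = 4
># image i of a batch gets seed base_seed + i, so the first image of a batch
># should always match a single generation run with the same seed
>seeds = [base_seed + i for i in range(batch_count)]
>print(seeds)  # [174254891, 174254892, 174254893, 174254894]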
>>2174
Maybe there's a second invisible seed somewhere. All the settings were accurately replicated.
>>2176
Strange. For reference, the arguments I use to start Stable Diffusion are:
>--ckpt pony_sfw_80k_safe_and_suggestive_500rating_plus-pruned.ckpt --lowvram --opt-split-attention
Other than that I have the standard settings (no scripts or face fixes or anything) and standard txt2img.
>>2177
>--lowvram --opt-split-attention
I was using those exact settings except for that part. They just reduce the load on your computer and have no effect on the resulting image, right?
>>2178
They should just reduce the memory used and not change what is produced. I just posted the full list of arguments in case I'm wrong on that. It was just strange that you didn't get the same image using the same prompt and seed.
>>2179
Was Restore Faces turned on?
>>2180
No, none of the extra features were turned on on my end when I generated.
>>2179
>It was just strange that you didn't get the same image using same prompt and seed.
You see, these AIs no longer "mash" images together with the hope of producing a coherent pic.
They basically emulate human creativity.
Artists need external inputs as well. They consciously and unconsciously absorb bits and pieces from every other artwork they may have seen before. Sometimes it may even be apparently unrelated life experiences.
Regardless, when artists draw inspiration from other art sources, they are pretty much doing the same thing that AIs do.
It's impossible to trace back the source pics. What AIs make is about as original as what any human could've conceived.
It makes sense you don't always get the same results, even when all things are equal.
>>2182
>It makes sense you don't always get the same results, even when all things are equal.
Running the same seed and prompt, I get the same image every time. Also, running other people's prompts and seeds (ones I've seen posted), I get the same image they got when using the same model they used.
>>2182
I don't think the AI is updating its own model with data from the pictures it generates.
I hope it isn't. I don't want my AI to give itself dementia by using its garbled output as new input.
By now its garbled output probably outnumbers the images it was trained on.
Does anyone have any good images? I tried doing a cursory search on ponerpics and the first few pages were horrid dogshit.
The era of abstract merchants has reached a new golden age.
Merchants are being produced faster than the ADL could ever hope to flag them.
>>2234
How do you make these?
I've got Stable Diffusion working, what's the secret?
>>2235
Idk. I got them from Discord.
If you find out any methods, do share. I want to mass produce merchants.
>>1884
That's not what I meant when I said I wanted racks on racks.
>>2241
Racks on racks on racks on racks. Stacked rat stacks in hats on mats.
Pixiv has lost its collective mind recently (more than before).
It's impossible to use the site without being recommended pics of pregnant toddlers!
>>2248
Oh fuck. Don't some models use data taken from Pixiv or anime booru sites?
I got the new Anything V3 model.
It seems more varied than the previous, smaller Waifu model I was using.
But it also loves to wash out the colours of anything I make with it, so I often have to re-saturate the output until it looks right.
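Rather than fixing each image by hand, a quick script can batch the colour boost (a rough Python/Pillow sketch; the 1.4 factor and the file names are placeholders to taste):
>from PIL import Image, ImageEnhance
>
># open a washed-out output and boost its colour saturation
>img = Image.open("output.png")
>fixed = ImageEnhance.Color(img).enhance(1.4)  # >1.0 = more saturated
>fixed.save("output_saturated.png")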
>>2273
Nice. I don't know your prompt, but adding "(unicorn)" to the prompt and "(text), devil" to the negative prompt might fix some of the devil horns and text. Love how AI art is evolving with better and better models.
>>2275
Wow. It looks fantastic.
Is this using upscaling and inpainting or...? If you know.
>>2274
Speaking of AI evolution, how do I update my installation of Stable Diffusion? I heard the new version has no limit on text input and is better at drawing hands.
>>2277
The way I do it: I used git to get the initial code, and I run a git update in the bat file used to start SD.
So initially do:
>git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
Then in the bat file I do:
>...
>set COMMANDLINE_ARGS=--ckpt pony_sfw_80k_safe_and_suggestive_500rating_plus-pruned.ckpt --lowvram --opt-split-attention
>
>cd stable-diffusion-webui
>git pull
>call webui.bat
>cd ..

Wtf is he doing to edit the AI-generated art and get new ones like that?
https://m.youtube.com/watch?v=aFnGO74KsF0

>>2281
I assumed he did, but I looked at the description and he is using the Krita plugin for Stable Diffusion, which runs img2img with a prompt. It can take basic input; you have to tell it what it should draw and weight how much it should resemble the input image. I am not fully sure how the weights should be set, but I think the Krita plugin is fairly easy to use.
First get Krita:
https://krita.org/
Then install the Krita plugin:
https://github.com/sddebz/stable-diffusion-krita-plugin
You can do the same in Stable Diffusion webui directly by going to the img2img tab.
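If you'd rather script it than click through the tab, the webui can also expose img2img over HTTP when launched with the --api flag (a rough Python sketch under that assumption; the prompt, file names, and denoising value are placeholders, and I'm assuming the default local port):
>import base64, requests
>
># read the input sketch and base64-encode it for the API
>with open("input_sketch.png", "rb") as f:
>    init_image = base64.b64encode(f.read()).decode()
>
>payload = {
>    "init_images": [init_image],
>    "prompt": "pony statue, granite, weathered",
>    "denoising_strength": 0.6,  # lower = stays closer to the input image
>    "steps": 20,
>}
>
>r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
>
># the response carries the result image as base64 too
>with open("output.png", "wb") as f:
>    f.write(base64.b64decode(r.json()["images"][0]))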
>>2281
>>2282
I see that he also uses inpaint to fix areas.
I don't have full knowledge of how to use it effectively, as my computer takes forever to generate a single image, so I can't really play around with it.
>>2283
Have you tried using that setting where your generator makes the image tiny and then scales it up?
>>2287
I have not tried upscaling or any of the fancy options, as the biggest I can generate is 512x512 and even that is pushing my card to its limits. And it takes a couple of minutes to generate an image. So experimenting with settings is an exercise in patience, and in remembering what I did half an hour ago and whether my changes actually had an impact.
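One thing that helps with the "remembering what I did half an hour ago" part: the webui normally embeds the generation parameters in each PNG it saves, so you can read them back later (a small Python/Pillow sketch, assuming the default metadata saving is on; the file name is a placeholder):
>from PIL import Image
>
># the webui stores prompt, seed, sampler, etc. in a "parameters" text chunk
>img = Image.open("00001-174254891.png")
>print(img.info.get("parameters"))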
>>2288
Damn, that sucks. Back when I had to make do with a piece of shit laptop that would take multiple minutes to boot up and do basic tasks like open the file explorer or a webpage, I ran Fallout New Vegas at barely 10 frames a second even with the lowest graphics settings and as few mods as possible. The game was borderline unplayable, especially when combat started, so I had to rely on companions basically doing all combat for me. The lag didn't fuck their attacks up as much. Even the fucking word processor lagged with that thing. A word processor!