Pasting Terry Pratchett's description of Foul Ole Ron into Bing.

foul ole ron.JPG


@OnlyOneKenobi's tea dragon prompt

teadragon.jpg
Looks more like a Kinkade if you ask me.
 
Comparing Bing Image Creator from 6 months ago to the recently released Bing Image Creator with DALL-E 3, using the exact same prompt.

Prompt: A Space Marine from Warhammer 40K, running through a field while holding a large futuristic chainsaw sword towards an unknown enemy in the distance.

6 months ago.

_c6d43feb-78be-4f96-a3f1-08e501222ba3.jpg 1696441811377.jpeg 1696441827177.jpeg 1696441839521.jpeg

Now w/ DALL-E 3

1696441859881.jpeg 1696441864016.jpeg 1696441867834.jpeg 1696441872773.jpeg
 
Definitely a big move towards normalising AI generation.
People on Twitter, at least the ones I follow, have taken to it in a big way over the last day or so, making memes and whatnot.

I mean, with results like this, a lot of people have no need to hassle with their own local setup.

Prompt: large dragon blowing fire onto a Chinese style village

_f49fa99c-70be-4b5c-83fc-1af24c5b9473.jpg
 
New Nvidia drivers out... they claim up to 2x faster performance in Stable Diffusion. Guess I'll find out soon enough.

Screenshot - 2023-10-17 19.28.15.png
 
With the new Nvidia drivers you can finally disable the fallback to system RAM, which is what causes the slow generations when you run out of VRAM. This means you will get "out of memory" errors when trying to generate large images, but at least not the super slow generation that has been happening since March or so.

Here's how to disable the fallback to system memory:

1. Run Stable Diffusion and note the path of python.exe in Task Manager.
2. Right-click the desktop and open NVIDIA Control Panel.
3. In 3D Settings, select Manage 3D settings.
4. Go to the Program Settings tab and click Add to customize.
5. Browse and select the Python executable file noted in Step 1, then click Open.
6. Click on CUDA - Sysmem Fallback Policy and set it to Prefer No Sysmem Fallback.
7. Click Apply to confirm.
8. Restart Stable Diffusion for the setting change to take effect.
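
From the Python side, the change is that running out of VRAM now fails fast instead of crawling. Here's a minimal sketch of handling that in a diffusers-style pipeline, assuming a recent PyTorch (generate_safely and the pipe variable are illustrative, not from any particular UI):

```python
import torch

def generate_safely(pipe, prompt, height=1024, width=1024):
    # With "Prefer No Sysmem Fallback" set, running out of VRAM raises a
    # hard error instead of silently spilling into (much slower) system RAM.
    try:
        return pipe(prompt, height=height, width=width).images[0]
    except torch.cuda.OutOfMemoryError:
        # Free cached blocks and retry at a smaller size.
        torch.cuda.empty_cache()
        return pipe(prompt, height=height // 2, width=width // 2).images[0]
```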

 
Image Prompt adapters (IP-Adapters) for ControlNet are seriously impressive and ridiculously powerful! They allow you to transfer style, or even re-create likenesses, using example images. In the image below, I took a still of Jack Nicholson in The Shining ("Here's Johnny") and applied the style of Heath Ledger's Joker, to visualise what Nicholson's Joker might have looked like if he'd had another crack at the role and been styled to the modern look of the character.

Loving these "What If" scenarios that Stable Diffusion realises for me.

The generated image:

Here's Joker.jpg

From

In the horror classic The Shining, the door Jack Nicholson ....jpg

and

5 unknown facts about Heath Ledgers Joker.jpg
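
For anyone who'd rather script this than use the WebUI, here's a rough sketch of the same idea in diffusers (needs a recent diffusers; the image file names are placeholders, and I'm assuming the h94/IP-Adapter weights from Hugging Face — the ControlNet extension does the equivalent in the UI):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the IP-Adapter and set how strongly the reference image is applied.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)

base = load_image("shining_still.jpg")   # pose/likeness source (placeholder)
style = load_image("ledger_joker.jpg")   # style reference (placeholder)

image = pipe(
    prompt="Jack Nicholson as a modern, grungy Joker",
    image=base,              # img2img keeps the original composition
    ip_adapter_image=style,  # IP-Adapter transfers the style
    strength=0.6,
).images[0]
image.save("heres_joker.png")
```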
 
Master @OnlyOneKenobi, I have now installed SD on my PC. I have gone through the basics of everything that was included in the installation, but I am a bit overwhelmed compared to using something like ChatGPT, Bing Image Creator or Midjourney, which were simple to use.

I do like that I will have a lot more control over the process. But where can I start (which model) to get a result similar to this:

img_20231119_065703-jpg.1620323
 
For that style, look at something like Dreamshaper, Neverending Dream, OpenJourney, or other models trained on Midjourney output. You could possibly even just use a good general, versatile model like Protogen or Deliberate, and then use a reference or IP-Adapter ControlNet to copy that particular style. There are also tons of LoRA models that might work for this. I haven't really been downloading many new models lately; the ones I have are mostly SD 1.5 based, they work really well for my purposes, and I can copy styles and subjects using ControlNet.
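
If you ever want those same models scripted rather than in the WebUI, here's a hedged sketch of loading a downloaded checkpoint plus a LoRA in diffusers (both file names are placeholders for whatever you grab from Civitai):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a single-file community checkpoint (e.g. a Dreamshaper .safetensors).
pipe = StableDiffusionPipeline.from_single_file(
    "models/dreamshaper_8.safetensors", torch_dtype=torch.float16
).to("cuda")

# Optionally layer a style LoRA on top.
pipe.load_lora_weights("models/fantasy_style_lora.safetensors")

image = pipe("a castle on a cliff at sunset, fantasy art").images[0]
image.save("test.png")
```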
 
You could use rundiffusion.com or Google Colab if you want fully customisable SD... Bing / DALL-E and Midjourney are cool, but having full control over AI image generation is pretty awesome.
I've taken a step back and decided to watch some tutorials to see what best practice is. Will see how it goes.
 
This is what you get when you combine a 1970s VW Beetle with an army tank.


00070-2827911758.png
 
What's the best route to get an offline image infill tool - the kind where you select a region and have it add whatever you prompted? I forget whether this is native to SD... haven't opened it in ages. Also, can offline tools handle working directly with higher-res images - 2160x1440 etc.?

Would prefer a simple install... cba messing around with Python again.
 
You're probably thinking of inpainting. High res? Not really, unless you have a beast of a GPU. Invoke AI is probably the best fit for your requirements, though it seems they're starting to introduce different tiers of the product, most of them paid. I haven't used it in a while, but it had a one-click installer that mostly worked without too much hassle.
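
If you'd rather skip a full UI, inpainting is also only a few lines in diffusers. A minimal sketch (the image and mask paths are placeholders; the dedicated SD 1.5 inpainting model works around 512px, so big images are usually inpainted on a crop around the masked region and pasted back):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("photo.png").resize((512, 512))  # source (placeholder)
mask = load_image("mask.png").resize((512, 512))    # white = repaint here

result = pipe(
    prompt="a red vintage car parked on the street",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```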

 