You're probably thinking of inpainting. High res? Not really, unless you have a beast of a GPU. Invoke AI is probably the best fit for your requirements, though they seem to be introducing different product tiers, most of them paid. I haven't used it in a while, but it had a one-click installer that mostly worked without too much hassle.

Yeah inpainting. I've spent the last 3 days in blender, my brain is a bit fried.
 
Which route should I take to get an offline image infill tool - the kind where you select a region and have it add whatever you prompted? I forget whether this is native to SD... haven't opened it in ages. Also, can offline tools handle working directly with higher-res images - 2160×1440 etc.?

Would prefer a simple install... cba messing around with Python again.
Photoshop to be honest. Best in the industry.

Stable Diffusion can work, but it's not a simple installation or workflow. It is free, though.
 
Then use Content-Aware Fill
Haha... also not really the same. Both have their merits, but SD is more flexible in what you can achieve. Photoshop will just plonk in whatever it thinks you want based on your prompt, while SD inpainting gives you influence over what gets generated to varying degrees; combine that with a prompt and your result will be closer to what you envisioned.
 
Whatever your AI desires.
 
In a nutshell (thanks, ChatGPT): Adobe Photoshop's Generative Fill is deeply integrated into the Photoshop environment, providing a seamless experience for adding, removing, or altering content with text prompts and layer-based editing. In contrast, Stable Diffusion's inpainting is more focused on altering specific areas of an image, leveraging AI models specially designed for this purpose, and providing a range of settings to control the extent and style of changes made to the image.
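If you do want to go the scripted route instead of a full UI, here's a minimal inpainting sketch using the Hugging Face diffusers library - assuming you have Python, a CUDA GPU and an inpainting checkpoint; the file names are placeholders:

```python
# Minimal SD inpainting sketch with diffusers (assumed setup; file names are placeholders).
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# The base image and a mask: white pixels mark the region to regenerate.
image = Image.open("photo.png").convert("RGB").resize((512, 512))
mask = Image.open("mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a wooden bench in the park",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
).images[0]
result.save("inpainted.png")
```

Note that this works at the model's native resolution (512 here), so for something like a 2160×1440 image you'd typically inpaint a crop around the masked region and paste the result back.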
 
I still can't get over how crazy powerful this tech is. As an exercise to see what a character might have looked like if it hadn't been necessary to cast a different actor in the role, I took the original Obi-Wan / Ben Kenobi as an example.

Using a publicity still of Ewan McGregor's Obi-Wan for the pose, I first used the IP Adapter to get the proportions of Alec Guinness' head, facial features and hairline correct, then face-swapped Alec Guinness' face into the shot (SD wouldn't draw the face correctly), then de-aged him using Stable Diffusion, and finally inpainted the full beard - and this is the result. Interesting that there is still some resemblance to McGregor despite the face being completely changed in the earlier steps. I guess the casting was pretty good!

YoungOGObiWan.png

The original images :


Ewan McGregor Will Star in an Obi-Wan Kenobi Star Wars Show  Time.jpg

Alec Guinness - Alchetron, The Free Social Encyclopedia.jpg
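For anyone curious, the IP Adapter step is also scriptable. A rough diffusers sketch of that step alone (the face swap and de-aging were separate steps; checkpoint and file names are just placeholders):

```python
# Rough sketch of the IP-Adapter likeness-transfer step in diffusers.
import torch
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the SD 1.5 IP-Adapter weights; the scale sets how strongly the
# reference image steers the generation.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.7)

reference = load_image("guinness_reference.jpg")  # placeholder reference photo
image = pipe(
    prompt="portrait of a jedi master in desert robes",
    ip_adapter_image=reference,
    num_inference_steps=30,
).images[0]
image.save("likeness.png")
```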
 
So I'm using chaiNNer for upscaling - it works OK, the interface is nice and the install is simple. Is there a similar local tool that can also improve image quality, do some denoising, etc.? I sometimes get a scan of a graphic I need to use in a render, and aside from the upscale it would really help to get it looking as clear as possible.

vb.png
 
You can just use the various upscalers within Stable Diffusion - "Extras" in Automatic1111. Not sure exactly where they're found in Invoke, but they're there somewhere. You can download specialised models for different purposes, and most likely use them in chaiNNer too if you prefer working there.

https://openmodeldb.info/

I typically use 8x_NMKD-Superscale_150000_G for photorealistic upscales and 4x-UltraSharp for concept art, digital paintings and CGI-style images. There are many other models, ranging from anime to photoreal to lineart.
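If you ever want to run these models outside a UI, a hypothetical minimal script using spandrel (the model-loading library chaiNNer itself uses) might look like this - exact API details may differ between versions:

```python
# Hypothetical sketch: running an ESRGAN-style upscaler model via spandrel.
import numpy as np
import torch
from PIL import Image
from spandrel import ModelLoader

model = ModelLoader().load_from_file("4x-UltraSharp.pth")  # any model from openmodeldb
model.to("cuda")
model.eval()

img = Image.open("input.png").convert("RGB")
# HWC uint8 -> NCHW float tensor in [0, 1]
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).float().div(255).unsqueeze(0).to("cuda")

with torch.no_grad():
    y = model(x)  # upscaled by the model's native factor (4x for this one)

out = (y.squeeze(0).permute(1, 2, 0).clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
Image.fromarray(out).save("output.png")
```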
 
Yeah, that works in chaiNNer, thanks. Didn't realise I could load other models.
 
Now combine it with Tiled VAE and ControlNet and you can go crazy on details.

Do you use Tiled VAE? It is powerful, but I find it difficult to use and somewhat unpredictable. As far as I have seen, each image requires a lot of experimentation to find the right settings to get it upscaled just right. Maybe I'm just doing it wrong, but I do have a few other upscaling tricks up my sleeve at least.
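For what it's worth, diffusers has a rough equivalent of the idea built in - a hedged sketch (the A1111 Tiled VAE extension exposes more knobs than this, and the checkpoint name is just an example):

```python
# Sketch of tiled VAE decoding in diffusers, to fit large images in limited VRAM.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_vae_tiling()         # decode the latent in tiles instead of all at once
pipe.enable_attention_slicing()  # extra memory saving on low-VRAM cards

image = pipe("a detailed landscape, golden hour", width=1536, height=1024).images[0]
image.save("big.png")
```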
 
Hit and miss, to be honest. But it helps with my 1070, which can't handle much.

Now try messing around with the LCM LoRA. It works nicely on cartoon characters.
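A hedged diffusers sketch of the LCM LoRA setup, in case anyone wants to try it scripted (these are the public latent-consistency releases; the base checkpoint is just an example):

```python
# Sketch of the LCM LoRA speed-up: swap in the LCM scheduler and LoRA,
# then sample with very few steps and low guidance.
import torch
from diffusers import LCMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

image = pipe(
    "cartoon character, flat colours, clean lines",
    num_inference_steps=4,   # LCM needs only 4-8 steps
    guidance_scale=1.5,      # keep guidance low with LCM
).images[0]
image.save("lcm.png")
```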
 
What works well (in my experience) is to use highres fix to upscale from the initial generation, then send your image to "Extras" and upscale it there 2x, then send it to inpaint and add more details via inpainting (inpaint masked areas only) in various sections of your image (use ControlNet to keep the shapes, colours and details as you want), and then send it back to Extras again if you want to upscale it even more.
 
I trained a new LoRA model for the "ghost in the wall" effect, which turned out pretty cool - very powerful, especially when used in combination with ControlNet and the various styles and models that local SD has available.

00006-836842230.jpg
00001-1740204264.jpg
 
There's an online service that uses some kind of AI to "upscale" images and re-imagine low-res or low-poly art as photorealistic images. Since they charge $40 per month for the facility, I figured I would check how easily it can be done with Automatic1111 and ControlNet - the good news is that it's pretty simple, actually.

This was my test :

LowPolyToPhoto.jpg

Each image will probably need some kind of tweaking, but for this particular one I used the IMG2IMG loopback script, set to 6 loops with denoising at 0.25 and the final denoising strength set to 0.6.
I used ControlNet OpenPose and SoftEdge to retain the composition, background details, and the general shape and pose of "Lara" in the image.

Anyway, I think it turned out pretty well, and I'm happy I don't need to shell out money on expensive subscriptions to get the same results as Magnific and similar services.
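For the curious, you can approximate the loopback script outside A1111 too. A hypothetical diffusers version of the passes described above - ramping denoise from 0.25 to 0.6 over 6 loops, with ControlNet omitted for brevity and file names as placeholders:

```python
# Hypothetical img2img loopback: feed each output back in, ramping the denoise.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("low_poly_input.png").convert("RGB")
prompt = "photorealistic adventurer, detailed skin, film photo"

loops = 6
for i in range(loops):
    strength = 0.25 + (0.6 - 0.25) * i / (loops - 1)  # 0.25 -> 0.6 across the loops
    image = pipe(prompt=prompt, image=image, strength=strength).images[0]

image.save("photoreal.png")
```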
 
ControlNet has been updated with a depth preprocessor that fixes hand issues quite well in my testing. If you have an image where the hands of a subject are messed up, send the image to inpainting, enable the depth ControlNet and select the depth_hand_refiner preprocessor. Then, in the inpainting window, mask the hand or hands of your subject, set the denoising strength high (0.7 and up) if the hand is severely messed up, and generate.
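A scripted equivalent, for reference - a sketch assuming you've already produced the refined hand depth map with the preprocessor separately (model names are the standard public releases; file names are placeholders):

```python
# Sketch: inpaint a broken hand while a depth ControlNet pins down the geometry.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from PIL import Image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11f1p_sd15_depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("portrait.png").convert("RGB")
mask = Image.open("hand_mask.png").convert("RGB")            # white over the broken hand
depth = Image.open("refined_hand_depth.png").convert("RGB")  # from the hand refiner step

fixed = pipe(
    prompt="a well-formed hand, detailed fingers",
    image=image,
    mask_image=mask,
    control_image=depth,
    strength=0.75,  # high denoise, as suggested for badly broken hands
).images[0]
fixed.save("fixed_hands.png")
```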

 
There's been another update to ControlNet to iron out issues with the new FaceID IP Adapter model, which is ridiculously powerful. It can copy the likeness of a face more accurately than the previous ControlNet IP Adapter models. It is a bit more complicated to use, but when done right, the results are amazing.

Be sure to download the LoRAs and the various IPAdapter models as seen in the link below.

 