ComfyUI upscale methods — collected Reddit tips

Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

Usually I use two of my workflows: "Latent upscale" and then denoising 0.5 (+ ControlNet, PatchModel). With all that in mind, for regular use I prefer the last method for realistic images. Try VAEDecode immediately after a latent upscale to see what I mean. If you want a fully latent upscale, make sure the second sampler after your latent upscale is above 0.5 denoise.

The Upscale Image node can be used to resize pixel images.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer. Each upscale adds details, but the bigger the upscale, the more blur, so do a few x2 or x0.5 steps.

With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed. Both of these are of similar speed.

Just curious if anyone knows of a workflow that could basically clean up/upscale screenshots from a late-'90s animation (like Escaflowne or Rurouni Kenshin).

Instead, I use a Tiled KSampler with 0.4 denoise and 768x768 tiles. The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do x4 and that's often too big to process), then send it back to VAE Encode and sample it again.

The 8K upscale stage takes up 70 GB of RAM during VAE decode and tile reassembly. I switched to ComfyUI not too long ago, but am falling more and more in love.

"ComfyUI: Ultimate Upscaler — Upscale any image from Stable Diffusion, MidJourney, or photo!" — YouTube. Then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.
This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description that applies in the area defined by the coordinates starting from x:0px, y:320px to x:768px, y:…

Actually no, I found his approach better for me. My ComfyUI workflow was created to solve that. Here's how you can do it: launch the ComfyUI Manager.

Hello, A1111 user here, trying to make a transition to ComfyUI, or at least to learn of ways to use both. I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

This is a series, and I have a feeling there is a method and a direction these tutorials are taking. Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle" tutorial and web app.

What are the pros and cons of using Kohya Deep Shrink over using two KSamplers to upscale? I find the Kohya method significantly slower, since the whole pass is now done at high res instead of only partially done at high res. With SD1.5 I was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image. But I probably wouldn't upscale by 4x at all if fidelity is important.

You either upscale in pixel space first and then do a low-denoise 2nd pass, or you upscale in latent space and do a high-denoise 2nd pass.

PLANET OF THE APES - Stable Diffusion Temporal Consistency.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\ComfyUI or something like that.
If you use Iterative Upscale, it might be better to approach it by adding noise, using techniques like noise injection or an unsampler hook. And when purely upscaling, the best upscaler is called LDSR.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press Queue.

Two options here: you can use the node "upscale by" with a fractional value (0.5 if you want to divide by 2) after upscaling by a model.

Tutorial 6 - Upscaling.

I've struggled with Hires. fix. Here it is, the method I was searching for: going to img2img and using ControlNet + the Ultimate SD Upscale script and the 4x-UltraSharp upscaler. This method consists of a few steps: decode the samples into an image, upscale the image using an upscaling model, encode the image back into the latent space, and perform the sampler pass.

There's a bunch of different ones on the market, but those are pretty much the only ones I ever use. No matter what, Upscayl is a speed demon in comparison.

More consistency, higher resolutions, and much longer videos too.

Started to use ComfyUI/SD locally a few days ago and I wanted to know how to get the best upscaling results. Ultimate SD Upscale is good and plays nice with lower-end GFX cards; SUPIR is great but very resource-intensive.

I was running some tests last night with SD1.5. The TL;DR version is this: it makes an image from your prompt without a LoRA, runs it through ControlNet, and uses that to make a new image with the LoRA.

May 5, 2024 — Hello, this is Hakana-dori. Last time I explained the clarity-upscale method using 'clarity-upscaler' for the A1111 and Forge versions; this time it's the ComfyUI version. 'clarity-upscaler' is not a single extension; it combines ControlNet, LoRA, and various other features working together.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. From the ComfyUI_examples, there are two different 2-pass (hires fix) methods: one is latent scaling, one is non-latent scaling. Now there's also a `PatchModelAddDownscale` node. This is with a 7950X CPU, v555 drivers, Python 3.9 and torch 2.4 nightly.
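The decode → model-upscale → re-encode → resample loop described above can be sketched in plain Python, tracking only image sizes. All function names here are illustrative stand-ins, not real ComfyUI APIs:

```python
def vae_decode(latent_size):
    # Stub: ComfyUI's VAE decodes a latent into pixels; SD latents are
    # 1/8 the pixel resolution per side. Sizes are (width, height) here.
    w, h = latent_size
    return (w * 8, h * 8)

def upscale_with_model(image_size, factor=2):
    # Stand-in for an ESRGAN-style upscale-model node.
    w, h = image_size
    return (w * factor, h * factor)

def vae_encode(image_size):
    # Stub: pixels back to latent space for the second sampler pass.
    w, h = image_size
    return (w // 8, h // 8)

def hires_pass(latent_size, denoise=0.3):
    # Stand-in for the low-denoise KSampler pass; resolution unchanged.
    return latent_size

def model_upscale_hires_fix(samples):
    image = vae_decode(samples)        # 1. decode samples into an image
    image = upscale_with_model(image)  # 2. upscale using an upscaling model
    latent = vae_encode(image)         # 3. encode back into latent space
    return hires_pass(latent)          # 4. perform the sampler pass

# 64x64 latent (a 512x512 image) -> 128x128 latent (a 1024x1024 image)
print(model_upscale_hires_fix((64, 64)))  # (128, 128)
```

The point of the stubs is the ordering: the denoise happens after re-encoding, which is why this route tolerates a much lower denoise than a latent-space stretch.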
I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

His previous tutorial using 1.5 was very basic, with some few tips and tricks, but I used that basic workflow and figured out myself how to add a LoRA, an upscale, and a bunch of other stuff using what I learned.

I've been wondering what methods people use to upscale all types of images, and which upscalers to use? So far I've been just using Latent (bicubic antialiased) for hires fix. Images are too blurry and lack details; it's like upscaling any regular image with some traditional method.

- Click on an EMPTY SPACE in your ComfyUI workflow… and Ctrl+V.

Also, both have a denoise value that drastically changes the result. The final 3rd stage (8K) is the most time-consuming. With this method, you can upscale the image while also preserving the style of the model. Look at this workflow.

New to ComfyUI, so not an expert. Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent-upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

In other UIs, one can upscale by any model (say, 4xSharp) and there is an additional control on how much that model will multiply (often a slider…). For upscaling with img2img, you first upscale/crop the source image (optionally using a dedicated scaling model like UltraSharp or something), convert it to latent, and then run the KSampler on it.

I liked the ability in MJ to choose an image from the batch and upscale just that image.
The goal of this tutorial is to give an overview of a method I'm working on to simplify the process of creating manga, or comics. And you may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy.

Sampling methods are entirely your own choice; some can have different effects when upscaling, because some are better at removing latent noise than others, and some produce artifacts.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. A lot of people are just discovering this technology and want to show off what they created. Belittling their efforts will get you banned. And above all, BE NICE.

Jan 5, 2024 — Click on Install Models in the ComfyUI Manager menu.

This is not the case. Results or 'outputs' can be stunning and awe-inspiring; other times they aren't the best quality and need refining by the usual suspects: Photoshop, Blender, 3DCoat, ZBrush, etc.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. My images are noisy (just like high-ISO noise) after using upscaling (the iterative upscale method). The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. Upscaled by the 4x UltraSharp upscaler. With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

The software creates a Load Image node automatically, with the copied image.

NICE DOGGY - Dusting off my method again, as it still seems to give me more control than AnimateDiff or Pika/Gen-2 etc.

I gave up on latent upscale. I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

From there you can use a 4x upscale model and run the sample again at low denoise if you want higher resolution. Upscale in smaller jumps: take 2 steps to reach double the resolution.
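The smaller-jumps idea above — reaching a 2x target in two equal multiplicative steps — can be sketched as a size schedule. The helper name and the multiple-of-8 rounding are my own assumptions, not from the thread:

```python
def upscale_schedule(width, height, target_factor=2.0, steps=2):
    """Split one big upscale into `steps` equal multiplicative jumps."""
    per_step = target_factor ** (1.0 / steps)  # sqrt(2) for 2 steps to 2x
    sizes = []
    w, h = float(width), float(height)
    for _ in range(steps):
        w *= per_step
        h *= per_step
        # Round each intermediate size to a multiple of 8 so it stays
        # friendly to SD's latent grid.
        sizes.append((round(w / 8) * 8, round(h / 8) * 8))
    return sizes

print(upscale_schedule(512, 512))  # [(728, 728), (1024, 1024)]
```

Each intermediate pass gets its own sample at low denoise, which is what keeps the composition intact compared with one 2x jump.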
This is the 'latent chooser' node - it works but is slightly unreliable. It will replicate the image's workflow and seed.

Latent upscale introduces noise, as I said in other posts here. Hires. fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing details) or even blurry and smeary.

I ask because after Kohya's Deep Shrink fix became available, I haven't done any upscaling at all in A1111 or Comfy. This is what A1111 also does under the hood; you just have to do it explicitly in ComfyUI.

I had seen a tutorial method a while back that would let you upscale your image by grid areas, potentially allowing you to specify the "desired grid size" on the output of an upscale and how many grids (rows and columns) you wanted. Would want to be able to change the input to a node and have that immediately take effect, without me needing to rerun the graph. Ideally, it would look like some sort of slider controlling the number of lines to increase by, depending on how many times the artwork was upscaled.

Hi guys, has anyone managed to implement Krea.ai or Magnific AI in ComfyUI?

Mar 22, 2024 — You have two different ways you can perform a "hires fix" natively in ComfyUI: Latent Upscale; Upscaling Model. You can download the workflows over on the Prompting Pixels website. The images above were all created with this method.

I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts. Side-by-side comparison with the original.

Oct 21, 2023 — Non-latent upscale method: 0.4 denoise and tiles of 768x768.
The method used for resizing.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch": put in the image numbers you want to upscale and rerun the workflow.

That should stop it being distorted; you can also switch the upscale method to bilinear, as that may work a bit better. It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time. I tried StableSR, Kohya Deep Shrink, and a bunch of other methods.

That's because latent upscale turns the base image into noise (blur). That's because of the model upscale. I would start here and compare different upscalers.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. After borrowing many ideas and learning ComfyUI. SDXL 1.0 Alpha + SDXL Refiner 1.0.

To upscale images using AI, see the Upscale Image Using Model node. That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. Thus far I've established my process, yielding impressive images. So you end up testing other workflows and methods quite frequently, or playing what could be a frustrating catch-up-with-the-Joneses game later.

This specific image is the result of repeated upscaling from 512 -> 1024 -> 2048 -> 3072 -> 4096, with the denoise strength stepped down from 1.0 to 0.4 on the denoiser, due to the fact that upscaling the latent basically grows a bunch of dark space between each pixel, unlike an image upscale, which adds more pixels.

Hi all! Does anyone know if there is a way to load a batch of images from my drive into Comfy for an image-to-image upscale? I have scoured the net but haven't found anything.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.
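The arithmetic behind "use a 4x model for a 2x result" is just a fractional resize afterwards. A tiny helper (the name is hypothetical, for illustration) makes it explicit:

```python
def downscale_ratio(model_factor, desired_factor):
    """Fraction to feed an 'upscale by'-style resize after an NxN upscale
    model so the net enlargement matches what you actually wanted."""
    return desired_factor / model_factor

# A 4x model when you only want 2x: resize by 0.5 afterwards.
print(downscale_ratio(4, 2))  # 0.5

# The thread's own example: 512px * 4 * 0.5 = 1024px.
print(int(512 * 4 * downscale_ratio(4, 2)))  # 1024
```

The claim in the thread is that the downscale also sharpens the slight blur a 4x model leaves, which is why the detour often beats a straight 2x model.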
I have a much lighter assembly, without detailers, that gives a better result if you compare your resulting image on comfyworkflows.com. I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

What is the method you prefer to upscale to 3x or (God forbid) 4x? I'm doing a lot of composites, and I need very high-quality results. Are there any other methods that achieve better/faster results?

When is ComfyUI's IS_CHANGED method called? I'm developing a custom node and wondering how often the IS_CHANGED method is called.

Edit: oh, and also I used an upscale method that scales it up incrementally in 3 different resolution steps; it works to keep the basic generated image shape and not add too much unneeded detail. I'm trying to find a way of upscaling the SD video up from its 1024x576.

I would probably switch it off of 'nearest-exact' and to a better upscaler model like UltraSharp or ESRGAN. Then use those with the Upscale Using Model node. Or DaVinci Resolve to edit.

Sure, it comes up with new details, which is fine, even beneficial for the 2nd pass in a t2i process, since the miniature 1st pass often has some issues due to imperfections.

The ImageScale node abstracts the complexity of image upscaling and cropping, providing a straightforward interface for modifying image dimensions according to user-defined parameters.

Grab the image from your file folder and drag it onto the ComfyUI window. If I feel I need to add detail, I'll do some image blend stuff and advanced samplers to inject the old face into the process.
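On the IS_CHANGED question: as far as I understand ComfyUI's custom-node convention (worth verifying against current ComfyUI source), it is evaluated when the prompt is queued, not continuously, and the node re-executes whenever the returned value differs from the previous run. A minimal hypothetical node illustrating the common hash-the-file pattern:

```python
import hashlib

class LoadTextFile:
    """Sketch of a ComfyUI custom node. The class and input names are
    illustrative, not from any real node pack."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"path": ("STRING", {"default": "notes.txt"})}}

    RETURN_TYPES = ("STRING",)
    FUNCTION = "load"
    CATEGORY = "utils"

    @classmethod
    def IS_CHANGED(cls, path):
        # Called at queue time. Hashing the file contents means editing
        # the file on disk triggers a re-run even though the widget
        # value ("path") itself did not change.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def load(self, path):
        with open(path, encoding="utf-8") as f:
            return (f.read(),)
```

Returning something that never repeats (many nodes return NaN for this) forces re-execution on every queue, at the cost of losing caching for everything downstream.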
Hello ComfyUI fam, I'm currently editing an animation and want to take the 1024x512 video-frame sequence output I have and add detail (using the same 1.5 model) during or after the upscale. Expanding on my temporal consistency method for a 30-second, 2048x4096-pixel total-override animation.

To combat it, you must increase the denoising value of any sampler you feed an upscale into; it's why you need at least 0.5 denoise. For example, if you start with a 512x512 empty latent image, then apply a 4x model and apply "upscale by" 0.5, you get a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Hi everyone, I've been using SD/ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling.

ATM I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs tell you. A 4x upscale then a downscale to x2 usually produces a better result! Edit: nvm, misread your question. If it's only one upscale-downscale, it's probably because the downscale sharpens the blur added by the x4 upscale and generally produces better results than just an x2.

To start enhancing image quality with ComfyUI, you'll first need to add the Ultimate SD Upscale custom node.

My other workflow is "Upscaling with model" and then denoising at a low value. Pixel upscale into a low-denoise 2nd sampler is not as clean as a latent upscale, but it stays true to the original image for the most part.

Jul 23, 2024 — The standard ESRGAN 4x is a good jack-of-all-trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?
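The rule of thumb repeated throughout this thread — high denoise after a latent upscale, low denoise after a pixel/model upscale — can be captured in a toy helper. The exact numbers are illustrative picks within the ranges commenters mention, not canonical values:

```python
def second_pass_denoise(upscale_kind):
    """Thread rule of thumb: latent upscales need a high-denoise second
    pass (0.5+) to clean up the stretched latent, while pixel/model
    upscales stay faithful with a low-denoise pass."""
    if upscale_kind == "latent":
        return 0.55   # 0.5+, or the blur/noise never resolves
    if upscale_kind == "pixel":
        return 0.25   # low denoise keeps the image true to the original
    raise ValueError(f"unknown upscale kind: {upscale_kind}")

print(second_pass_denoise("latent"))  # 0.55
print(second_pass_denoise("pixel"))   # 0.25
```

The trade-off both branches encode: latent + high denoise invents more detail but drifts from the first pass; pixel + low denoise stays faithful but adds less.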
Jan 22, 2024 — There are two kinds of upscalers for enlarging images: conventional interpolation upscalers (e.g. Lanczos) and AI upscalers (neural-network based, e.g. ESRGAN). ComfyUI can use both. For a workflow using an AI upscaler, the ComfyUI examples include an ESRGAN setup.

Under 4K: generate at base SDXL size with extras like character models or ControlNets -> face/hand/manual-area inpainting with differential diffusion -> UltraSharp 4x -> unsampler -> second KSampler with a mixture of inpaint and tile ControlNet (I found that using only the tile ControlNet blurs the image). Latent upscales require the second sampler to be set at over 0.5 denoise.

The problem with simply upscaling them is that they are kind of 'dirtier', so a simple upscale doesn't really clean them up around the lines, and the colors are a bit dimmer/darker. I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows. This is what I have so far (using the custom nodes to reduce the visual clutter).

- Image upscale is less detailed, but more faithful to the image you upscale.

I talk a bunch about some of the different upscale methods and show what I think is one of the better ones; I also explain how a LoRA can be used in a ComfyUI workflow. CNet strength 0.5, euler, sgm_uniform — or CNet strength 0.9. 2x upscale using Ultimate SD Upscale and the tile ControlNet. Thanks.

The bf16 VAE can't be paired with xformers right now, only with vanilla PyTorch — and not just regular PyTorch, it's the nightly build of PyTorch.

Search for "upscale" and click Install for the models you want. I haven't managed to reproduce this process.
The steps are as follows: start by installing the drivers or kernel listed (or newer) on the Installation page of IPEX linked above, for Windows and Linux, if needed. Choose your platform and method of install and follow the instructions.

It is indeed very resource-hungry. Latent quality is better, but the final image deviates significantly from the initial generation.

Installation is complicated and annoying to set up; most people would have to watch YouTube tutorials just to get A1111 installed properly. I haven't needed to.

Latent upscale is different from pixel upscale. You just have to use the node "upscale by" with the bicubic method and a fractional value. If I had chosen not to use the upscale-with-model step, I would have considered using the Ultimate SD Upscale method instead. But it does take longer to make.

- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

Upscale to 2x and 4x in multiple steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). Details and bad-hands LoRAs loaded. I use it with DreamShaperXL mostly and it works like a charm.

Has anyone managed to implement Krea.ai or Magnific AI in ComfyUI? I've seen the source code for Krea AI and I've seen that they use SD 1.5. Appreciate you just looking into it.

I have yet to find an upscaler that can outperform the Proteus model. Does it actually produce better results?

The ImageScale node is designed for resizing images to specific dimensions, offering a selection of upscale methods and the ability to crop the resized image.

I have applied optical flow to the sequence to smooth out the appearance, but this results in a loss of definition in every frame.

0.6 denoise and either CNet strength 0.5 or 0.9.
While I'd personally like to generate rough sketches that I can use as a frame of reference when later drawing, we will work on creating full images that you could use to create entire working pages.

Making this in ComfyUI: for now you can crop the image into parts with custom nodes like ImageCrop or ImageCrop+ (and btw, that is the same as SD Ultimate Upscale, right? However, by splitting it first you could theoretically handle this better, IDK).

5 - Injecting noise. Feed the 1.5x upscale back from the source image and upscale again to 2x; look up the latent upscale method as well — this performs a staggered upscale to your desired resolution in one workflow queue.

Ty, I will try this.

Search for "ultimate" in the search bar to find the Ultimate SD Upscale node. I just generate my base image at 2048x2048 or higher, and if I need to upscale the image, I run it through Topaz Video AI to 4K and up. In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler.

Along with the normal image preview, the other methods are: latent upscaled 2x; hires fix 2x (two-pass image); image upscaled 4x using the nearest-exact upscale method. I've played around with different upscale models in both applications, as well as settings, and my result is about the same. "Upscale by" 0.5 after a 4x model gets a 1024x1024 final image (512 * 4 * 0.5 = 1024).

Is there any node / possibility to run an RGBA image (preserving the alpha channel and its exact transparency) through iterative upscale methods? I tried Ultimate SD Upscale, but it has a 3-channel input and refuses the alpha channel; "VAE Encode (for Inpainting)" (which has a mask input) also refuses 4-channel input.

The t-shirt and face were created separately with the method and recombined. Looks like, yeah, the upscale method + the denoising strength + the final size you want — that tends to go to great lengths to clean up faces. 2x upscale using lineart ControlNet.
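A sketch of the tile-splitting idea behind Ultimate-SD-Upscale-style workflows: carve the image into overlapping tiles so the sampler can process each one separately and the overlaps blend away the seams. The defaults follow the 768x768 figure mentioned in this thread; the function is illustrative, not the node's actual code:

```python
def tile_boxes(width, height, tile=768, overlap=64):
    """Return (left, top, right, bottom) crop boxes covering the image
    with `overlap` pixels shared between neighbouring tiles."""
    boxes = []
    step = tile - overlap
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            # Edge tiles are clamped to the image border.
            right = min(left + tile, width)
            bottom = min(top + tile, height)
            boxes.append((left, top, right, bottom))
    return boxes

# A 2048x2048 image with 768px tiles and 64px overlap -> 3x3 = 9 tiles.
print(len(tile_boxes(2048, 2048)))  # 9
```

Too little overlap (or too low a denoise, as noted above) is what leaves visible seams; more overlap costs more compute per image.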
The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. SD Upscaler yields very unpredictable results (faces in the background).

***** Off-topic question ***** Does this method of linking no longer work?

That said, Upscayl is SIGNIFICANTLY faster for me.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

The upscale quality is mediocre, to say the least. Personally, in my opinion, your setup is heavily overloaded with stages that are incomprehensible to me.

Tutorial 7 - LoRA Usage.

I made a tiled sampling node for ComfyUI that I just wanted to briefly show off. This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc. In my case, for example, I make my own upscale method in ComfyUI. This process is generally fast, with no parameters to tweak.

Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

Greetings, community! As a newcomer to ComfyUI (though a seasoned A1111 user), I've been captivated by the potential of Comfy and have witnessed a significant surge in my workflow efficiency. Both are quick-and-dirty tutorials without too much rambling; no workflows included because of how basic they are.

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). Specifically, the padded image is sent to the ControlNet as pixels (as the "image" input), and the padded image is also sent, VAE-encoded, to the sampler as the latent image.
An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first-pass (low resolution) sample.

Jan 8, 2024 — A latent upscale is inherently lossy. The other methods will require a lot of your time. There are also "face detailer" workflows for faces specifically.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0. You can then use that image in whatever workflow you built.

The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes. Upscale x1.5 ~ x2 — no need for a model; it can be a cheap latent upscale. Sample again at denoise=0.5; you don't need that many steps. See how you like the results, and stop here if they're good enough.

A1111 is REALLY unstable compared to ComfyUI.

Edit to add: when using tiled upscalers with the right settings, you can get enhancements in details without using latent upscaling.

I need the bf16 VAE because I often use upscale mixed diff; with bf16, the VAE encodes/decodes much faster.

Go to the custom nodes installation section.