Hacker News

What kind of GPU are you running this on? My 3080 seems to take about 30 seconds per image with 50 passes. I'm wondering if I'm missing out on some optimizations. It could just be the quality of Nvidia's Linux drivers.


I'd recommend trying a different fork. Perhaps you're using the official one. I believe that one still "ramps up the system" on every image generation. Other repos do the ramp-up only once.
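The "ramp up" presumably means loading the model weights onto the GPU, which can take tens of seconds. The difference between paying that cost per image versus once can be illustrated with a generic caching sketch (the `Generator` class and its names here are hypothetical, not from any fork; with diffusers you'd get the same effect by calling `StableDiffusionPipeline.from_pretrained(...)` once, outside the generation loop):

```python
# Illustration: cache the expensive one-time setup so N image
# generations pay the loading cost only once.

class Generator:
    def __init__(self):
        self.setup_count = 0   # how many times we "ramped up"
        self._model = None

    def _load_model(self):
        # Stand-in for loading SD weights onto the GPU (slow).
        self.setup_count += 1
        self._model = object()

    def generate(self, prompt: str) -> str:
        if self._model is None:   # load once, reuse for every image
            self._load_model()
        return f"image for {prompt!r}"

gen = Generator()
for p in ["a cat", "a dog", "a fox"]:
    gen.generate(p)
print(gen.setup_count)  # 1, not 3
```

A fork that re-runs `_load_model` inside `generate` unconditionally would show exactly the per-image overhead described above.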


Yeah, this might be the problem. I was on the main fork, but I'm going to try switching over to this: https://github.com/hlky/stable-diffusion


That's weird; I have an RTX 3070 on Windows.

Are you using 512x512 images or larger ones?

Best workflow is to keep images close to 512x512, record the seed and then upscale.
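That workflow can be sketched with the diffusers library (the pipeline calls under `__main__` are an assumption about the reader's setup, not the fork discussed here; the filename helper is just one way to record the seed so the image can be regenerated later):

```python
# Sketch: generate at 512x512 with a fixed seed, and encode the seed
# in the filename so the exact image can be reproduced and upscaled.

def seeded_filename(prompt: str, seed: int) -> str:
    """Record the seed in the output filename so it isn't lost."""
    slug = "_".join(prompt.lower().split())[:40]
    return f"{slug}__seed{seed}.png"

if __name__ == "__main__":
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4").to("cuda")

    seed = 1234
    gen = torch.Generator("cuda").manual_seed(seed)
    prompt = "a lighthouse at dusk"
    image = pipe(prompt, width=512, height=512,
                 num_inference_steps=50, generator=gen).images[0]
    image.save(seeded_filename(prompt, seed))
```

Re-running with the same seed and prompt reproduces the same image, which is what makes the "generate small, then upscale the keepers" workflow practical.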


I'm using 512x768 as the default, but a quick test shows only a marginal difference in speed between the two. I'll have to give Windows a try to see if it's the driver holding me back. Do you have any tips or resources for upscaling the image after?


Currently this library can generate multiple images and upscale them through RealESRGAN: https://github.com/hlky/stable-diffusion

If you are not using this library already, give it a shot.

Also, I'm using Nvidia Studio drivers, though I'm not sure if that would make a difference.
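The Real-ESRGAN upscaling mentioned above can also be driven directly; a rough sketch, assuming the standalone `realesrgan` package and a downloaded `RealESRGAN_x4plus.pth` checkpoint (the constructor arguments follow that project's README and should be treated as approximate):

```python
# Sketch: 4x upscale of a 512x512 SD output with Real-ESRGAN.

def upscaled_size(width: int, height: int, scale: int = 4) -> tuple:
    """Target resolution after an integer upscale factor."""
    return (width * scale, height * scale)

if __name__ == "__main__":
    import cv2
    from basicsr.archs.rrdbnet_arch import RRDBNet
    from realesrgan import RealESRGANer

    # Architecture and weights for the x4plus model (per the README).
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                    num_block=23, num_grow_ch=32, scale=4)
    upsampler = RealESRGANer(scale=4,
                             model_path="RealESRGAN_x4plus.pth",
                             model=model, half=True)

    img = cv2.imread("sd_output_512.png", cv2.IMREAD_COLOR)
    out, _ = upsampler.enhance(img, outscale=4)
    cv2.imwrite("sd_output_2048.png", out)
    print(upscaled_size(512, 512))  # 512x512 -> 2048x2048
```

The hlky fork wires this up for you, so this is mainly useful if you want upscaling as a separate post-processing step.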


I've been using the main fork. This even has GFPGAN built in! Looks very useful, thanks.



