Stable Diffusion on YOUR Own PC

AI Generated Image of a Car

I plan on updating this with more details later on. However, I want to get these easy instructions posted now, because I currently have to pull up the GitHub repo and the instructions every time I run SD on my computer, and I would rather have everything in one easy-to-reach place.

*** UPDATE 2/10/2023 ***
I’ve followed this up with some notes here: https://spencerheckathorn.com/ai-info-dump/

I have a few tutorials in the works, including a video, and will update this post once completed.

************************

With a bit of adjustment, this should work with as little as 4 GB of VRAM and on GTX cards. However, I have an 8 GB RTX card, and this first version of the instructions is written for my RTX 2070 and anything newer or better.

You will need Python, and after a lot of fighting with the setup, I found it best to use Miniconda3 (Conda), which is what the author of the GitHub project used. Git (git-scm) helps but is not required. I will make videos for these and post them here once completed.
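If you're not sure what you already have installed, a quick check from a fresh command prompt looks like this:

conda --version
git --version
python --version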

The first piece you will need is the Stable Diffusion model checkpoint. I'm using 1.4, and checking today as I'm writing this, 1.4 is still the newest version. https://huggingface.co/CompVis/stable-diffusion#model-access

Next, you will need the project from GitHub. I'm using an optimized, easy-to-use fork that will work on lower-VRAM and GTX cards: https://github.com/basujindal/stable-diffusion. (Later, I will try https://github.com/AUTOMATIC1111/stable-diffusion-webui; I'm not using it yet, I just want to note it here so I don't forget it. Another one to try: https://github.com/invoke-ai/InvokeAI. Last, and supposedly the best, is https://github.com/JoePenna/Dreambooth-Stable-Diffusion, which has made a considerable splash recently.)

Download the project as a ZIP if you don't have Git installed; if you have Git, clone the project. After making my video, I will use screenshots to improve the instructions in this post. (Note: you can play with the --H and --W values to lower the memory requirement.)
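For example, with Git installed, grabbing the optimized fork is one command (the URL is the same one linked above):

git clone https://github.com/basujindal/stable-diffusion.git
cd stable-diffusion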

Sidenote: https://arxiv.org/abs/2207.12598 This is more information on what we are doing.

Sidenote: https://www.reddit.com/r/StableDiffusion/ Cool ideas and prompt help, or just post what you make that is cool.

If you get the error:

pip is configured with locations that require TLS/SSL, however the ssl module in Python is not available

Make sure to add the following to your path:

C:\Users\me\miniconda3
C:\Users\me\miniconda3\Scripts
C:\Users\me\miniconda3\Library\bin

Doing so should fix the issue.
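A quick sanity check is to open a new command prompt and confirm which Python is being picked up and that the ssl module loads:

where python
python -c "import ssl; print(ssl.OPENSSL_VERSION)"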

If you get any memory errors, try lowering your picture size.

For example, instead of --H 580 --W 580, try --H 480 --W 480.
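Putting that together, a lower-memory version of the Startup Guide command below might look like this (same flags as in the guide, just smaller dimensions and fewer samples per batch):

python optimizedSD/optimized_txt2img.py --prompt "happy animals, clipart" --H 480 --W 480 --seed 26 --n_iter 2 --n_samples 2 --ddim_steps 35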

I like to use PyCharm when I'm going to edit or write code. You can add the conda environment to your project in PyCharm, but for the most part I'm not doing that, because I'm just running SD from the command line and I'm not at the point of editing these projects. To my mind, the power is in the model and how you interact with it; editing the surrounding code will become handy later, but I'm still in the phase of seeing what I can do with the thing in the first place.

Supposedly you can also add yourself to the model: https://www.youtube.com/watch?v=WsDykBTjo20 (I haven't tried it yet; I'm putting it here so I will.)

You can find prompt help in a few places; the Stable Diffusion subreddit linked above is the one I like right now.

Startup Guide:

cd C:\your\project\location
conda activate ldm
python optimizedSD/optimized_txt2img.py --prompt "happy animals, clipart" --H 580 --W 580 --seed 26 --n_iter 2 --n_samples 5 --ddim_steps 35

--H and --W are the height and width of the image in pixels

--seed is the starting seed for the image(s)

--n_samples is how many images are generated per batch, and --n_iter is how many times that batch is repeated (so this example outputs 2 × 5 = 10 images)

--ddim_steps is how many denoising steps the sampler runs on the image. In this example, we start from seed 26 and refine the image over 35 steps.

Seed and ddim_steps have a decent impact on what you will see in your output.
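For example, keeping everything else the same and changing only the seed, or only the step count, is a cheap way to explore variations on the same prompt:

python optimizedSD/optimized_txt2img.py --prompt "happy animals, clipart" --H 580 --W 580 --seed 27 --n_iter 2 --n_samples 5 --ddim_steps 35
python optimizedSD/optimized_txt2img.py --prompt "happy animals, clipart" --H 580 --W 580 --seed 26 --n_iter 2 --n_samples 5 --ddim_steps 50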

What is coming next?

  • Step-by-Step written instructions and video instructions.
  • Audio AI (yes, I’m already playing with another AI)
  • Text generation AI
    • Been playing with this for a few years; there's lots to cover, but you can run GPT-2 on your PC

Sample outputs:

Notes for later

C:\Users\me\miniconda3

pip install -e C:\Users\me\PycharmProjects\stable-diffusion

cd PycharmProjects\stable-diffusion\venv\Scripts\

cd C:\stable-diffusion\stable-diffusion-main
conda env create -f environment.yaml
conda activate ldm
mkdir models\ldm\stable-diffusion-v1 
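Those four lines are the one-time setup from the project README: create the conda environment, activate it, and make the folder the checkpoint lives in. The 1.4 checkpoint downloaded from Hugging Face then gets copied into that folder; in my setup the fork looks for it as model.ckpt, so the copy looks something like this (the download location and filename here are assumptions, adjust them to wherever you saved the file):

copy C:\Users\me\Downloads\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt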



===


python optimizedSD/optimized_txt2img.py --prompt "Cyberpunk war-torn city, award-winning photojournalism, urban warfare, combat, lens flares, emotional, atmospheric." --H 580 --W 580 --seed 26 --n_iter 2 --n_samples 5 --ddim_steps 32
