How to Use DreamBooth to Fine-Tune Stable Diffusion (Colab)

Screenshot of images generated with the fine-tuned Stable Diffusion model
Prompt: borderlands style portrait of Sandman2022, intricate, highly detailed, digital painting, arstation, concept art, smooth, sharp focus, illustration

With Stable Diffusion DreamBooth, you can now generate AI art using your own training images.

For example, you can generate images with yourself or a loved one as a popular video game character, as a fantastical creature, or just about anything you can think of – you can generate a sketch or a painting of your pet as a dragon, or as the Emperor of Mankind.

You can also train your own styles and aesthetics like aetherpunk/magicpunk, or maybe people’s facial expressions like Zoolander’s Magnum (I haven’t tried this yet).

In this tutorial, we'll cover the basics of fine-tuning Stable Diffusion with DreamBooth to generate your own customized images using Google Colab, for free. After we've fine-tuned Stable Diffusion, we'll also test it out using the Stable Diffusion WebUI built into the same Google Colab notebook.

Stable Diffusion is one of the best AI art generators, and it has a free, open-source version that we'll be using in this tutorial.

Google Colab is a cloud service offered by Google, and it has a generous free tier. That's what we'll be using to fine-tune Stable Diffusion, so you don't need any powerful hardware of your own for this tutorial.

Quick Video Demo

This is a quick video of me fine-tuning Stable Diffusion with DreamBooth from start to finish. In this example, I'm fine-tuning it using 20 images of the Sandman from The Sandman (TV Series) and ~20 images of Aemond Targaryen from House of the Dragon. Ideally, you should use more images to get better results.

The whole process took about an hour on Google Colab with 1500 training steps, but I have sped up the video in two of the more time-consuming parts. If you leave it on default settings, I estimate it would take ~40 minutes in total. And I think you can shorten this time by ~20 minutes if you provide your own class images (more on this later).

tl;dr AI News Section Preview

Sidenote: AI art tools are developing so fast it’s hard to keep up.

We set up a newsletter called tl;dr AI News.

In this newsletter we distill the information that’s most valuable to you into a quick read to save you time. We cover the latest news and tutorials in the AI art world on a daily basis, so that you can stay up-to-date with the latest developments.

Check tl;dr AI News

Use DreamBooth to Fine-Tune Stable Diffusion in Google Colab

Prepare Images

Choosing Images

When choosing images, it’s recommended to keep the following in mind to get the best results:

  1. Upload a variety of images of your subject. If you’re uploading images of a person, try something like 70% close-ups, 20% from the chest up, 10% full body, so Stable Diffusion also gets some idea of the rest of the subject and not only the face.
  2. Try to change things up as much as possible in each picture. This means:
    • Varying the body pose
    • Taking pictures on different days, in different lighting conditions, and with different backgrounds
    • Showing a variety of expressions and emotions
  3. When generating new images, whatever you capture will be over-represented. For example, if you take multiple pictures with the same green field behind you, it’s likely that the generated images of you will also contain the green field, even if you want a dystopic background. This can apply to anything, like jewelry, clothes, or even people in the background. If you want to avoid seeing that element in your generated image, make sure not to repeat it in every shot. On the other hand, if you want it in the generated images, make sure it’s in your pictures more often.
  4. It’s recommended that you provide ~50 images of what you’d like to train Stable Diffusion on to get great results. However, I’ve only used 20-30 so far, and the results are pretty good. If you’re just starting out and want to test it out, I think 20-30 images should be good enough for now, and you can get 50 images after you’ve seen it work.

Resize & Crop to 512 x 512px

Once you’ve chosen your images, you should prepare them.

First, we need to resize and crop our images to be 512 x 512px. We can easily do this using the website https://birme.net.

To do this, just:

  1. Visit the website
  2. Upload your images
  3. Set your dimensions to 512 x 512px
  4. Adjust the cropping area to center your subject
  5. Click on Save as Zip to download the archive.
  6. You can then unzip it on your computer, and we’ll use them a bit later.
Birme.net - Resize Images
Resizing Images using Birme.net
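If you'd rather do the resizing and cropping locally instead of using birme.net, here's a minimal Python sketch using the Pillow library. The folder names are assumptions – adjust them to your setup.

from pathlib import Path
from PIL import Image, ImageOps

SRC = Path("originals")   # hypothetical folder with your original photos
DST = Path("resized")     # hypothetical output folder
DST.mkdir(exist_ok=True)

for path in SRC.iterdir():
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    img = Image.open(path).convert("RGB")
    # ImageOps.fit resizes and center-crops to exactly 512 x 512 px
    img = ImageOps.fit(img, (512, 512), Image.LANCZOS)
    img.save(DST / f"{path.stem}.jpg", quality=95)

Note that this crops around the center of each image, so double-check the results if your subject is off-center.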

Renaming Your Images

We’ll also want to rename our images to contain the subject’s name:

  1. Firstly, the subject name should be one unique/random/unknown keyword. This is because Stable Diffusion already has some knowledge of The Sandman from sources other than the version played by Tom Sturridge, and we don't want it to get confused and produce a mix of interpretations of The Sandman. As such, I'll call it Sandman2022 to make sure it's unique.
  2. Rename your images to subject (1), subject (2) … subject (30). This is because, using this method, you can train multiple subjects at once. If you want to fine-tune Stable Diffusion with Sandman, your friend Kevin, and your cat, you can prepare images for each of them. For the Sandman you'd have Sandman2022 (1), Sandman2022 (2) … Sandman2022 (30), for Kevin you'd have KevinKevinson2022 (1), KevinKevinson2022 (2) … KevinKevinson2022 (30), and for your cat you'd have DexterTheCat (1), DexterTheCat (2) … DexterTheCat (30).

Here's me renaming my images for Sandman2022 in bulk on Windows. Just select them all, right-click one of them, click Rename, type the name you want, and then click anywhere else to finish. The rest of the selected files will be renamed to match. (If you prefer to script this step, see the sketch below.)

Bulk Renaming Images
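If you'd rather script the renaming than do it in Windows Explorer, here's a small Python sketch that produces the same Subject (1), Subject (2) … pattern. The folder name and keyword are assumptions – adjust them to your setup.

from pathlib import Path

folder = Path("resized")    # hypothetical folder with your 512 x 512 px images
subject = "Sandman2022"     # your unique subject keyword

images = sorted(p for p in folder.iterdir()
                if p.suffix.lower() in {".jpg", ".jpeg", ".png"})
for i, path in enumerate(images, start=1):
    # e.g. "Sandman2022 (1).jpg", "Sandman2022 (2).jpg", ...
    path.rename(folder / f"{subject} ({i}){path.suffix.lower()}")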

When it's time to upload my images to DreamBooth, I'll want to train it for Sandman2022 and AemondHoD, and this is what my images will look like:

Preview in Windows of the images of Aemond and Sandman that will be used for training

Open Fast Stable Diffusion DreamBooth Notebook in Google Colab

Next we’ll open the Fast Stable Diffusion DreamBooth Colab notebook: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb

Fast DreamBooth Notebook Preview

Enable GPU

Before running the notebook, we'll first have to make sure Google Colab is using a GPU. This is because GPUs can process much more data in parallel than CPUs, which lets us train machine learning models much faster.

To do this:

  1. In the menu go to Runtime > Change runtime type.
    Runtime > Change runtime type
  2. A small popup will appear. Under Hardware accelerator make sure you have selected GPU. Click Save when you’re done.
    Hardware Accelerator > GPU
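If you want to double-check that the GPU is actually active, you can run a quick sanity check in any cell. This is just an optional check, not part of the notebook:

import torch

# Should print True and the GPU name (e.g. "Tesla T4") if the runtime is set up correctly
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else "No GPU found")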

Run First Cell to Connect Google Drive

By running the first cell, we'll connect our notebook to Google Drive so we can save all of our files in it – this includes the Stable Diffusion DreamBooth files, our fine-tuned models, and our generated images.

After running the first cell, we'll see a popup asking us if we really want to Connect to Google Drive.

After we click it, we'll see another popup where we can select the account we want to connect with and grant Google Colab some permissions to our Google Drive.

Mount Google Drive
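For reference, the mounting step boils down to roughly the following. This is a sketch of the mechanism, not the notebook's exact code:

from google.colab import drive

# Mounts your Google Drive at /content/gdrive, so the notebook can read and
# write files under /content/gdrive/MyDrive
drive.mount('/content/gdrive')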

Run Second Cell to Set up The Environment

Next just run the second cell. There’s nothing for us to do there except wait for it to finish.

Screenshot of the "Setting Up the Environment" Cell

Input Hugging Face Token and Run Third Cell to Download Stable Diffusion

Next we'll need to input our Hugging Face token. To do this we'll need a huggingface.co account. Hugging Face is an open-source provider of machine learning technology and a platform that allows developers to share and use each other's AI models.

If you don’t already have an account you can easily sign up at https://huggingface.co/join.

After signing up we’ll need to create a new token, which is like a password, and then input it into the Huggingface_Token input field.

  1. To do this, in your Hugging Face account, click on your profile picture in the top right > Settings, and then in the left sidebar click Access Tokens, or just click here https://huggingface.co/settings/tokens.
  2. Next click on New token. A popup will appear; give the token any name you want (it's just for your own reference) and click Generate a token.
    Generate a New Token
  3. If you haven’t already, we also need to accept some terms to get access to the Stable Diffusion repository.
    • To do this visit the repository URL here https://huggingface.co/runwayml/stable-diffusion-v1-5. You’ll see some terms that you have to accept and click Access repository. Those terms allow the authors access to our email and username.
      Allow Access to Stable Diffusion v1.5
  4. Now we can input our token in the Huggingface_Token field and run the cell to download Stable Diffusion.
    Input Hugging Face Token to Download Model or Input Path to Existing Model

    Optional Notes:

    • Path_to_HuggingFace: If you want to load and train on top of a different Hugging Face model than the default one, you can provide the path to it here. For example, if you want to train Stable Diffusion to generate pictures of your face in an Elden Ring style, you could start from this already fine-tuned model: https://huggingface.co/nitrosocke/elden-ring-diffusion. The path you should provide is what comes after huggingface.co – in this case, nitrosocke/elden-ring-diffusion.
    • CKPT_Path or CKPT_Link: If you already have an existing Stable Diffusion model that you'd like to fine-tune, you can provide the path to it in CKPT_Path instead of the Hugging Face token. Alternatively, if your model is available as a link – whether it's any online .ckpt file or a shareable Google Drive link – you can input it in the CKPT_Link field.
    • Compatibility_Mode: Some models that aren’t the official Stable Diffusion model may have some incompatibilities and return some errors. If this happens then you can check Compatibility_Mode which may fix the issue.
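For context, the token is what allows libraries like diffusers to download the gated Stable Diffusion weights on your behalf. A minimal sketch of that mechanism (not the notebook's exact code) looks roughly like this:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    use_auth_token="hf_xxx",  # your Hugging Face token; the READ role is enough for downloading
)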

Setting Up Dreambooth

We can now get to setting up DreamBooth.

Here we'll input our Session_Name. This will be the name of the trained model that we'll save, and it's also what you'll enter later if you want to load a previous session and fine-tune it further. It can be anything you want.

Important: Don’t use spaces in the session name. Instead, use _ or -.

Run the cell after you input the session name.

Create/Load Session
Create/Load Session Cell

Notes:

  • Session_Name: This will be the name of your session and of your final model. You can name it anything. If you provide a name that doesn't exist, a new session will be created; if you use the name of a session that already exists in your Google Drive (under My Drive > Fast-DreamBooth > Sessions), it will ask you whether you want to overwrite it or resume training it.
  • Session_Link_optional: Instead of providing the Session_Name, you can provide the path to the session. For example, the path to mine would be /content/gdrive/MyDrive/Fast-Dreambooth/Sessions/Aemond_Sandman.
  • Contains_faces: This setting tells the notebook whether your subject contains faces (for example, a person) or not, which helps it train Stable Diffusion more accurately.
    Contains Faces

Upload Your Instance Images

Next you’ll see the Instance Images cell. This is where we upload our images.

If you run it, a Choose Files button will appear, allowing you to upload your images.

Upload Instance Images

Additional Options:

  • Remove_existing_instance_images: If you've already uploaded some images but want to remove them and upload a different set when you run the cell again, check this box. If you want to keep the previously uploaded images, uncheck it.
  • IMAGES_FOLDER_OPTIONAL: If you have a folder on your Google Drive that already contains your images, then just provide the path to it and then run the cell, instead of uploading the images from your computer.
  • Crop_images: Check this if you haven't already cropped the images yourself. They'll be cropped to squares, and you can set the crop size. It's set to 512 by default because 512 x 512 px is the image size Stable Diffusion usually works with.

Start DreamBooth

Finally we can run DreamBooth. We have a few configurations here.

The Training_Steps are what we care most about.

  • Training_Steps: The most important setting here. Set it to the total number of images you've uploaded, multiplied by 100 (see the small calculation sketch after this list). I uploaded 24 images of Aemond and 23 images of the Sandman, so that's a total of 47 images, and 47 * 100 = 4700 steps. If the model isn't good enough, you can pick up where you left off and train it further.
Start DreamBooth Configurations

You most likely won’t have to touch these options if this is your first time, but we’ll still explain them just in case:

  • Resume_Training: You’ll check this box if you want to continue training the model after you’ve tested it.
  • Seed: A seed is a number used to initialize the random generation, which makes results repeatable: if you share a seed (along with the same settings) with someone else, they can generate the same results, and you can also re-run an image with different settings to see how they change the output. You can set it to anything unless you have a specific reason to use a certain number, such as reproducing someone else's results.
  • fp16: Disabling this will roughly double the training time but may produce slightly better results. fp16 is a way of representing numbers that uses less memory than the standard representation, so calculations run faster but with less precision. Unless you're OK with waiting longer for a possible improvement, leave this checked.
  • Enable_text_encoder_training and Train_text_encoder_for: These settings control training of the text encoder. I don't know much about it beyond the instructions already in the notebook, but you can read about how training the text encoder improves results in this article on Hugging Face. If you're training an aesthetic style, set it to 10-20; if you're training a person, set it to 50-70. The higher it is, the more accurate the results will be, but also the less creative.
  • Save_Checkpoint_Every_n_Steps: This option lets you save the fine-tuned model at different points during training. It's useful if you suspect the number of steps you've set is too high and will overtrain Stable Diffusion. Overtraining means the model ends up reproducing almost exactly the pictures you gave it instead of generating original work.
    • Save_Checkpoint_Every: the notebook will save a checkpoint every X steps. If you set it to 500, it will save at 500, then 1000, then 1500, and so on.
    • Start_saving_from_step: the minimum number of steps before DreamBooth starts saving checkpoints.
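Here's a tiny illustration of the arithmetic behind Training_Steps and the checkpoint-saving options above (just an illustration, not part of the notebook):

num_images = 47                     # e.g. 24 Aemond images + 23 Sandman images
training_steps = num_images * 100   # -> 4700

save_every = 500                    # Save_Checkpoint_Every
start_saving_from = 1500            # Start_saving_from_step
checkpoint_steps = [s for s in range(save_every, training_steps + 1, save_every)
                    if s >= start_saving_from]
print(training_steps, checkpoint_steps)   # 4700 [1500, 2000, 2500, ..., 4500]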

Finally, you can run the cell when you’re done with the options.

This can take a while. In my case, 4700 steps took about 1 hour and 10 minutes on Google Colab's free tier, using an Nvidia Tesla T4 GPU.

Start Training Dreambooth

Where Your New Model is Stored

When it’s done, you should find your model in your Google Drive. For example here’s where Aemond_Sandman.ckpt was saved with default output folder settings. This should be My Drive > Fast-DreamBooth > Sessions > Your_Session_Name.

Default Save Directory in Google Drive

Test the Trained Model (with Stable Diffusion WebUI by AUTOMATIC1111)

After the training cell has finished running, we can test our new fine-tuned Stable Diffusion model.

This notebook comes with Stable Diffusion WebUI by AUTOMATIC1111, which is the most popular implementation of Stable Diffusion, and offers us a very convenient web user interface.

We have a few options:

  1. If you have just fine-tuned Stable Diffusion for the first time (this is us, most likely), and want to test your newly created model, then just run the Test the trained model cell. No need to fill out anything.
  2. Update_repo: This updates your installation of Stable Diffusion WebUI by AUTOMATIC1111 in case you have installed it previously. I usually leave this checked to make sure I'm running the latest version.
  3. If you have previously fine-tuned Stable Diffusion using this notebook, then insert the INSTANCE_NAME. For example, in my case it's Sandman2022.
  4. If you have a model you want to load that’s in some folder in your Google Drive, then check Use_Custom_Path and after you run the cell, you’ll see a field to provide the path to your model.
  5. You can leave Use_Gradio_Server unchecked. This option determines how the link for accessing the Stable Diffusion WebUI is generated. When unchecked, it uses a service called localtunnel to generate a URL; when checked, it uses the servers of Gradio.app, the software used to create the web UI. Both options are available in case one of them doesn't work.

When you run the cell, it will take about 5 minutes for Stable Diffusion WebUI to be ready to use. When it’s done, you’ll see some URLs like https://fancy-spies-punch-34-150-175-108.loca.lt when leaving Use_Gradio_Server unchecked, and https://xyz.gradio.app when it’s checked.

Click the URL, and the user interface will open in a new tab, where you can start generating images right away.

Generate Images
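The WebUI is the easiest way to generate images, but if you later want to script generation instead – for example from a copy of your model uploaded to Hugging Face in the step below, assuming it's stored in the diffusers format – a rough sketch would look like this. The repository name is just the example used in this article:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ByteXD/aemondhod-sandman2022",   # example repository from this tutorial
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("borderlands style portrait of Sandman2022, highly detailed, digital painting").images[0]
image.save("sandman2022.png")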

Where Generated Images Are Stored

Images are stored by default in your Google Drive in My Drive > sd > stable-diffusion > outputs > txt2img-images.

Where Generated Images Are Stored

Upload Your Trained Model to Hugging Face

You can also upload your trained model to Hugging Face, either to the public library or privately to your own account. You can change it to public later on.

To do this, you'll need a Hugging Face token that has the WRITE role. Simply create a new token and set its role to WRITE.

Hugging Face Token Write Permissions

In the Upload The Trained Model to Hugging Face cell, you’ll have the following:

  1. name_of_your_concept: This will be the name of the model you upload, and it will show up in your URL as well. For example, https://huggingface.co/ByteXD/aemondhod-sandman2022.
  2. Save_concept_to: You can select between your profile (by default, it will be set to private) or the public library.
  3. hf_token_write: This is your Hugging Face token. Make sure its role is set to WRITE.

Run the cell when you’re done. It will take ~10-15 minutes to finish uploading. When it’s done, it will output a link to where you can access your newly uploaded model.

Upload Model to Hugging Face 2
Finished Uploading the Model to Hugging Face

This is what it looks like: https://huggingface.co/ByteXD/aemondhod-sandman2022. To make it publicly accessible, go to the model's Settings section and click Make this model public.

Aemond Sandman Model Page
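If you ever need to upload a checkpoint outside the notebook, a minimal sketch with the huggingface_hub library looks roughly like this. The repository name and file path are assumptions:

from huggingface_hub import HfApi, create_repo

token = "hf_xxx"                                   # a token with the WRITE role
repo_id = "your-username/aemondhod-sandman2022"    # hypothetical repository name

create_repo(repo_id, token=token, private=True, exist_ok=True)
HfApi().upload_file(
    path_or_fileobj="Aemond_Sandman.ckpt",
    path_in_repo="Aemond_Sandman.ckpt",
    repo_id=repo_id,
    token=token,
)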

FAQ

What is a .ckpt file?

The .ckpt file extension is commonly used for checkpoint files, and the file is also referred to as the weights of the model. Although not exactly accurate, we can think of it as the model file. Checkpoint files are used to save the state of a program or process at a particular point in time.
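If you're curious, you can peek inside a checkpoint with a few lines of Python (just an illustration; the file name is the example session from this article):

import torch

ckpt = torch.load("Aemond_Sandman.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)   # SD checkpoints usually wrap the weights in "state_dict"
print(len(state_dict), "weight tensors")
print(next(iter(state_dict)))               # name of one of the weight tensors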

Troubleshooting

In this section, we’ll address some common errors.

ModuleNotFoundError: No module named ‘modules.hypernetworks’

As of this writing, this error should already be fixed. If you're still encountering it, we'll want to do a clean run of the DreamBooth notebook. To do this:

  1. Delete your sd folder in your Google Drive, located at My Drive > sd.
    • Important – Back Up Previously Generated Images: If you previously generated images, back them up. They are located in My Drive > sd > stable-diffusion > outputs. They’ll be deleted if you don’t back them up.
  2. Then make sure you’re running the latest Colab notebook (in case you were running one saved to your Google Drive) https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb.

That’s it. Now it should work. We just had to do a fresh run of the DreamBooth notebook without the previously stored files.

Next, if you were interrupted before getting to train your model, you can continue with the instructions below.

ImportError: cannot import name ‘VectorQuantizer2’

To fix this:

  1. Run the following command in any cell:
    !pip install taming-transformers-rom1504
  2. Then restart the runtime by going in the menu to Runtime > Restart runtime.
  3. Now, if you run everything again, it should work.

Next, if you have already fine-tuned your model, to get back to testing it quickly, follow the instructions below.

If You Just Trained a Model but Didn’t Get to Test It Because of an Error

If you got an error in the last cell (Test the trained model), fixed it, and restarted the notebook, you don't have to go through the training again.

  1. Just run the first two cells (Connect to Google Drive and Setting up the environment)
  2. After that's done, go to the Test the trained model cell, insert your INSTANCE_NAME from earlier (Sandman2022 in my case) or check Use_Custom_Path (if your model is stored somewhere other than the default location in My Drive), and run the cell. When you trained the model earlier, it was saved to your Google Drive, so the notebook will simply load your fine-tuned Stable Diffusion model.

Conclusion

In this tutorial, we covered how to fine-tune Stable Diffusion using DreamBooth via Google Colab for free to generate our own unique image styles. We hope this tutorial helped you break the ice in fine-tuning Stable Diffusion.

If you encounter any issues or have any questions, please feel free to leave a comment, and we’ll get back to you as soon as possible.

