How to run Dreambooth super fast with Brev
In this guide, we show you how easy it is to launch and run Dreambooth on GPUs with Brev. Everything is pre-configured, so there's no setup hassle!
We support both the ShivamShrirao Dreambooth repo and the Joe Penna Dreambooth repo in this guide. If you're unsure which to choose, just stick with the ShivamShrirao implementation.
And if you'd rather not read, we've also made a YouTube video for this.
What is Dreambooth?
DreamBooth was originally developed by Google as a way to fine-tune text-to-image models. The incredible ML community then found a way to use its techniques to fine-tune Stable Diffusion and generate samples of any subject. Here is one we generated:

ShivamShrirao repo
To get started, hit this link to create a new Brev instance. Then:
- Sign up for an account
- You'll be redirected to an instance creation page pre-configured with the defaults you need (we recommend sticking with the default GPU)
- At the bottom, add a payment method. We give you 30 minutes free; we just need to make sure people don't abuse our systems
- Hit create!
Open your new Dreambooth Brev instance
brev open dreambooth --wait
If you don't have the Brev CLI, you can install it here
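If you're on a Mac, the Homebrew formula below is what Brev's docs listed at the time of writing; double-check the install link above if it has changed:
# Install the Brev CLI via Homebrew (formula name may have changed since writing)
brew install brevdev/homebrew-brev/brev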
Run the training job
Login to HuggingFace to be able to download the model.
conda activate diffusers
huggingface-cli login
It'll prompt you to paste your Hugging Face token (make sure you've accepted the model's license agreement on Hugging Face first).
Then, upload your training data (around 10-20 images should be enough). We recommend using a varied set of images (e.g. different angles, different lighting, different backgrounds, etc.). You can use JPG, PNG or HEIC (we run a script that converts from HEIC to JPG behind the scenes). With VSCode, you can drag and drop them in.
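If you'd rather convert HEIC files to JPG yourself before uploading, one local option is ImageMagick; this is just a convenience on your own machine, not part of our setup, and it assumes your ImageMagick build includes HEIC support:
# Writes a .jpg copy next to each original; adjust the glob to match
# your filenames (.heic vs .HEIC).
mogrify -format jpg *.HEIC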
In launch.sh, replace "./data/dog" on lines 5 and 9 with the path to the folder containing your training images. Then run:
sh launch.sh
(fine-tuning the model on your images should take about 5 minutes)
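If you're curious what you're editing, launch.sh wraps the repo's training script. Here's a minimal sketch of that kind of command, assuming the standard flags of the ShivamShrirao train_dreambooth.py script; the values are illustrative, not the exact contents of the file:
# Simplified sketch of the command launch.sh wraps. Flags are real options
# of the ShivamShrirao train_dreambooth.py; values here are examples only.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir="./data/my-photos" \
  --instance_prompt="a photo of adamsmith" \
  --output_dir="fine-tuned-model-output" \
  --max_train_steps=800
The "/800" in the inference command below likely refers to the checkpoint saved at training step 800 inside the output directory.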
Generating samples
To run inference on your newly trained model, run:
conda activate diffusers
python inference.py "fine-tuned-model-output/800" "a photo of adamsmith wearing sunglasses"
Then output0.png, output1.png, output2.png and output3.png will be generated in the base directory.
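One thing to keep in mind: the output filenames are the same on every run, so a second prompt will likely overwrite your first batch. Stashing each batch first avoids that, e.g.:
# Move the current batch into its own folder before running another prompt.
mkdir -p samples/sunglasses
mv output*.png samples/sunglasses/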
Using Stable Diffusion v1-5 and regularisation data (optional)
In the repo, we provide another launch file called launch-prior-preservation.sh. This will fine-tune the Stable Diffusion v1-5 model and use regularisation images (to avoid overfitting to images of your subject). To use it, change line 9 of launch-prior-preservation.sh to point to your data directory, then run:
sh launch-prior-preservation.sh
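Here's a minimal sketch of how prior preservation is typically switched on, again assuming the standard train_dreambooth.py flags; the values are illustrative and your launch-prior-preservation.sh may differ:
# Sketch of a prior-preservation run. The class images are generated
# automatically by the script if the class directory has too few images.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./data/my-photos" \
  --instance_prompt="a photo of adamsmith person" \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --class_data_dir="./class-images" \
  --class_prompt="a photo of a person" \
  --num_class_images=200 \
  --output_dir="class-based-output" \
  --max_train_steps=800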
Then generate samples:
conda activate diffusers
python inference.py "class-based-output/804" "a photo of adamsmith wearing sunglasses"
JoePenna repo
The JoePenna repo is a bit more complex to use and takes longer to train, but it has been known to produce slightly better results.
To get started, hit this link to create a new Dreambooth Brev instance from our template. Then:
- Sign up for an account
- You'll be redirected to an instance creation page pre-configured with the defaults you need (we recommend sticking with the default GPU)
- At the bottom, add a payment method. We give you 30 minutes free; we just need to make sure people don't abuse our systems
- Hit create!
Open your new Dreambooth Brev instance
brev open dreambooth --wait
This will open VSCode after installing all dependencies and model files. If you don't have the Brev CLI, install it here.
Running Dreambooth
The first thing you'll want to do is upload your fine-tuning images. With VSCode, you can just drag and drop them in. Some things to consider:
- Use a varied set of images (e.g. different angles, different lighting, different backgrounds, etc.)
- Stick to either JPG, PNG or HEIC (we run a script that converts from HEIC to JPG behind the scenes)
- Upload an even number of pictures
In launch.sh, change the variable DATA_DIR on line 1 to the name of the folder with your training images. Then run:
sh launch.sh
This will take about 20 minutes to run.
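For reference, the line-1 edit just points the script at your image folder; the folder name here is an example:
# Line 1 of launch.sh; replace the value with your folder's name.
DATA_DIR="my-training-images"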
Generating samples
Once the fine-tuning is done, you'll see your fine-tuned model, output-model.ckpt, in the base directory. To generate samples, we'll use inference.sh. Change line 10 of inference.sh to the prompt you want to use, then run:
sh inference.sh
It'll generate 4 images in the outputs folder. Make sure your prompt always includes your dreambooth token followed by the class name (we use "person" as the class by default). For example, if your dreambooth token is "johnsmith" and you want to generate a picture of a person, your prompt should include "johnsmith person".
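As a concrete example, if your Dreambooth token is "johnsmith", the prompt you put on line 10 could look like the one below. The variable name here is hypothetical, so edit the existing line in place rather than copying this verbatim:
# Hypothetical example of the line-10 prompt: token ("johnsmith")
# followed by the class name ("person").
PROMPT="johnsmith person wearing sunglasses, studio lighting"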
Some more prompts to try
- <Dreambooth token> person in the style of kentaro miura, 4 k, 8 k, absolute detail of even the smallest details and particles, beautiful shadows, beautiful art, black and white drawing, high rendering of the details of the environment, faces and characters
- Uncanny Valley, <Dreambooth token> person, 3D render
- Film still from [film name], closeup of <Dreambooth token> person, cinematography by [director's name], [decade film was made], dramatic lighting, bokeh, grainy
- Film still of <Dreambooth token> person as a stopmotion character, Kubo and the Two Strings, ParaNorman, Aardman, Laika Studios, grainy
Here are some more we generated of our CEO:

Send us your creations on Discord!