Train models like Stable Diffusion and Bloom (175B) using your own computer
Michael Louis
Co-Founder & CEO

In our last few posts we have discussed the hype around large language and generative AI models and how you can decrease both inference and training time. As our users have begun working with these models, they naturally want to fine-tune and deploy models containing hundreds of billions of parameters in order to boost performance on their specific use cases.

Typically this is a very demanding task: it requires large amounts of compute and the storage of 40GB checkpoints, which is infeasible on normal consumer hardware. Beyond the compute and storage required, fine-tuning models of this size takes a long time to run and is inherently very expensive. Until now.

Enter the PEFT library from Hugging Face, a library that supports Parameter-Efficient Fine-Tuning methods such as LoRA and Prefix Tuning, enabling the efficient adaptation of pre-trained language models to various downstream applications without fine-tuning all of the model parameters. These techniques achieve performance comparable to full fine-tuning.

PEFT Methods

An important paradigm in natural language processing is large-scale pre-training on general-domain data followed by adaptation to particular tasks or domains. With full fine-tuning, we update the entire set of model parameters for the target task. While this obtains good performance, it is memory-hungry during training because gradients and optimizer states must be stored for every parameter. Moreover, keeping a separate copy of the model parameters for each task is inconvenient at inference time, since pre-trained models are usually large.
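To make the memory cost concrete, here is a rough back-of-the-envelope sketch. It assumes standard mixed-precision Adam training, where fp16 weights and gradients plus fp32 optimizer state come to roughly 16 bytes per parameter (activations are extra), so treat the numbers as orders of magnitude rather than exact figures.

# Rough memory estimate for full fine-tuning with mixed-precision Adam (illustrative only):
# ~2 bytes fp16 weights + ~2 bytes fp16 gradients + ~12 bytes fp32 optimizer state per parameter.
BYTES_PER_PARAM = 16

for name, n_params in [("1B-parameter model", 1e9), ("175B-parameter model", 175e9)]:
    gigabytes = n_params * BYTES_PER_PARAM / 1024**3
    print(f"{name}: ~{gigabytes:,.0f} GB of weights, gradients and optimizer state")

# 1B-parameter model: ~15 GB of weights, gradients and optimizer state
# 175B-parameter model: ~2,608 GB of weights, gradients and optimizer state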

Currently the PEFT library supports four methods: LoRA, Prefix Tuning, P-Tuning, and Prompt Tuning. While these methods differ in the details, they revolve around the same paradigm: freeze the pre-trained model weights and train only a small number of extra parameters. Earlier efficient-adaptation techniques often introduced inference latency by extending model depth or reduced the model's usable sequence length, and therefore failed to match the full fine-tuning baselines, posing a trade-off between efficiency and model quality.

LoRA, for example, freezes the pretrained model weights and injects trainable rank-decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on par with or better than full fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency.
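To see how few parameters actually get trained, here is a minimal sketch using the PEFT library on a small model. GPT-2 and the rank/alpha values below are just illustrative choices, and the printed counts will vary with the model and configuration you pick.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a small causal language model and freeze it behind LoRA adapters:
# only the injected low-rank matrices will receive gradients.
base_model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the decomposition matrices
    lora_alpha=32,   # scaling applied to the LoRA update
    lora_dropout=0.1,
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
# Prints something like:
# trainable params: 294,912 || all params: 124,734,720 || trainable%: 0.24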

Tutorial

There is a nice example of how to use PEFT to train a Dreambooth model in the PEFT repository.

First, clone the repository with git clone git@github.com:huggingface/peft.git and go to the examples/lora_dreambooth directory.

Install the required dependencies with pip install -r requirements.txt. The peft package itself is missing from the requirements file, so also run pip install peft.

Once the dependencies are installed, set the following environment variables and run the code below:

  • INSTANCE_DIR: The path to the images you would like to train your model on.
  • CLASS_DIR: Path to your class images. We recommend having at least 10 images in this folder.
  • OUTPUT_DIR: The directory where you would like to save the weights of the trained model.

export MODEL_NAME="CompVis/stable-diffusion-v1-4" #"stabilityai/stable-diffusion-2-1"
export INSTANCE_DIR="path-to-instance-images"
export CLASS_DIR="path-to-class-images"
export OUTPUT_DIR="path-to-save-model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME  \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --train_text_encoder \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a profile headshot used for business" \
  --class_prompt="a photo of person" \
  --resolution=512 \
  --train_batch_size=1 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --use_lora \
  --lora_r 16 \
  --lora_alpha 27 \
  --lora_text_encoder_r 16 \
  --lora_text_encoder_alpha 17 \
  --learning_rate=1e-4 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --max_train_steps=800

I ran the above on an AWS g4dn.xlarge instance, which has an NVIDIA T4 GPU. I supplied 5 images of myself and trained the model to create business headshot photos (think LinkedIn profile photos). Running the script takes about 35 minutes, but if you would like it to go faster, just reduce the number of training steps (--max_train_steps).

Running the above command trains the model on the images you provided and saves the trained weights, which you can then use for inference. Use the code below to run inference with your own model:


import json
import os

import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, LoraModel, set_peft_model_state_dict

def load_and_set_lora_ckpt(pipe, ckpt_dir, instance_prompt, device, dtype):
    # The training script saves the LoRA config and weights into the output
    # directory, using the training instance prompt as the file-name prefix.
    config_path = os.path.join(ckpt_dir, f"{instance_prompt}_lora_config.json")
    checkpoint_path = os.path.join(ckpt_dir, f"{instance_prompt}_lora.pt")

    with open(config_path, "r") as f:
        lora_config = json.load(f)
    print(lora_config)

    # Split the saved state dict into UNet weights and text-encoder weights.
    lora_checkpoint_sd = torch.load(checkpoint_path)
    unet_lora_ds = {k: v for k, v in lora_checkpoint_sd.items() if "text_encoder_" not in k}
    text_encoder_lora_ds = {
        k.replace("text_encoder_", ""): v for k, v in lora_checkpoint_sd.items() if "text_encoder_" in k
    }

    # Wrap the pipeline's UNet with LoRA layers and load the trained weights into them.
    unet_config = LoraConfig(**lora_config["peft_config"])
    pipe.unet = LoraModel(unet_config, pipe.unet)
    set_peft_model_state_dict(pipe.unet, unet_lora_ds)

    # Do the same for the text encoder if it was trained with --train_text_encoder.
    if "text_encoder_peft_config" in lora_config:
        text_encoder_config = LoraConfig(**lora_config["text_encoder_peft_config"])
        pipe.text_encoder = LoraModel(text_encoder_config, pipe.text_encoder)
        set_peft_model_state_dict(pipe.text_encoder, text_encoder_lora_ds)

    # Cast the adapted modules to half precision to match the rest of the pipeline.
    if dtype in (torch.float16, torch.bfloat16):
        pipe.unet.half()
        pipe.text_encoder.half()

    pipe.to(device)
    return pipe

# Load the base Stable Diffusion pipeline, then attach the LoRA weights we trained.
# Note: the third argument must match the --instance_prompt used during training,
# because the checkpoint files are named after it.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe = load_and_set_lora_ckpt(
    pipe,
    "/home/ec2-user/peft/examples/lora_dreambooth/weights",
    "a profile headshot used for business",
    "cuda",
    torch.float16,
)

prompt = "Business headshot, smiling in suit"
negative_prompt = "low quality, blurry, unfinished"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7, negative_prompt=negative_prompt).images[0]

import base64
from io import BytesIO

buffered = BytesIO()
image.save(buffered, format="JPEG")
img_str = base64.b64encode(buffered.getvalue())
print(img_str)

In the snippet above you will see that we load the checkpoints from training and wrap parts of the base model's architecture: inside load_and_set_lora_ckpt we build a LoraConfig from the saved configuration and apply it to the pipeline's UNet (and, if it was trained, the text encoder) before loading the LoRA weights.
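The end of the snippet base64-encodes the result, which is handy if you plan to return the image from an API. For local experimentation you may prefer to generate a few candidates and write them straight to disk; here is a small sketch (the filenames and the num_images_per_prompt value are just examples):

# Generate a small batch of candidate headshots and save them for inspection.
images = pipe(
    prompt,
    negative_prompt=negative_prompt,
    num_inference_steps=50,
    guidance_scale=7,
    num_images_per_prompt=4,
).images

for i, img in enumerate(images):
    img.save(f"lora_headshot_{i}.jpg")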

I then ran the same tutorial we published a few weeks ago, training a Dreambooth model normally (full fine-tuning) on an A10 GPU with the same parameters as above; below are the images for comparison:

The images above aren’t perfect: I only supplied 5 images, and most of them weren’t the best quality. However, I am very impressed with the LoRA results, since they resemble me much more closely.

To see what the final output of this tutorial looks like, you can also check it out on a Hugging Face Space here.

As you can see, the difference between the images is negligible, so the only trade-off you have to make is speed vs. cost. PEFT methods are making fine-tuning of large language and generative models more accessible to users who might not have the budget or access to performant GPUs.

We are working on bringing this functionality to the Cerebrium platform, so if you would like to fine-tune large models such as BLOOM or GPT-NeoX 20B, please reach out to us. Additionally, please join our communities on Slack and Discord to stay up to date with our latest news.
