A Guide to Cost-Effectively Fine-Tuning Mistral
In this notebook and tutorial, we will fine-tune the Mistral 7B model, which outperforms Llama 2 13B on all tested benchmarks!
If you'd like to fine-tune on your own dataset, you can read the tutorial here.
Watch an accompanying video walk-through (but for using your own data) here! If you'd like to see that notebook instead, click here.
Feel free to follow along directly from the notebook instead of here.
I did this for just one dollar ($1) on a single A10G 24GB from Brev.dev (instructions below).
This tutorial will use QLoRA, a fine-tuning method that combines quantization and LoRA. For more information about what those are and how they work, see this post.
In this notebook, we will load the large model in 4-bit using bitsandbytes (Mistral-7B-v0.1) and use LoRA to train it using the PEFT library from Hugging Face 🤗.
Note that if you ever have trouble importing something from Hugging Face, you may need to run huggingface-cli login in a shell. To open a shell in Jupyter Lab, click on 'Launcher' (or the '+' if it's not there) next to the notebook tab at the top of the screen. Under "Other", click "Terminal" and then run the command.
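If you'd rather stay inside the notebook than open a terminal, the huggingface_hub library also provides a login widget; this is an optional alternative to the CLI and not part of the original flow:

# Optional alternative to running `huggingface-cli login` in a terminal
from huggingface_hub import notebook_login
notebook_login()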
Help us make this tutorial better! Please provide feedback on the Discord channel or on X.
Before we begin: A note on OOM errors
If you get an error like this: OutOfMemoryError: CUDA out of memory, tweak your parameters to make the model less computationally intensive. I will guide you through that below, and if you have any additional questions you can reach out on the Discord channel or on X.
To re-try after you tweak your parameters, open a Terminal ('Launcher' or '+' in the nav bar above -> Other -> Terminal) and run the command nvidia-smi. Then find the process ID (PID) under Processes and run the command kill [PID]. You will need to re-start your notebook from the beginning. (There may be a better way to do this... if so please do let me know!)
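If you're wondering which parameters to tweak, the usual suspects are sequence length and batch size. The values below are purely illustrative (my own suggestions, not a prescription) and refer to settings you'll meet later in this notebook:

# Illustrative OOM-reducing tweaks (assumptions, not prescriptions):
# - in tokenize() / the tokenizer setup: max_length=256   # shorter sequences use far less memory
# - in TrainingArguments: per_device_train_batch_size=1, gradient_accumulation_steps=8
#   (keeps the effective batch size at 1 * 8 = 8)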
Let's begin!
I used a GPU and dev environment from brev.dev. Provision a pre-configured GPU in one click here (a single A10G or L4 should be enough for this dataset; anything with at least 24GB of GPU memory will work. You may need more GPUs and/or memory if your max_length is larger than 512). Once you've checked out your machine and landed in your instance page, select the specs you'd like (I used Python 3.10 and CUDA 12.0.1) and click the "Build" button to build your container. Give this a few minutes.
A few minutes after your instance shows as Running, click the 'Notebook' button on the top right of your screen once it illuminates (you may need to refresh the page). You will be taken to a Jupyter Lab environment, where you can upload this notebook.
Note: You can connect your cloud credits (AWS or GCP) by clicking "Org: " on the top right, and in the panel that slides over, click "Connect AWS" or "Connect GCP" under "Connect your cloud" and follow the instructions linked to attach your credentials.
# You only need to run this once per machine
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
!pip install -q -U datasets scipy ipywidgets
0. Accelerator
Set up the Accelerator. Strictly speaking, the FSDP plugin isn't necessary for single-GPU QLoRA training, but it doesn't hurt, and it's helpful to have the code here for future reference. You can always comment out the accelerator if you want to try without it.
from accelerate import FullyShardedDataParallelPlugin, Accelerator
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=False),
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
1. Load Dataset
Let's load a meaning representation dataset and fine-tune Mistral on it. This is a great fine-tuning dataset because it teaches the model a unique form of desired output on which the base model performs poorly out of the box, so it's easy and inexpensive to gauge whether the fine-tuned model has learned well. (Sources: here and here) (In contrast, if you fine-tune on a fact-based dataset, the base model may already do quite well, and gauging learning is less obvious / may be more computationally expensive.)
from datasets import load_dataset
train_dataset = load_dataset('gem/viggo', split='train')
eval_dataset = load_dataset('gem/viggo', split='validation')
test_dataset = load_dataset('gem/viggo', split='test')
print(train_dataset)
print(eval_dataset)
print(test_dataset)
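If you'd like to see what a raw example looks like before we build prompts, each record has a target sentence and its meaning representation (the same field names used in the prompt function below):

# Peek at one raw example; 'target' is the sentence, 'meaning_representation' is the desired output
print(train_dataset[0]['target'])
print(train_dataset[0]['meaning_representation'])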
2. Load Base Model
Let's now load Mistral - mistralai/Mistral-7B-v0.1 - using 4-bit quantization!
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
base_model_id = "mistralai/Mistral-7B-v0.1"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
3. Tokenization
Set up the tokenizer. Add padding on the left as it makes training use less memory.
tokenizer = AutoTokenizer.from_pretrained(
base_model_id,
model_max_length=512,
padding_side="left",
add_eos_token=True)
tokenizer.pad_token = tokenizer.eos_token
Set up the tokenize function to make labels and input_ids the same. This is basically what self-supervised fine-tuning is:
def tokenize(prompt):
    result = tokenizer(
        prompt,
        truncation=True,
        max_length=512,
        padding="max_length",
    )
    result["labels"] = result["input_ids"].copy()
    return result
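As a quick sanity check of my own (not in the original flow), you can run tokenize on a toy string and confirm that the labels mirror input_ids and everything is padded out to 512 tokens:

# Sanity check: labels are a copy of input_ids, and the sequence is padded to max_length
sample = tokenize("hello world")
print(len(sample["input_ids"]))                 # 512
print(sample["labels"] == sample["input_ids"])  # True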
And convert each sample into a prompt, using a format I found in this notebook.
def generate_and_tokenize_prompt(data_point):
    full_prompt = f"""Given a target sentence construct the underlying meaning representation of the input sentence as a single function with attributes and attribute values.
This function should describe the target string accurately and the function must be one of the following ['inform', 'request', 'give_opinion', 'confirm', 'verify_attribute', 'suggest', 'request_explanation', 'recommend', 'request_attribute'].
The attributes must be one of the following: ['name', 'exp_release_date', 'release_year', 'developer', 'esrb', 'rating', 'genres', 'player_perspective', 'has_multiplayer', 'platforms', 'available_on_steam', 'has_linux_release', 'has_mac_release', 'specifier']
### Target sentence:
{data_point["target"]}
### Meaning representation:
{data_point["meaning_representation"]}
"""
    return tokenize(full_prompt)
Reformat the prompt and tokenize each sample:
tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt)
tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt)
Check that input_ids is padded on the left with the eos_token (2), that an eos_token (2) is added to the end, and that the prompt starts with a bos_token (1).
print(tokenized_train_dataset[4]['input_ids'])
Check that a sample has the max length, i.e. 512.
print(len(tokenized_train_dataset[4]['input_ids']))
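As an optional extra check (my own addition), you can count how many training prompts fill the entire 512-token window, i.e. received no left padding and may have been truncated:

# With left padding, an example whose first token is not the pad token needed no padding,
# meaning its prompt already fills (or exceeded) the 512-token window.
full_length = sum(1 for ex in tokenized_train_dataset if ex['input_ids'][0] != tokenizer.pad_token_id)
print(f"{full_length} of {len(tokenized_train_dataset)} examples use the full 512-token window")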
How does the base model do?
Let's grab a test input (target) and desired output (meaning_representation) pair to see how the base model does on it.
print("Target Sentence: " + test_dataset[1]['target'])
print("Meaning Representation: " + test_dataset[1]['meaning_representation'] + "\n")
eval_prompt = """Given a target sentence construct the underlying meaning representation of the input sentence as a single function with attributes and attribute values.
This function should describe the target string accurately and the function must be one of the following ['inform', 'request', 'give_opinion', 'confirm', 'verify_attribute', 'suggest', 'request_explanation', 'recommend', 'request_attribute'].
The attributes must be one of the following: ['name', 'exp_release_date', 'release_year', 'developer', 'esrb', 'rating', 'genres', 'player_perspective', 'has_multiplayer', 'platforms', 'available_on_steam', 'has_linux_release', 'has_mac_release', 'specifier']
### Target sentence:
Earlier, you stated that you didn't have strong feelings about PlayStation's Little Big Adventure. Is your opinion true for all games which don't have multiplayer?
### Meaning representation:
"""
# Re-init the tokenizer so it doesn't add padding or eos token
eval_tokenizer = AutoTokenizer.from_pretrained(
base_model_id,
add_bos_token=True,
)
model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda")
model.eval()
with torch.no_grad():
    print(eval_tokenizer.decode(model.generate(**model_input, max_new_tokens=256)[0], skip_special_tokens=True))
We can see it doesn't do very well out of the box.
4. Set Up LoRA
Now, to start our fine-tuning, we have to apply some preprocessing to the model to prepare it for training. For that, use the prepare_model_for_kbit_training method from PEFT.
from peft import prepare_model_for_kbit_training
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
def print_trainable_parameters(model):
    """
    Prints the number of trainable parameters in the model.
    """
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
    )
Let's print the model to examine its layers, as we will apply QLoRA to all the linear layers of the model. Those layers are q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, and lm_head.
print(model)
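If you'd rather not scan the printed module tree by hand, here's an optional sketch (my own, assuming the quantized linear layers are bitsandbytes Linear4bit modules and lm_head stays a regular nn.Linear) that collects those layer names programmatically:

import torch
import bitsandbytes as bnb

# Collect the deduplicated names of every linear-style module in the model
linear_layer_names = set()
for name, module in model.named_modules():
    if isinstance(module, (torch.nn.Linear, bnb.nn.Linear4bit)):
        linear_layer_names.add(name.split(".")[-1])
print(sorted(linear_layer_names))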
Here we define the LoRA config.
r is the rank of the low-rank matrices used in the adapters, and thus controls the number of parameters trained. A higher rank allows for more expressivity, but there is a compute tradeoff.
alpha is the scaling factor for the learned weights. The weight matrix is scaled by alpha/r, so a higher value for alpha assigns more weight to the LoRA activations.
The values used in the QLoRA paper were r=64 and lora_alpha=16, and these are said to generalize well, but we will use r=8 and lora_alpha=16 so that we put more emphasis on the new fine-tuned data (a scaling of alpha/r = 16/8 = 2, versus 16/64 = 0.25 in the paper's setup) while also reducing computational complexity.
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=8,
lora_alpha=16,
target_modules=[
"q_proj",
"k_proj",
"v_proj",
"o_proj",
"gate_proj",
"up_proj",
"down_proj",
"lm_head",
],
bias="none",
lora_dropout=0.05, # Conventional
task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
print_trainable_parameters(model)
# Apply the accelerator. You can comment this out to remove the accelerator.
model = accelerator.prepare_model(model)
See how the model looks different now, with the LoRA adapters added:
print(model)
Let's use Weights & Biases to track our training metrics. You'll need to supply an API key when prompted. Feel free to skip this if you'd like, and just comment out the wandb parameters in the Trainer definition below.
!pip install -q wandb -U
import wandb, os
wandb.login()
wandb_project = "viggo-finetune"
if len(wandb_project) > 0:
    os.environ["WANDB_PROJECT"] = wandb_project
5. Run Training!
I originally used 500 steps, but found the model had not converged by then, so I upped max_steps to 1000 below.
A note on training. You can set max_steps to be high initially, and examine at what step your model's performance starts to degrade. That is where you'll find the sweet spot for how many steps to perform. For example, say you start with 1000 steps, and find that at around 500 steps the model starts overfitting - the validation loss goes up (bad) while the training loss goes down significantly, meaning the model is learning the training set really well but is unable to generalize to new datapoints. Therefore, 500 steps would be your sweet spot, so you would use the checkpoint-500 model repo in your output dir (mistral-viggo-finetune) as your final model in step 6 below.
You can interrupt the process via Kernel -> Interrupt Kernel in the top nav bar once you realize you don't need to train any longer.
if torch.cuda.device_count() > 1:  # If more than 1 GPU
    model.is_parallelizable = True
    model.model_parallel = True
import transformers
from datetime import datetime
project = "viggo-finetune"
base_model_name = "mistral"
run_name = base_model_name + "-" + project
output_dir = "./" + run_name
tokenizer.pad_token = tokenizer.eos_token
trainer = transformers.Trainer(
model=model,
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_val_dataset,
args=transformers.TrainingArguments(
output_dir=output_dir,
warmup_steps=5,
per_device_train_batch_size=2,
gradient_accumulation_steps=4,
max_steps=1000,
learning_rate=2.5e-5, # Want about 10x smaller than the Mistral learning rate
logging_steps=50,
bf16=True,
optim="paged_adamw_8bit",
logging_dir="./logs", # Directory for storing logs
save_strategy="steps", # Save the model checkpoint every logging step
save_steps=50, # Save checkpoints every 50 steps
evaluation_strategy="steps", # Evaluate the model every logging step
eval_steps=50, # Evaluate and save checkpoints every 50 steps
do_eval=True, # Perform evaluation at the end of training
report_to="wandb", # Comment this out if you don't want to use weights & baises
run_name=f"{run_name}-{datetime.now().strftime('%Y-%m-%d-%H-%M')}" # Name of the W&B run (optional)
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
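One optional convenience, in case training gets interrupted: the Trainer can resume from the latest checkpoint in your output dir rather than starting over.

# Optional: run this in place of the plain trainer.train() above to resume
# from the most recent checkpoint saved in output_dir.
# trainer.train(resume_from_checkpoint=True)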
6. Drum Roll... Try the Trained Model!
It's a good idea to kill the current process so that you don't run out of memory loading the base model again on top of the model we just trained. Go to Kernel > Restart Kernel, or kill the process via the Terminal (nvidia-smi > kill [PID]).
By default, the PEFT library will only save the QLoRA adapters, so we need to first load the base Mistral model from the Hugging Face Hub:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
base_model_id = "mistralai/Mistral-7B-v0.1"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id, # Mistral, same as before
quantization_config=bnb_config, # Same quantization config as before
device_map="auto",
trust_remote_code=True,
use_auth_token=True
)
eval_tokenizer = AutoTokenizer.from_pretrained(
base_model_id,
add_bos_token=True,
trust_remote_code=True,
)
Now load the QLoRA adapter from the appropriate checkpoint directory, i.e. the best performing model checkpoint:
from peft import PeftModel
ft_model = PeftModel.from_pretrained(base_model, "mistral-viggo-finetune/checkpoint-1000")
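If your validation loss bottomed out earlier (say around step 500, as discussed in the training note above), just point at that checkpoint directory instead; the path below is illustrative:

# Illustrative: load an earlier checkpoint if that's where validation loss was lowest
# ft_model = PeftModel.from_pretrained(base_model, "mistral-viggo-finetune/checkpoint-500")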
Now run your inference! Let's try the same eval_prompt (and thus model_input) as above, and see if the new fine-tuned model performs better.
eval_prompt = """Given a target sentence construct the underlying meaning representation of the input sentence as a single function with attributes and attribute values.
This function should describe the target string accurately and the function must be one of the following ['inform', 'request', 'give_opinion', 'confirm', 'verify_attribute', 'suggest', 'request_explanation', 'recommend', 'request_attribute'].
The attributes must be one of the following: ['name', 'exp_release_date', 'release_year', 'developer', 'esrb', 'rating', 'genres', 'player_perspective', 'has_multiplayer', 'platforms', 'available_on_steam', 'has_linux_release', 'has_mac_release', 'specifier']
### Target sentence:
Earlier, you stated that you didn't have strong feelings about PlayStation's Little Big Adventure. Is your opinion true for all games which don't have multiplayer?
### Meaning representation:
"""
model_input = eval_tokenizer(eval_prompt, return_tensors="pt").to("cuda")
ft_model.eval()
with torch.no_grad():
    print(eval_tokenizer.decode(ft_model.generate(**model_input, max_new_tokens=100)[0], skip_special_tokens=True))
Sweet... it worked! The fine-tuned model now understands the meaning representation!
I hope you enjoyed this tutorial on fine-tuning Mistral. If you have any questions, feel free to reach out to me on X or on the Discord channel.
🤙 🤙 🤙