From 02f94c27bbdbd3dc481e527bf891415b7957d4b7 Mon Sep 17 00:00:00 2001
From: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
Date: Fri, 14 Mar 2025 15:01:51 +0100
Subject: [PATCH 1/4] Add files via upload
---
.../huggingface_text_finetune_qlora.ipynb | 643 ++++++++++++++++++
1 file changed, 643 insertions(+)
create mode 100644 site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
diff --git a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
new file mode 100644
index 000000000..accfd4579
--- /dev/null
+++ b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
@@ -0,0 +1,643 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "926bada6",
+ "metadata": {},
+ "source": [
+ "##### Copyright 2025 Google LLC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a110dfce",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f9673bd6",
+ "metadata": {},
+ "source": [
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e624ec07",
+ "metadata": {},
+ "source": [
+ "# Fine-Tune Gemma using Hugging Face Transformers and QloRA\n",
+ "\n",
+ "This guide walks you through how to fine-tune Gemma on a custom text-to-sql dataset using Hugging Face [Transformers](https://huggingface.co/docs/transformers/index) and [TRL](https://huggingface.co/docs/trl/index). You will learn:\n",
+ "\n",
+ "- What is Quantized Low-Rank Adaptation (QLoRA)\n",
+ "- Setup development environment\n",
+ "- Create and prepare the fine-tuning dataset\n",
+ "- Fine-tune Gemma using TRL and the SFTTrainer\n",
+ "- Test Model Inference and generate SQL queries\n",
+ "\n",
+ "Note: This guide was created to run on a Google colaboratory account using a NVIDIA T4 GPU with 16GB and Gemma 1B, but can be adapted to run on bigger GPUs and bigger models.\n",
+ "\n",
+ "## What is Quantized Low-Rank Adaptation (QLoRA)\n",
+ "\n",
+ "This guide demonstrates the use of [Quantized Low-Rank Adaptation (QLoRA)](https://arxiv.org/abs/2305.14314), which emerged as a popular method to efficiently fine-tune LLMs as it reduces computational resource requirements while maintaining high performance. In QloRA, the pretrained model is quantized to 4-bit and the weights are frozen. Then trainable adapter layers (LoRA) are attached and only the adapter layers are trained. Afterwards, the adapter weights can be merged with the base model or kept as a separate adapter.\n",
+ "\n",
+ "## Setup development environment\n",
+ "\n",
+ "The first step is to install Hugging Face Libraries, including TRL, and datasets to fine-tune open model, including different RLHF and alignment techniques."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ba51aa79",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Install Pytorch & other libraries\n",
+ "%pip install \"torch>=2.4.0\" tensorboard\n",
+ "\n",
+ "# Install Gemma release branch from Hugging Face\n",
+ "%pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3\n",
+ "\n",
+ "# Install Hugging Face libraries\n",
+ "%pip install --upgrade \\\n",
+ " \"datasets==3.3.2\" \\\n",
+ " \"accelerate==1.4.0\" \\\n",
+ " \"evaluate==0.4.3\" \\\n",
+ " \"bitsandbytes==0.45.3\" \\\n",
+ " \"trl==0.15.2\" \\\n",
+ " \"peft==0.14.0\" \\\n",
+ " protobuf \\\n",
+ " sentencepiece\n",
+ "\n",
+ "# COMMENT IN: if you are running on a GPU that supports BF16 data type and flash attn, such as NVIDIA L4 or NVIDIA A100\n",
+ "#% pip install flash-attn"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7ef3d54b",
+ "metadata": {},
+ "source": [
+ "_Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash attention. Flash Attention is a method that significantly speeds computations up and reduces memory usage from quadratic to linear in sequence length, leading to acelerating training up to 3x. Learn more at [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/main)._\n",
+ "\n",
+ "Before you can start training, you have to make sure that you accepted the terms of use for Gemma. You can accept the license on [Hugging Face](http://huggingface.co/google/gemma-3-1b-pt) by clicking on the Agree and access repository button on the model page at: http://huggingface.co/google/gemma-3-1b-pt\n",
+ "\n",
+ "After you have accepted the license, you need a valid Hugging Face Token to access the model. If you are running inside a Google Colab, you can securely use your Hugging Face Token using the Colab secrets otherwise you can set the token as directly in the `login` method. Make sure your token has write access too, as you push your model to the Hub during training."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b6d79c93",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from google.colab import userdata\n",
+ "from huggingface_hub import login\n",
+ "\n",
+ "# Login into Hugging Face Hub\n",
+ "hf_token = userdata.get('HF_TOKEN') # If you are running inside a Google Colab \n",
+ "login(hf_token)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "42c60525",
+ "metadata": {},
+ "source": [
+ "## Create and prepare the fine-tuning dataset\n",
+ "\n",
+ "When fine-tuning LLMs, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.\n",
+ "\n",
+ "As an example, this guide focuses on the following use case:\n",
+ "\n",
+ "- Fine-tune a natural language to SQL model for seamless integration into a data analysis tool. The objective is to significantly reduce the time and expertise required for SQL query generation, enabling even non-technical users to extract meaningful insights from data.\n",
+ "\n",
+ "Text-to-SQL can be a good use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.\n",
+ "\n",
+ "Once you have determined that fine-tuning is the right solution, you need a dataset to fine-tune. The dataset should be a diverse set of demonstrations of the task(s) you want to solve. There are several ways to create such a dataset, including:\n",
+ "\n",
+ "- Using existing open-source datasets, such as [Spider](https://huggingface.co/datasets/spider)\n",
+ "- Using synthetic datasets created by LLMs, such as [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)\n",
+ "- Using datasets created by humans, such as [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k).\n",
+ "- Using a combination of the methods, such as [Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca)\n",
+ "\n",
+ "Each of the methods has its own advantages and disadvantages and depends on the budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using domain experts might be the most accurate but can be time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in [Orca: Progressive Learning from Complex Explanation Traces of GPT-4.](https://arxiv.org/abs/2306.02707)\n",
+ "\n",
+ "This guide uses an already existing dataset ([philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql)), a high quality synthetic Text-to-SQL dataset including natural language instructions, schema definitions, reasoning and the corresponding SQL query.\n",
+ "\n",
+ "[Hugging Face TRL](https://huggingface.co/docs/trl/en/index) supports automatic templating of conversation dataset formats. This means you only need to convert your dataset into the right json objects, and `trl` takes care of templating and putting it into the right format.\n",
+ "\n",
+ "```\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c4ecf6db",
+ "metadata": {},
+ "source": [
+ "The [philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql) contains over 100k samples. To keep the guide small, it is downsampled to only use 10,000 samples.\n",
+ "\n",
+ "You can now use the Hugging Face Datasets library to load the dataset and create a prompt template to combine the natural language instruction, schema definition and add a system message for your assistant."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "40c3a2cf",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from datasets import load_dataset\n",
+ "\n",
+ "# System message for the assistant \n",
+ "system_message = \"\"\"You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\"\"\"\n",
+ "\n",
+ "# User prompt that combines the user query and the schema\n",
+ "user_prompt = \"\"\"Given the and the , generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.\n",
+ "\n",
+ "\n",
+ "{context}\n",
+ "\n",
+ "\n",
+ "\n",
+ "{question}\n",
+ "\n",
+ "\"\"\"\n",
+ "def create_conversation(sample):\n",
+ " return {\n",
+ " \"messages\": [\n",
+ " # {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_prompt.format(question=sample[\"sql_prompt\"], context=sample[\"sql_context\"])},\n",
+ " {\"role\": \"assistant\", \"content\": sample[\"sql\"]}\n",
+ " ]\n",
+ " } \n",
+ "\n",
+ "# Load dataset from the hub\n",
+ "dataset = load_dataset(\"philschmid/gretel-synthetic-text-to-sql\", split=\"train\")\n",
+ "dataset = dataset.shuffle().select(range(12500))\n",
+ "\n",
+ "# Convert dataset to OAI messages\n",
+ "dataset = dataset.map(create_conversation, remove_columns=dataset.features,batched=False)\n",
+ "# split dataset into 10,000 training samples and 2,500 test samples\n",
+ "dataset = dataset.train_test_split(test_size=2500/12500)\n",
+ "\n",
+ "# Print formatted user prompt\n",
+ "print(dataset[\"train\"][345][\"messages\"][1][\"content\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c0eb2e06",
+ "metadata": {},
+ "source": [
+ "## Fine-tune Gemma using TRL and the SFTTrainer\n",
+ "\n",
+ "You are now ready to fine-tune your model. Hugging Face TRL [SFTTrainer](https://huggingface.co/docs/trl/sft_trainer) makes it straightforward to supervise fine-tune open LLMs. The `SFTTrainer` is a subclass of the `Trainer` from the `transformers` library and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality of life features, including:\n",
+ "\n",
+ "* Dataset formatting, including conversational and instruction formats\n",
+ "* Training on completions only, ignoring prompts\n",
+ "* Packing datasets for more efficient training\n",
+ "* Parameter-efficient fine-tuning (PEFT) support including QloRA\n",
+ "* Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)\n",
+ "\n",
+ "The following code loads the Gemma model and tokenizer from Hugging Face and initializes the quantization configuration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "18069ed2",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForImageTextToText, BitsAndBytesConfig\n",
+ "\n",
+ "# Hugging Face model id\n",
+ "model_id = \"google/gemma-3-1b-pt\" # or `google/gemma-3-4b-pt`, `google/gemma-3-12b-pt`, `google/gemma-3-27b-pt`\n",
+ "\n",
+ "# Select model class based on id\n",
+ "if model_id == \"google/gemma-3-1b-pt\":\n",
+ " model_class = AutoModelForCausalLM\n",
+ "else:\n",
+ " model_class = AutoModelForImageTextToText\n",
+ "\n",
+ "# Check if GPU benefits from bfloat16\n",
+ "if torch.cuda.get_device_capability()[0] >= 8:\n",
+ " torch_dtype = torch.bfloat16\n",
+ "else:\n",
+ " torch_dtype = torch.float16\n",
+ "\n",
+ "# Define model init arguments\n",
+ "model_kwargs = dict(\n",
+ " attn_implementation=\"eager\", # Use \"flash_attention_2\" when running on Ampere or newer GPU\n",
+ " torch_dtype=torch_dtype, # What torch dtype to use, defaults to auto\n",
+ " device_map=\"auto\", # Let torch decide how to load the model\n",
+ ")\n",
+ "\n",
+ "# BitsAndBytesConfig: Enables 4-bit quantization to reduce model size/memory usage\n",
+ "model_kwargs[\"quantization_config\"] = BitsAndBytesConfig(\n",
+ " load_in_4bit=True,\n",
+ " bnb_4bit_use_double_quant=True,\n",
+ " bnb_4bit_quant_type='nf4',\n",
+ " bnb_4bit_compute_dtype=model_kwargs['torch_dtype'],\n",
+ " bnb_4bit_quant_storage=model_kwargs['torch_dtype'],\n",
+ ")\n",
+ "\n",
+ "# Load model and tokenizer\n",
+ "model = model_class.from_pretrained(model_id, **model_kwargs)\n",
+ "tokenizer = AutoTokenizer.from_pretrained(\"google/gemma-3-1b-it\") # Load the Instruction Tokenizer to use the official Gemma template"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "37ec1d1b",
+ "metadata": {},
+ "source": [
+ "The `SFTTrainer` supports a native integration with `peft`, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a `LoraConfig` and provide it to the trainer."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ed00e846",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from peft import LoraConfig\n",
+ "\n",
+ "peft_config = LoraConfig(\n",
+ " lora_alpha=16,\n",
+ " lora_dropout=0.05,\n",
+ " r=16,\n",
+ " bias=\"none\",\n",
+ " target_modules=\"all-linear\",\n",
+ " task_type=\"CAUSAL_LM\",\n",
+ " modules_to_save=[\"lm_head\", \"embed_tokens\"] # make sure to save the lm_head and embed_tokens as you train the special tokens\n",
+ ")"
+ ]
+ },
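+ {
+ "cell_type": "markdown",
+ "id": "lora-intuition-note",
+ "metadata": {},
+ "source": [
+ "To build intuition for what this configuration trains, the following cell is a minimal, self-contained sketch of the LoRA idea rather than part of the fine-tuning pipeline: the frozen weight matrix `W` is left untouched while only the small low-rank factors `A` and `B` are updated, scaled by `lora_alpha / r`. All dimensions and initializations below are illustrative assumptions, not the real Gemma weights."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "lora-intuition-sketch",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "\n",
+ "# Illustrative LoRA sketch with toy dimensions (assumed values, not the real Gemma weights)\n",
+ "d, r, lora_alpha = 1024, 16, 16 # hidden size, adapter rank, scaling factor\n",
+ "W = torch.randn(d, d) # frozen pretrained weight (in real QLoRA this matrix is 4-bit quantized)\n",
+ "A = torch.randn(r, d) * 0.01 # trainable low-rank factor\n",
+ "B = torch.zeros(d, r) # trainable, zero-initialized so the adapter starts as a no-op\n",
+ "A.requires_grad_(True)\n",
+ "B.requires_grad_(True)\n",
+ "\n",
+ "x = torch.randn(2, d) # dummy batch of activations\n",
+ "h = x @ W.T + (lora_alpha / r) * ((x @ A.T) @ B.T) # frozen path + adapter path\n",
+ "\n",
+ "print(h.shape) # torch.Size([2, 1024])\n",
+ "print(A.numel() + B.numel(), \"trainable adapter parameters vs\", W.numel(), \"frozen\")"
+ ]
+ },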
+ {
+ "cell_type": "markdown",
+ "id": "bbd9fc1b",
+ "metadata": {},
+ "source": [
+ "Before you can start your training, you need to define the hyperparameter you want to use in a `SFTConfig` instance."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "989be3c1",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from trl import SFTConfig\n",
+ "\n",
+ "args = SFTConfig(\n",
+ " output_dir=\"gemma-text-to-sql\", # directory to save and repository id\n",
+ " max_seq_length=512, # max sequence length for model and packing of the dataset\n",
+ " packing=True, # Groups multiple samples in the dataset into a single sequence\n",
+ " num_train_epochs=3, # number of training epochs\n",
+ " per_device_train_batch_size=1, # batch size per device during training\n",
+ " gradient_accumulation_steps=4, # number of steps before performing a backward/update pass\n",
+ " gradient_checkpointing=True, # use gradient checkpointing to save memory\n",
+ " optim=\"adamw_torch_fused\", # use fused adamw optimizer\n",
+ " logging_steps=10, # log every 10 steps\n",
+ " save_strategy=\"epoch\", # save checkpoint every epoch\n",
+ " learning_rate=2e-4, # learning rate, based on QLoRA paper\n",
+ " fp16=True if torch_dtype == torch.float16 else False, # use float16 precision\n",
+ " bf16=True if torch_dtype == torch.bfloat16 else False, # use bfloat16 precision\n",
+ " max_grad_norm=0.3, # max gradient norm based on QLoRA paper\n",
+ " warmup_ratio=0.03, # warmup ratio based on QLoRA paper\n",
+ " lr_scheduler_type=\"constant\", # use constant learning rate scheduler\n",
+ " push_to_hub=True, # push model to hub\n",
+ " report_to=\"tensorboard\", # report metrics to tensorboard\n",
+ " dataset_kwargs={\n",
+ " \"add_special_tokens\": False, # We template with special tokens\n",
+ " \"append_concat_token\": True, # Add EOS token as separator token between examples\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd88e798",
+ "metadata": {},
+ "source": [
+ "You now have every building block you need to create your `SFTTrainer` to start the training of your model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ade95df7",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from trl import SFTTrainer\n",
+ "\n",
+ "# Create Trainer object\n",
+ "trainer = SFTTrainer(\n",
+ " model=model,\n",
+ " args=args,\n",
+ " train_dataset=dataset[\"train\"],\n",
+ " peft_config=peft_config,\n",
+ " processing_class=tokenizer\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fad61a6a",
+ "metadata": {},
+ "source": [
+ "Start training by calling the `train()` method."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "995e7e38",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# Start training, the model will be automatically saved to the Hub and the output directory\n",
+ "trainer.train()\n",
+ "\n",
+ "# Save the final model again to the Hugging Face Hub\n",
+ "trainer.save_model()"
+ ]
+ },
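+ {
+ "cell_type": "markdown",
+ "id": "tensorboard-note",
+ "metadata": {},
+ "source": [
+ "Because `report_to=\"tensorboard\"` is set in the `SFTConfig`, you can optionally inspect the training loss curves inside the notebook. This is an illustrative, optional cell that assumes the default logging directory under the output directory `gemma-text-to-sql`; skip it if you prefer to read the metrics printed above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "tensorboard-optional",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: visualize the metrics logged during training (assumes the default logging dir under the output dir)\n",
+ "%load_ext tensorboard\n",
+ "%tensorboard --logdir gemma-text-to-sql"
+ ]
+ },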
+ {
+ "cell_type": "markdown",
+ "id": "b47b9733",
+ "metadata": {},
+ "source": [
+ "Before you can test your model, make sure to free the memory."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "40a32ed7",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "# free the memory again\n",
+ "del model\n",
+ "del trainer\n",
+ "torch.cuda.empty_cache()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "862e9728",
+ "metadata": {},
+ "source": [
+ "When using QLoRA, you only train adapters and not the full model. This means when saving the model during training you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the `merge_and_unload` method and then save the model with the `save_pretrained` method. This saves a default model, which can be used for inference.\n",
+ "\n",
+ "Note: It requires more than 30GB of CPU Memory when you want to merge the adapter into the model. You can skip this and continue with Test Model Inference."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "761e324b",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from peft import PeftModel\n",
+ "\n",
+ "# Load Model base model\n",
+ "model = model_class.from_pretrained(model_id, low_cpu_mem_usage=True)\n",
+ "\n",
+ "# Merge LoRA and base model and save\n",
+ "peft_model = PeftModel.from_pretrained(model, args.output_dir)\n",
+ "merged_model = peft_model.merge_and_unload()\n",
+ "merged_model.save_pretrained(\"merged_model\", safe_serialization=True, max_shard_size=\"2GB\")\n",
+ "\n",
+ "processor = AutoTokenizer.from_pretrained(args.output_dir)\n",
+ "processor.save_pretrained(\"merged_model\")"
+ ]
+ },
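+ {
+ "cell_type": "markdown",
+ "id": "merged-check-note",
+ "metadata": {},
+ "source": [
+ "As an optional, illustrative sanity check (only if enough memory is available), the merged directory behaves like a regular Transformers checkpoint and loads without any PEFT code:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "merged-check-cell",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional sanity check: load the merged checkpoint like any standard transformers model.\n",
+ "# Illustrative only; requires enough free memory for another full copy of the model.\n",
+ "reloaded_tokenizer = AutoTokenizer.from_pretrained(\"merged_model\")\n",
+ "reloaded_model = model_class.from_pretrained(\"merged_model\", device_map=\"auto\", torch_dtype=torch_dtype)\n",
+ "\n",
+ "inputs = reloaded_tokenizer(\"SELECT\", return_tensors=\"pt\").to(reloaded_model.device)\n",
+ "print(reloaded_tokenizer.decode(reloaded_model.generate(**inputs, max_new_tokens=16)[0]))"
+ ]
+ },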
+ {
+ "cell_type": "markdown",
+ "id": "bf86e31d",
+ "metadata": {},
+ "source": [
+ "## Test Model Inference and generate SQL queries\n",
+ "\n",
+ "After the training is done, you'll want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.\n",
+ "\n",
+ "Note: Evaluating generative AI models is not a trivial task since one input can have multiple correct outputs. This guide only focuses on manual evaluation and vibe checks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "aab1c5c5",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from transformers import pipeline\n",
+ "\n",
+ "model_id = \"gemma-text-to-sql\"\n",
+ "\n",
+ "# Load Model with PEFT adapter\n",
+ "model = model_class.from_pretrained(\n",
+ " model_id,\n",
+ " device_map=\"auto\",\n",
+ " torch_dtype=torch_dtype,\n",
+ " attn_implementation=\"eager\",\n",
+ ")\n",
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3dccb57c",
+ "metadata": {},
+ "source": [
+ "Let's load a random sample from the test dataset and generate a SQL command."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1fd887f4",
+ "metadata": {
+ "attributes": {
+ "classes": [
+ "py3"
+ ],
+ "id": ""
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from random import randint\n",
+ "import re\n",
+ "\n",
+ "# Load the model and tokenizer into the pipeline\n",
+ "pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n",
+ "\n",
+ "# Load a random sample from the test dataset\n",
+ "rand_idx = randint(0, len(dataset[\"test\"]))\n",
+ "test_sample = dataset[\"test\"][rand_idx]\n",
+ "\n",
+ "# Convert as test example into a prompt with the Gemma template\n",
+ "stop_token_ids = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(\"\")]\n",
+ "prompt = pipe.tokenizer.apply_chat_template(test_sample[\"messages\"][:2], tokenize=False, add_generation_prompt=True)\n",
+ "\n",
+ "# Generate our SQL query.\n",
+ "outputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=stop_token_ids, disable_compile=True)\n",
+ "\n",
+ "# Extract the user query and original answer\n",
+ "print(f\"Context:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
+ "print(f\"Query:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
+ "print(f\"Original Answer:\\n{test_sample['messages'][1]['content']}\")\n",
+ "print(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")"
+ ]
+ },
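+ {
+ "cell_type": "markdown",
+ "id": "batch-eval-note",
+ "metadata": {},
+ "source": [
+ "To go slightly beyond a single vibe check, the following cell is a rough, illustrative sketch that reuses the pipeline above to compute an exact-string-match rate over a few test samples. Exact string match is a crude proxy for SQL correctness, since semantically equivalent queries can differ textually, so treat the number only as a rough signal. The sample count below is an arbitrary choice."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "batch-eval-cell",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Rough, illustrative evaluation: exact string match over a small slice of the test set\n",
+ "num_samples = 20\n",
+ "matches = 0\n",
+ "for idx in range(num_samples):\n",
+ " sample = dataset[\"test\"][idx]\n",
+ " eval_prompt = pipe.tokenizer.apply_chat_template(sample[\"messages\"][:1], tokenize=False, add_generation_prompt=True)\n",
+ " result = pipe(eval_prompt, max_new_tokens=256, do_sample=False, eos_token_id=stop_token_ids, disable_compile=True)\n",
+ " prediction = result[0][\"generated_text\"][len(eval_prompt):].strip()\n",
+ " matches += int(prediction == sample[\"messages\"][1][\"content\"].strip())\n",
+ "print(matches, \"/\", num_samples, \"exact matches\")"
+ ]
+ },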
+ {
+ "cell_type": "markdown",
+ "id": "6f8ff452",
+ "metadata": {},
+ "source": [
+ "## Summary and next steps\n",
+ "\n",
+ "This tutorial covered how to fine-tune a Gemma model using TRL and QLoRA. Check out the following docs next:\n",
+ "\n",
+ "* Learn how to [generate text with a Gemma model](https://ai.google.dev/gemma/docs/get_started).\n",
+ "* Learn how to [fine-tune Gemma for vision tasks using Hugging Face Transformers](https://ai.google.dev/gemma/docs/core/huggingface_vision_finetune_qlora).\n",
+ "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/core/distributed_tuning).\n",
+ "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma).\n",
+ "* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
+ ]
+ }
+ ],
+ "metadata": {
+ "language_info": {
+ "name": "python"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
From 290798f4b02b8facc84e025110a8f2b985b9bf18 Mon Sep 17 00:00:00 2001
From: philschmid
Date: Fri, 14 Mar 2025 14:19:15 +0000
Subject: [PATCH 2/4] format
---
.../huggingface_text_finetune_qlora.ipynb | 1256 ++++++++---------
1 file changed, 618 insertions(+), 638 deletions(-)
diff --git a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
index accfd4579..b309080cd 100644
--- a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
+++ b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
@@ -1,643 +1,623 @@
{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "926bada6",
- "metadata": {},
- "source": [
- "##### Copyright 2025 Google LLC."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "a110dfce",
- "metadata": {},
- "outputs": [],
- "source": [
- "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
- "# you may not use this file except in compliance with the License.\n",
- "# You may obtain a copy of the License at\n",
- "#\n",
- "# https://www.apache.org/licenses/LICENSE-2.0\n",
- "#\n",
- "# Unless required by applicable law or agreed to in writing, software\n",
- "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
- "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
- "# See the License for the specific language governing permissions and\n",
- "# limitations under the License."
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f9673bd6",
- "metadata": {},
- "source": [
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "id": "e624ec07",
- "metadata": {},
- "source": [
- "# Fine-Tune Gemma using Hugging Face Transformers and QloRA\n",
- "\n",
- "This guide walks you through how to fine-tune Gemma on a custom text-to-sql dataset using Hugging Face [Transformers](https://huggingface.co/docs/transformers/index) and [TRL](https://huggingface.co/docs/trl/index). You will learn:\n",
- "\n",
- "- What is Quantized Low-Rank Adaptation (QLoRA)\n",
- "- Setup development environment\n",
- "- Create and prepare the fine-tuning dataset\n",
- "- Fine-tune Gemma using TRL and the SFTTrainer\n",
- "- Test Model Inference and generate SQL queries\n",
- "\n",
- "Note: This guide was created to run on a Google colaboratory account using a NVIDIA T4 GPU with 16GB and Gemma 1B, but can be adapted to run on bigger GPUs and bigger models.\n",
- "\n",
- "## What is Quantized Low-Rank Adaptation (QLoRA)\n",
- "\n",
- "This guide demonstrates the use of [Quantized Low-Rank Adaptation (QLoRA)](https://arxiv.org/abs/2305.14314), which emerged as a popular method to efficiently fine-tune LLMs as it reduces computational resource requirements while maintaining high performance. In QloRA, the pretrained model is quantized to 4-bit and the weights are frozen. Then trainable adapter layers (LoRA) are attached and only the adapter layers are trained. Afterwards, the adapter weights can be merged with the base model or kept as a separate adapter.\n",
- "\n",
- "## Setup development environment\n",
- "\n",
- "The first step is to install Hugging Face Libraries, including TRL, and datasets to fine-tune open model, including different RLHF and alignment techniques."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ba51aa79",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# Install Pytorch & other libraries\n",
- "%pip install \"torch>=2.4.0\" tensorboard\n",
- "\n",
- "# Install Gemma release branch from Hugging Face\n",
- "%pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3\n",
- "\n",
- "# Install Hugging Face libraries\n",
- "%pip install --upgrade \\\n",
- " \"datasets==3.3.2\" \\\n",
- " \"accelerate==1.4.0\" \\\n",
- " \"evaluate==0.4.3\" \\\n",
- " \"bitsandbytes==0.45.3\" \\\n",
- " \"trl==0.15.2\" \\\n",
- " \"peft==0.14.0\" \\\n",
- " protobuf \\\n",
- " sentencepiece\n",
- "\n",
- "# COMMENT IN: if you are running on a GPU that supports BF16 data type and flash attn, such as NVIDIA L4 or NVIDIA A100\n",
- "#% pip install flash-attn"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "7ef3d54b",
- "metadata": {},
- "source": [
- "_Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash attention. Flash Attention is a method that significantly speeds computations up and reduces memory usage from quadratic to linear in sequence length, leading to acelerating training up to 3x. Learn more at [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/main)._\n",
- "\n",
- "Before you can start training, you have to make sure that you accepted the terms of use for Gemma. You can accept the license on [Hugging Face](http://huggingface.co/google/gemma-3-1b-pt) by clicking on the Agree and access repository button on the model page at: http://huggingface.co/google/gemma-3-1b-pt\n",
- "\n",
- "After you have accepted the license, you need a valid Hugging Face Token to access the model. If you are running inside a Google Colab, you can securely use your Hugging Face Token using the Colab secrets otherwise you can set the token as directly in the `login` method. Make sure your token has write access too, as you push your model to the Hub during training."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "b6d79c93",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "926bada6",
+ "metadata": {
+ "id": "3c5dbcc9ae0c"
+ },
+ "source": [
+ "##### Copyright 2025 Google LLC."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a110dfce",
+ "metadata": {
+ "cellView": "form",
+ "id": "906e07f6e562"
+ },
+ "outputs": [],
+ "source": [
+ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
+ "# you may not use this file except in compliance with the License.\n",
+ "# You may obtain a copy of the License at\n",
+ "#\n",
+ "# https://www.apache.org/licenses/LICENSE-2.0\n",
+ "#\n",
+ "# Unless required by applicable law or agreed to in writing, software\n",
+ "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
+ "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
+ "# See the License for the specific language governing permissions and\n",
+ "# limitations under the License."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "f9673bd6",
+ "metadata": {
+ "id": "d7d2099eafce"
+ },
+ "source": [
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e624ec07",
+ "metadata": {
+ "id": "8127e779d635"
+ },
+ "source": [
+ "# Fine-Tune Gemma using Hugging Face Transformers and QloRA\n",
+ "\n",
+ "This guide walks you through how to fine-tune Gemma on a custom text-to-sql dataset using Hugging Face [Transformers](https://huggingface.co/docs/transformers/index) and [TRL](https://huggingface.co/docs/trl/index). You will learn:\n",
+ "\n",
+ "- What is Quantized Low-Rank Adaptation (QLoRA)\n",
+ "- Setup development environment\n",
+ "- Create and prepare the fine-tuning dataset\n",
+ "- Fine-tune Gemma using TRL and the SFTTrainer\n",
+ "- Test Model Inference and generate SQL queries\n",
+ "\n",
+ "Note: This guide was created to run on a Google colaboratory account using a NVIDIA T4 GPU with 16GB and Gemma 1B, but can be adapted to run on bigger GPUs and bigger models.\n",
+ "\n",
+ "## What is Quantized Low-Rank Adaptation (QLoRA)\n",
+ "\n",
+ "This guide demonstrates the use of [Quantized Low-Rank Adaptation (QLoRA)](https://arxiv.org/abs/2305.14314), which emerged as a popular method to efficiently fine-tune LLMs as it reduces computational resource requirements while maintaining high performance. In QloRA, the pretrained model is quantized to 4-bit and the weights are frozen. Then trainable adapter layers (LoRA) are attached and only the adapter layers are trained. Afterwards, the adapter weights can be merged with the base model or kept as a separate adapter.\n",
+ "\n",
+ "## Setup development environment\n",
+ "\n",
+ "The first step is to install Hugging Face Libraries, including TRL, and datasets to fine-tune open model, including different RLHF and alignment techniques."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ba51aa79",
+ "metadata": {
+ "id": "c2883ce6ba68"
+ },
+ "outputs": [],
+ "source": [
+ "# Install Pytorch & other libraries\n",
+ "%pip install \"torch>=2.4.0\" tensorboard\n",
+ "\n",
+ "# Install Gemma release branch from Hugging Face\n",
+ "%pip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3\n",
+ "\n",
+ "# Install Hugging Face libraries\n",
+ "%pip install --upgrade \\\n",
+ " \"datasets==3.3.2\" \\\n",
+ " \"accelerate==1.4.0\" \\\n",
+ " \"evaluate==0.4.3\" \\\n",
+ " \"bitsandbytes==0.45.3\" \\\n",
+ " \"trl==0.15.2\" \\\n",
+ " \"peft==0.14.0\" \\\n",
+ " \"protobuf<4\" \\\n",
+ " sentencepiece\n",
+ "\n",
+ "# COMMENT IN: if you are running on a GPU that supports BF16 data type and flash attn, such as NVIDIA L4 or NVIDIA A100\n",
+ "#% pip install flash-attn"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7ef3d54b",
+ "metadata": {
+ "id": "4c62ac9014a2"
+ },
+ "source": [
+ "_Note: If you are using a GPU with Ampere architecture (such as NVIDIA L4) or newer, you can use Flash attention. Flash Attention is a method that significantly speeds computations up and reduces memory usage from quadratic to linear in sequence length, leading to acelerating training up to 3x. Learn more at [FlashAttention](https://github.com/Dao-AILab/flash-attention/tree/main)._\n",
+ "\n",
+ "Before you can start training, you have to make sure that you accepted the terms of use for Gemma. You can accept the license on [Hugging Face](http://huggingface.co/google/gemma-3-1b-pt) by clicking on the Agree and access repository button on the model page at: http://huggingface.co/google/gemma-3-1b-pt\n",
+ "\n",
+ "After you have accepted the license, you need a valid Hugging Face Token to access the model. If you are running inside a Google Colab, you can securely use your Hugging Face Token using the Colab secrets otherwise you can set the token as directly in the `login` method. Make sure your token has write access too, as you push your model to the Hub during training."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b6d79c93",
+ "metadata": {
+ "id": "af3aaebfa7d0"
+ },
+ "outputs": [],
+ "source": [
+ "from google.colab import userdata\n",
+ "from huggingface_hub import login\n",
+ "\n",
+ "# Login into Hugging Face Hub\n",
+ "hf_token = userdata.get('HF_TOKEN') # If you are running inside a Google Colab \n",
+ "login(hf_token)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "42c60525",
+ "metadata": {
+ "id": "b9765bcf07a2"
+ },
+ "source": [
+ "## Create and prepare the fine-tuning dataset\n",
+ "\n",
+ "When fine-tuning LLMs, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.\n",
+ "\n",
+ "As an example, this guide focuses on the following use case:\n",
+ "\n",
+ "- Fine-tune a natural language to SQL model for seamless integration into a data analysis tool. The objective is to significantly reduce the time and expertise required for SQL query generation, enabling even non-technical users to extract meaningful insights from data.\n",
+ "\n",
+ "Text-to-SQL can be a good use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.\n",
+ "\n",
+ "Once you have determined that fine-tuning is the right solution, you need a dataset to fine-tune. The dataset should be a diverse set of demonstrations of the task(s) you want to solve. There are several ways to create such a dataset, including:\n",
+ "\n",
+ "- Using existing open-source datasets, such as [Spider](https://huggingface.co/datasets/spider)\n",
+ "- Using synthetic datasets created by LLMs, such as [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)\n",
+ "- Using datasets created by humans, such as [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k).\n",
+ "- Using a combination of the methods, such as [Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca)\n",
+ "\n",
+ "Each of the methods has its own advantages and disadvantages and depends on the budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using domain experts might be the most accurate but can be time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in [Orca: Progressive Learning from Complex Explanation Traces of GPT-4.](https://arxiv.org/abs/2306.02707)\n",
+ "\n",
+ "This guide uses an already existing dataset ([philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql)), a high quality synthetic Text-to-SQL dataset including natural language instructions, schema definitions, reasoning and the corresponding SQL query.\n",
+ "\n",
+ "[Hugging Face TRL](https://huggingface.co/docs/trl/en/index) supports automatic templating of conversation dataset formats. This means you only need to convert your dataset into the right json objects, and `trl` takes care of templating and putting it into the right format.\n",
+ "\n",
+ "```\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c4ecf6db",
+ "metadata": {
+ "id": "eb639af2b54c"
+ },
+ "source": [
+ "The [philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql) contains over 100k samples. To keep the guide small, it is downsampled to only use 10,000 samples.\n",
+ "\n",
+ "You can now use the Hugging Face Datasets library to load the dataset and create a prompt template to combine the natural language instruction, schema definition and add a system message for your assistant."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "40c3a2cf",
+ "metadata": {
+ "id": "03b3e5c4bdcc"
+ },
+ "outputs": [],
+ "source": [
+ "from datasets import load_dataset\n",
+ "\n",
+ "# System message for the assistant \n",
+ "system_message = \"\"\"You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\"\"\"\n",
+ "\n",
+ "# User prompt that combines the user query and the schema\n",
+ "user_prompt = \"\"\"Given the and the , generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.\n",
+ "\n",
+ "\n",
+ "{context}\n",
+ "\n",
+ "\n",
+ "\n",
+ "{question}\n",
+ "\n",
+ "\"\"\"\n",
+ "def create_conversation(sample):\n",
+ " return {\n",
+ " \"messages\": [\n",
+ " # {\"role\": \"system\", \"content\": system_message},\n",
+ " {\"role\": \"user\", \"content\": user_prompt.format(question=sample[\"sql_prompt\"], context=sample[\"sql_context\"])},\n",
+ " {\"role\": \"assistant\", \"content\": sample[\"sql\"]}\n",
+ " ]\n",
+ " } \n",
+ "\n",
+ "# Load dataset from the hub\n",
+ "dataset = load_dataset(\"philschmid/gretel-synthetic-text-to-sql\", split=\"train\")\n",
+ "dataset = dataset.shuffle().select(range(12500))\n",
+ "\n",
+ "# Convert dataset to OAI messages\n",
+ "dataset = dataset.map(create_conversation, remove_columns=dataset.features,batched=False)\n",
+ "# split dataset into 10,000 training samples and 2,500 test samples\n",
+ "dataset = dataset.train_test_split(test_size=2500/12500)\n",
+ "\n",
+ "# Print formatted user prompt\n",
+ "print(dataset[\"train\"][345][\"messages\"][1][\"content\"])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c0eb2e06",
+ "metadata": {
+ "id": "54b135d17bb5"
+ },
+ "source": [
+ "## Fine-tune Gemma using TRL and the SFTTrainer\n",
+ "\n",
+ "You are now ready to fine-tune your model. Hugging Face TRL [SFTTrainer](https://huggingface.co/docs/trl/sft_trainer) makes it straightforward to supervise fine-tune open LLMs. The `SFTTrainer` is a subclass of the `Trainer` from the `transformers` library and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality of life features, including:\n",
+ "\n",
+ "* Dataset formatting, including conversational and instruction formats\n",
+ "* Training on completions only, ignoring prompts\n",
+ "* Packing datasets for more efficient training\n",
+ "* Parameter-efficient fine-tuning (PEFT) support including QloRA\n",
+ "* Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)\n",
+ "\n",
+ "The following code loads the Gemma model and tokenizer from Hugging Face and initializes the quantization configuration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "18069ed2",
+ "metadata": {
+ "id": "9ef12b04dd2a"
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForImageTextToText, BitsAndBytesConfig\n",
+ "\n",
+ "# Hugging Face model id\n",
+ "model_id = \"google/gemma-3-1b-pt\" # or `google/gemma-3-4b-pt`, `google/gemma-3-12b-pt`, `google/gemma-3-27b-pt`\n",
+ "\n",
+ "# Select model class based on id\n",
+ "if model_id == \"google/gemma-3-1b-pt\":\n",
+ " model_class = AutoModelForCausalLM\n",
+ "else:\n",
+ " model_class = AutoModelForImageTextToText\n",
+ "\n",
+ "# Check if GPU benefits from bfloat16\n",
+ "if torch.cuda.get_device_capability()[0] >= 8:\n",
+ " torch_dtype = torch.bfloat16\n",
+ "else:\n",
+ " torch_dtype = torch.float16\n",
+ "\n",
+ "# Define model init arguments\n",
+ "model_kwargs = dict(\n",
+ " attn_implementation=\"eager\", # Use \"flash_attention_2\" when running on Ampere or newer GPU\n",
+ " torch_dtype=torch_dtype, # What torch dtype to use, defaults to auto\n",
+ " device_map=\"auto\", # Let torch decide how to load the model\n",
+ ")\n",
+ "\n",
+ "# BitsAndBytesConfig: Enables 4-bit quantization to reduce model size/memory usage\n",
+ "model_kwargs[\"quantization_config\"] = BitsAndBytesConfig(\n",
+ " load_in_4bit=True,\n",
+ " bnb_4bit_use_double_quant=True,\n",
+ " bnb_4bit_quant_type='nf4',\n",
+ " bnb_4bit_compute_dtype=model_kwargs['torch_dtype'],\n",
+ " bnb_4bit_quant_storage=model_kwargs['torch_dtype'],\n",
+ ")\n",
+ "\n",
+ "# Load model and tokenizer\n",
+ "model = model_class.from_pretrained(model_id, **model_kwargs)\n",
+ "tokenizer = AutoTokenizer.from_pretrained(\"google/gemma-3-1b-it\") # Load the Instruction Tokenizer to use the official Gemma template"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "37ec1d1b",
+ "metadata": {
+ "id": "3a062e371c81"
+ },
+ "source": [
+ "The `SFTTrainer` supports a native integration with `peft`, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a `LoraConfig` and provide it to the trainer."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ed00e846",
+ "metadata": {
+ "id": "cf20be9605bd"
+ },
+ "outputs": [],
+ "source": [
+ "from peft import LoraConfig\n",
+ "\n",
+ "peft_config = LoraConfig(\n",
+ " lora_alpha=16,\n",
+ " lora_dropout=0.05,\n",
+ " r=16,\n",
+ " bias=\"none\",\n",
+ " target_modules=\"all-linear\",\n",
+ " task_type=\"CAUSAL_LM\",\n",
+ " modules_to_save=[\"lm_head\", \"embed_tokens\"] # make sure to save the lm_head and embed_tokens as you train the special tokens\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bbd9fc1b",
+ "metadata": {
+ "id": "de3610c6ffa0"
+ },
+ "source": [
+ "Before you can start your training, you need to define the hyperparameter you want to use in a `SFTConfig` instance."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "989be3c1",
+ "metadata": {
+ "id": "094c8a6fa19f"
+ },
+ "outputs": [],
+ "source": [
+ "from trl import SFTConfig\n",
+ "\n",
+ "args = SFTConfig(\n",
+ " output_dir=\"gemma-text-to-sql\", # directory to save and repository id\n",
+ " max_seq_length=512, # max sequence length for model and packing of the dataset\n",
+ " packing=True, # Groups multiple samples in the dataset into a single sequence\n",
+ " num_train_epochs=3, # number of training epochs\n",
+ " per_device_train_batch_size=1, # batch size per device during training\n",
+ " gradient_accumulation_steps=4, # number of steps before performing a backward/update pass\n",
+ " gradient_checkpointing=True, # use gradient checkpointing to save memory\n",
+ " optim=\"adamw_torch_fused\", # use fused adamw optimizer\n",
+ " logging_steps=10, # log every 10 steps\n",
+ " save_strategy=\"epoch\", # save checkpoint every epoch\n",
+ " learning_rate=2e-4, # learning rate, based on QLoRA paper\n",
+ " fp16=True if torch_dtype == torch.float16 else False, # use float16 precision\n",
+ " bf16=True if torch_dtype == torch.bfloat16 else False, # use bfloat16 precision\n",
+ " max_grad_norm=0.3, # max gradient norm based on QLoRA paper\n",
+ " warmup_ratio=0.03, # warmup ratio based on QLoRA paper\n",
+ " lr_scheduler_type=\"constant\", # use constant learning rate scheduler\n",
+ " push_to_hub=True, # push model to hub\n",
+ " report_to=\"tensorboard\", # report metrics to tensorboard\n",
+ " dataset_kwargs={\n",
+ " \"add_special_tokens\": False, # We template with special tokens\n",
+ " \"append_concat_token\": True, # Add EOS token as separator token between examples\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dd88e798",
+ "metadata": {
+ "id": "9d26b4c47de3"
+ },
+ "source": [
+ "You now have every building block you need to create your `SFTTrainer` to start the training of your model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "ade95df7",
+ "metadata": {
+ "id": "c512e5b223f3"
+ },
+ "outputs": [],
+ "source": [
+ "from trl import SFTTrainer\n",
+ "\n",
+ "# Create Trainer object\n",
+ "trainer = SFTTrainer(\n",
+ " model=model,\n",
+ " args=args,\n",
+ " train_dataset=dataset[\"train\"],\n",
+ " peft_config=peft_config,\n",
+ " processing_class=tokenizer\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fad61a6a",
+ "metadata": {
+ "id": "0658bdb606c6"
+ },
+ "source": [
+ "Start training by calling the `train()` method."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "995e7e38",
+ "metadata": {
+ "id": "404979808661"
+ },
+ "outputs": [],
+ "source": [
+ "# Start training, the model will be automatically saved to the Hub and the output directory\n",
+ "trainer.train()\n",
+ "\n",
+ "# Save the final model again to the Hugging Face Hub\n",
+ "trainer.save_model()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b47b9733",
+ "metadata": {
+ "id": "402f62975f8a"
+ },
+ "source": [
+ "Before you can test your model, make sure to free the memory."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "40a32ed7",
+ "metadata": {
+ "id": "f1eea9ab9c32"
+ },
+ "outputs": [],
+ "source": [
+ "# free the memory again\n",
+ "del model\n",
+ "del trainer\n",
+ "torch.cuda.empty_cache()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "862e9728",
+ "metadata": {
+ "id": "e2746be11ef2"
+ },
+ "source": [
+ "When using QLoRA, you only train adapters and not the full model. This means when saving the model during training you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the `merge_and_unload` method and then save the model with the `save_pretrained` method. This saves a default model, which can be used for inference.\n",
+ "\n",
+ "Note: It requires more than 30GB of CPU Memory when you want to merge the adapter into the model. You can skip this and continue with Test Model Inference."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "761e324b",
+ "metadata": {
+ "id": "cae5b572abbb"
+ },
+ "outputs": [],
+ "source": [
+ "from peft import PeftModel\n",
+ "\n",
+ "# Load Model base model\n",
+ "model = model_class.from_pretrained(model_id, low_cpu_mem_usage=True)\n",
+ "\n",
+ "# Merge LoRA and base model and save\n",
+ "peft_model = PeftModel.from_pretrained(model, args.output_dir)\n",
+ "merged_model = peft_model.merge_and_unload()\n",
+ "merged_model.save_pretrained(\"merged_model\", safe_serialization=True, max_shard_size=\"2GB\")\n",
+ "\n",
+ "processor = AutoTokenizer.from_pretrained(args.output_dir)\n",
+ "processor.save_pretrained(\"merged_model\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bf86e31d",
+ "metadata": {
+ "id": "51cfef1ff581"
+ },
+ "source": [
+ "## Test Model Inference and generate SQL queries\n",
+ "\n",
+ "After the training is done, you'll want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.\n",
+ "\n",
+ "Note: Evaluating generative AI models is not a trivial task since one input can have multiple correct outputs. This guide only focuses on manual evaluation and vibe checks."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "aab1c5c5",
+ "metadata": {
+ "id": "0f3b26afb2e4"
+ },
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "from transformers import pipeline\n",
+ "\n",
+ "model_id = \"gemma-text-to-sql\"\n",
+ "\n",
+ "# Load Model with PEFT adapter\n",
+ "model = model_class.from_pretrained(\n",
+ " model_id,\n",
+ " device_map=\"auto\",\n",
+ " torch_dtype=torch_dtype,\n",
+ " attn_implementation=\"eager\",\n",
+ ")\n",
+ "tokenizer = AutoTokenizer.from_pretrained(model_id)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3dccb57c",
+ "metadata": {
+ "id": "25e2bb9acfab"
+ },
+ "source": [
+ "Let's load a random sample from the test dataset and generate a SQL command."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1fd887f4",
+ "metadata": {
+ "id": "f58be7d77c02"
+ },
+ "outputs": [],
+ "source": [
+ "from random import randint\n",
+ "import re\n",
+ "\n",
+ "# Load the model and tokenizer into the pipeline\n",
+ "pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n",
+ "\n",
+ "# Load a random sample from the test dataset\n",
+ "rand_idx = randint(0, len(dataset[\"test\"]))\n",
+ "test_sample = dataset[\"test\"][rand_idx]\n",
+ "\n",
+ "# Convert as test example into a prompt with the Gemma template\n",
+ "stop_token_ids = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(\"\")]\n",
+ "prompt = pipe.tokenizer.apply_chat_template(test_sample[\"messages\"][:2], tokenize=False, add_generation_prompt=True)\n",
+ "\n",
+ "# Generate our SQL query.\n",
+ "outputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=stop_token_ids, disable_compile=True)\n",
+ "\n",
+ "# Extract the user query and original answer\n",
+ "print(f\"Context:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
+ "print(f\"Query:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
+ "print(f\"Original Answer:\\n{test_sample['messages'][1]['content']}\")\n",
+ "print(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "6f8ff452",
+ "metadata": {
+ "id": "df36c2036a87"
+ },
+ "source": [
+ "## Summary and next steps\n",
+ "\n",
+ "This tutorial covered how to fine-tune a Gemma model using TRL and QLoRA. Check out the following docs next:\n",
+ "\n",
+ "* Learn how to [generate text with a Gemma model](https://ai.google.dev/gemma/docs/get_started).\n",
+ "* Learn how to [fine-tune Gemma for vision tasks using Hugging Face Transformers](https://ai.google.dev/gemma/docs/core/huggingface_vision_finetune_qlora).\n",
+ "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/core/distributed_tuning).\n",
+ "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma).\n",
+ "* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
+ ]
}
- },
- "outputs": [],
- "source": [
- "from google.colab import userdata\n",
- "from huggingface_hub import login\n",
- "\n",
- "# Login into Hugging Face Hub\n",
- "hf_token = userdata.get('HF_TOKEN') # If you are running inside a Google Colab \n",
- "login(hf_token)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "42c60525",
- "metadata": {},
- "source": [
- "## Create and prepare the fine-tuning dataset\n",
- "\n",
- "When fine-tuning LLMs, it is important to know your use case and the task you want to solve. This helps you create a dataset to fine-tune your model. If you haven't defined your use case yet, you might want to go back to the drawing board.\n",
- "\n",
- "As an example, this guide focuses on the following use case:\n",
- "\n",
- "- Fine-tune a natural language to SQL model for seamless integration into a data analysis tool. The objective is to significantly reduce the time and expertise required for SQL query generation, enabling even non-technical users to extract meaningful insights from data.\n",
- "\n",
- "Text-to-SQL can be a good use case for fine-tuning LLMs, as it is a complex task that requires a lot of (internal) knowledge about the data and the SQL language.\n",
- "\n",
- "Once you have determined that fine-tuning is the right solution, you need a dataset to fine-tune. The dataset should be a diverse set of demonstrations of the task(s) you want to solve. There are several ways to create such a dataset, including:\n",
- "\n",
- "- Using existing open-source datasets, such as [Spider](https://huggingface.co/datasets/spider)\n",
- "- Using synthetic datasets created by LLMs, such as [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca)\n",
- "- Using datasets created by humans, such as [Dolly](https://huggingface.co/datasets/databricks/databricks-dolly-15k).\n",
- "- Using a combination of the methods, such as [Orca](https://huggingface.co/datasets/Open-Orca/OpenOrca)\n",
- "\n",
- "Each of the methods has its own advantages and disadvantages and depends on the budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using domain experts might be the most accurate but can be time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in [Orca: Progressive Learning from Complex Explanation Traces of GPT-4.](https://arxiv.org/abs/2306.02707)\n",
- "\n",
- "This guide uses an already existing dataset ([philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql)), a high quality synthetic Text-to-SQL dataset including natural language instructions, schema definitions, reasoning and the corresponding SQL query.\n",
- "\n",
- "[Hugging Face TRL](https://huggingface.co/docs/trl/en/index) supports automatic templating of conversation dataset formats. This means you only need to convert your dataset into the right json objects, and `trl` takes care of templating and putting it into the right format.\n",
- "\n",
- "```\n",
- "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
- "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
- "{\"messages\": [{\"role\": \"system\", \"content\": \"You are...\"}, {\"role\": \"user\", \"content\": \"...\"}, {\"role\": \"assistant\", \"content\": \"...\"}]}\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c4ecf6db",
- "metadata": {},
- "source": [
- "The [philschmid/gretel-synthetic-text-to-sql](https://huggingface.co/datasets/philschmid/gretel-synthetic-text-to-sql) contains over 100k samples. To keep the guide small, it is downsampled to only use 10,000 samples.\n",
- "\n",
- "You can now use the Hugging Face Datasets library to load the dataset and create a prompt template to combine the natural language instruction, schema definition and add a system message for your assistant."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "40c3a2cf",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "from datasets import load_dataset\n",
- "\n",
- "# System message for the assistant \n",
- "system_message = \"\"\"You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\"\"\"\n",
- "\n",
- "# User prompt that combines the user query and the schema\n",
- "user_prompt = \"\"\"Given the and the , generate the corresponding SQL command to retrieve the desired data, considering the query's syntax, semantics, and schema constraints.\n",
- "\n",
- "\n",
- "{context}\n",
- "\n",
- "\n",
- "\n",
- "{question}\n",
- "\n",
- "\"\"\"\n",
- "def create_conversation(sample):\n",
- " return {\n",
- " \"messages\": [\n",
- " # {\"role\": \"system\", \"content\": system_message},\n",
- " {\"role\": \"user\", \"content\": user_prompt.format(question=sample[\"sql_prompt\"], context=sample[\"sql_context\"])},\n",
- " {\"role\": \"assistant\", \"content\": sample[\"sql\"]}\n",
- " ]\n",
- " } \n",
- "\n",
- "# Load dataset from the hub\n",
- "dataset = load_dataset(\"philschmid/gretel-synthetic-text-to-sql\", split=\"train\")\n",
- "dataset = dataset.shuffle().select(range(12500))\n",
- "\n",
- "# Convert dataset to OAI messages\n",
- "dataset = dataset.map(create_conversation, remove_columns=dataset.features,batched=False)\n",
- "# split dataset into 10,000 training samples and 2,500 test samples\n",
- "dataset = dataset.train_test_split(test_size=2500/12500)\n",
- "\n",
- "# Print formatted user prompt\n",
- "print(dataset[\"train\"][345][\"messages\"][1][\"content\"])"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "c0eb2e06",
- "metadata": {},
- "source": [
- "## Fine-tune Gemma using TRL and the SFTTrainer\n",
- "\n",
- "You are now ready to fine-tune your model. Hugging Face TRL [SFTTrainer](https://huggingface.co/docs/trl/sft_trainer) makes it straightforward to supervise fine-tune open LLMs. The `SFTTrainer` is a subclass of the `Trainer` from the `transformers` library and supports all the same features, including logging, evaluation, and checkpointing, but adds additional quality of life features, including:\n",
- "\n",
- "* Dataset formatting, including conversational and instruction formats\n",
- "* Training on completions only, ignoring prompts\n",
- "* Packing datasets for more efficient training\n",
- "* Parameter-efficient fine-tuning (PEFT) support including QloRA\n",
- "* Preparing the model and tokenizer for conversational fine-tuning (such as adding special tokens)\n",
- "\n",
- "The following code loads the Gemma model and tokenizer from Hugging Face and initializes the quantization configuration."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "18069ed2",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "import torch\n",
- "from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForImageTextToText, BitsAndBytesConfig\n",
- "\n",
- "# Hugging Face model id\n",
- "model_id = \"google/gemma-3-1b-pt\" # or `google/gemma-3-4b-pt`, `google/gemma-3-12b-pt`, `google/gemma-3-27b-pt`\n",
- "\n",
- "# Select model class based on id\n",
- "if model_id == \"google/gemma-3-1b-pt\":\n",
- " model_class = AutoModelForCausalLM\n",
- "else:\n",
- " model_class = AutoModelForImageTextToText\n",
- "\n",
- "# Check if GPU benefits from bfloat16\n",
- "if torch.cuda.get_device_capability()[0] >= 8:\n",
- " torch_dtype = torch.bfloat16\n",
- "else:\n",
- " torch_dtype = torch.float16\n",
- "\n",
- "# Define model init arguments\n",
- "model_kwargs = dict(\n",
- " attn_implementation=\"eager\", # Use \"flash_attention_2\" when running on Ampere or newer GPU\n",
- " torch_dtype=torch_dtype, # What torch dtype to use, defaults to auto\n",
- " device_map=\"auto\", # Let torch decide how to load the model\n",
- ")\n",
- "\n",
- "# BitsAndBytesConfig: Enables 4-bit quantization to reduce model size/memory usage\n",
- "model_kwargs[\"quantization_config\"] = BitsAndBytesConfig(\n",
- " load_in_4bit=True,\n",
- " bnb_4bit_use_double_quant=True,\n",
- " bnb_4bit_quant_type='nf4',\n",
- " bnb_4bit_compute_dtype=model_kwargs['torch_dtype'],\n",
- " bnb_4bit_quant_storage=model_kwargs['torch_dtype'],\n",
- ")\n",
- "\n",
- "# Load model and tokenizer\n",
- "model = model_class.from_pretrained(model_id, **model_kwargs)\n",
- "tokenizer = AutoTokenizer.from_pretrained(\"google/gemma-3-1b-it\") # Load the Instruction Tokenizer to use the official Gemma template"
- ]
- },
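- {
- "cell_type": "markdown",
- "id": "3f2c1a9b",
- "metadata": {},
- "source": [
- "Optionally, you can verify what the model will actually be trained on by rendering one converted sample with the official Gemma chat template. This is a minimal sanity check, assuming the `dataset` created in the data preparation step above is still in memory."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "8d7e6f5a",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# Optional sanity check: render one converted training sample with the Gemma chat template\n",
- "# (assumes `dataset` from the data preparation step above is still in memory)\n",
- "sample_messages = dataset[\"train\"][0][\"messages\"]\n",
- "print(tokenizer.apply_chat_template(sample_messages, tokenize=False, add_generation_prompt=False))"
- ]
- },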
- {
- "cell_type": "markdown",
- "id": "37ec1d1b",
- "metadata": {},
- "source": [
- "The `SFTTrainer` supports a native integration with `peft`, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a `LoraConfig` and provide it to the trainer."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ed00e846",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "from peft import LoraConfig\n",
- "\n",
- "peft_config = LoraConfig(\n",
- " lora_alpha=16,\n",
- " lora_dropout=0.05,\n",
- " r=16,\n",
- " bias=\"none\",\n",
- " target_modules=\"all-linear\",\n",
- " task_type=\"CAUSAL_LM\",\n",
- " modules_to_save=[\"lm_head\", \"embed_tokens\"] # make sure to save the lm_head and embed_tokens as you train the special tokens\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "bbd9fc1b",
- "metadata": {},
- "source": [
- "Before you can start your training, you need to define the hyperparameter you want to use in a `SFTConfig` instance."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "989be3c1",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "from trl import SFTConfig\n",
- "\n",
- "args = SFTConfig(\n",
- " output_dir=\"gemma-text-to-sql\", # directory to save and repository id\n",
- " max_seq_length=512, # max sequence length for model and packing of the dataset\n",
- " packing=True, # Groups multiple samples in the dataset into a single sequence\n",
- " num_train_epochs=3, # number of training epochs\n",
- " per_device_train_batch_size=1, # batch size per device during training\n",
- " gradient_accumulation_steps=4, # number of steps before performing a backward/update pass\n",
- " gradient_checkpointing=True, # use gradient checkpointing to save memory\n",
- " optim=\"adamw_torch_fused\", # use fused adamw optimizer\n",
- " logging_steps=10, # log every 10 steps\n",
- " save_strategy=\"epoch\", # save checkpoint every epoch\n",
- " learning_rate=2e-4, # learning rate, based on QLoRA paper\n",
- " fp16=True if torch_dtype == torch.float16 else False, # use float16 precision\n",
- " bf16=True if torch_dtype == torch.bfloat16 else False, # use bfloat16 precision\n",
- " max_grad_norm=0.3, # max gradient norm based on QLoRA paper\n",
- " warmup_ratio=0.03, # warmup ratio based on QLoRA paper\n",
- " lr_scheduler_type=\"constant\", # use constant learning rate scheduler\n",
- " push_to_hub=True, # push model to hub\n",
- " report_to=\"tensorboard\", # report metrics to tensorboard\n",
- " dataset_kwargs={\n",
- " \"add_special_tokens\": False, # We template with special tokens\n",
- " \"append_concat_token\": True, # Add EOS token as separator token between examples\n",
- " }\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "dd88e798",
- "metadata": {},
- "source": [
- "You now have every building block you need to create your `SFTTrainer` to start the training of your model."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "ade95df7",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "from trl import SFTTrainer\n",
- "\n",
- "# Create Trainer object\n",
- "trainer = SFTTrainer(\n",
- " model=model,\n",
- " args=args,\n",
- " train_dataset=dataset[\"train\"],\n",
- " peft_config=peft_config,\n",
- " processing_class=tokenizer\n",
- ")"
- ]
- },
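- {
- "cell_type": "markdown",
- "id": "0a1b2c3d",
- "metadata": {},
- "source": [
- "As a quick, optional check, you can print how many parameters are trainable. With QLoRA, only the LoRA adapter layers are updated, plus the `lm_head` and `embed_tokens` modules listed in `modules_to_save`. This assumes `SFTTrainer` has wrapped the model as a PEFT model, which it does when a `peft_config` is provided."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "4e5f6a7b",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# Optional: show how many parameters QLoRA actually trains\n",
- "# (trainer.model is a PEFT-wrapped model because a peft_config was passed to SFTTrainer)\n",
- "trainer.model.print_trainable_parameters()"
- ]
- },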
- {
- "cell_type": "markdown",
- "id": "fad61a6a",
- "metadata": {},
- "source": [
- "Start training by calling the `train()` method."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "995e7e38",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# Start training, the model will be automatically saved to the Hub and the output directory\n",
- "trainer.train()\n",
- "\n",
- "# Save the final model again to the Hugging Face Hub\n",
- "trainer.save_model()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "b47b9733",
- "metadata": {},
- "source": [
- "Before you can test your model, make sure to free the memory."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "40a32ed7",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# free the memory again\n",
- "del model\n",
- "del trainer\n",
- "torch.cuda.empty_cache()"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "862e9728",
- "metadata": {},
- "source": [
- "When using QLoRA, you only train adapters and not the full model. This means when saving the model during training you only save the adapter weights and not the full model. If you want to save the full model, which makes it easier to use with serving stacks like vLLM or TGI, you can merge the adapter weights into the model weights using the `merge_and_unload` method and then save the model with the `save_pretrained` method. This saves a default model, which can be used for inference.\n",
- "\n",
- "Note: It requires more than 30GB of CPU Memory when you want to merge the adapter into the model. You can skip this and continue with Test Model Inference."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "761e324b",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "from peft import PeftModel\n",
- "\n",
- "# Load Model base model\n",
- "model = model_class.from_pretrained(model_id, low_cpu_mem_usage=True)\n",
- "\n",
- "# Merge LoRA and base model and save\n",
- "peft_model = PeftModel.from_pretrained(model, args.output_dir)\n",
- "merged_model = peft_model.merge_and_unload()\n",
- "merged_model.save_pretrained(\"merged_model\", safe_serialization=True, max_shard_size=\"2GB\")\n",
- "\n",
- "processor = AutoTokenizer.from_pretrained(args.output_dir)\n",
- "processor.save_pretrained(\"merged_model\")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "bf86e31d",
- "metadata": {},
- "source": [
- "## Test Model Inference and generate SQL queries\n",
- "\n",
- "After the training is done, you'll want to evaluate and test your model. You can load different samples from the test dataset and evaluate the model on those samples.\n",
- "\n",
- "Note: Evaluating generative AI models is not a trivial task since one input can have multiple correct outputs. This guide only focuses on manual evaluation and vibe checks."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "aab1c5c5",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "import torch\n",
- "from transformers import pipeline\n",
- "\n",
- "model_id = \"gemma-text-to-sql\"\n",
- "\n",
- "# Load Model with PEFT adapter\n",
- "model = model_class.from_pretrained(\n",
- " model_id,\n",
- " device_map=\"auto\",\n",
- " torch_dtype=torch_dtype,\n",
- " attn_implementation=\"eager\",\n",
- ")\n",
- "tokenizer = AutoTokenizer.from_pretrained(model_id)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "3dccb57c",
- "metadata": {},
- "source": [
- "Let's load a random sample from the test dataset and generate a SQL command."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "1fd887f4",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
+ ],
+ "metadata": {
+ "colab": {
+ "name": "huggingface_text_finetune_qlora.ipynb",
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "name": "python3"
}
- },
- "outputs": [],
- "source": [
- "from random import randint\n",
- "import re\n",
- "\n",
- "# Load the model and tokenizer into the pipeline\n",
- "pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n",
- "\n",
- "# Load a random sample from the test dataset\n",
- "rand_idx = randint(0, len(dataset[\"test\"]))\n",
- "test_sample = dataset[\"test\"][rand_idx]\n",
- "\n",
- "# Convert as test example into a prompt with the Gemma template\n",
- "stop_token_ids = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(\"\")]\n",
- "prompt = pipe.tokenizer.apply_chat_template(test_sample[\"messages\"][:2], tokenize=False, add_generation_prompt=True)\n",
- "\n",
- "# Generate our SQL query.\n",
- "outputs = pipe(prompt, max_new_tokens=256, do_sample=False, temperature=0.1, top_k=50, top_p=0.1, eos_token_id=stop_token_ids, disable_compile=True)\n",
- "\n",
- "# Extract the user query and original answer\n",
- "print(f\"Context:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
- "print(f\"Query:\\n\", re.search(r'\\n(.*?)\\n', test_sample['messages'][0]['content'], re.DOTALL).group(1).strip())\n",
- "print(f\"Original Answer:\\n{test_sample['messages'][1]['content']}\")\n",
- "print(f\"Generated Answer:\\n{outputs[0]['generated_text'][len(prompt):].strip()}\")"
- ]
},
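- {
- "cell_type": "markdown",
- "id": "9c8b7a6d",
- "metadata": {},
- "source": [
- "Beyond a single-sample vibe check, you can optionally compute a rough exact-match rate over a small slice of the test set. Exact string match is a strict lower bound on quality, since semantically equivalent SQL can differ in formatting or aliasing, but it gives a quick signal. This sketch reuses the `pipe`, `dataset`, and `stop_token_ids` objects defined above; `num_eval_samples` is an arbitrary small number chosen to keep generation time manageable."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "5d4c3b2a",
- "metadata": {
- "attributes": {
- "classes": [
- "py3"
- ],
- "id": ""
- }
- },
- "outputs": [],
- "source": [
- "# Optional rough evaluation: exact-match rate on a small slice of the test set.\n",
- "# Exact string match undercounts correct queries that are merely formatted differently.\n",
- "num_eval_samples = 20 # small number to keep generation time manageable on a T4\n",
- "matches = 0\n",
- "for sample in dataset[\"test\"].select(range(num_eval_samples)):\n",
- " prompt = pipe.tokenizer.apply_chat_template(sample[\"messages\"][:1], tokenize=False, add_generation_prompt=True)\n",
- " outputs = pipe(prompt, max_new_tokens=256, do_sample=False, eos_token_id=stop_token_ids, disable_compile=True)\n",
- " prediction = outputs[0][\"generated_text\"][len(prompt):].strip()\n",
- " reference = sample[\"messages\"][1][\"content\"].strip()\n",
- " matches += int(prediction == reference)\n",
- "\n",
- "print(f\"Exact-match rate on {num_eval_samples} samples: {matches / num_eval_samples:.2%}\")"
- ]
- },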
- {
- "cell_type": "markdown",
- "id": "6f8ff452",
- "metadata": {},
- "source": [
- "## Summary and next steps\n",
- "\n",
- "This tutorial covered how to fine-tune a Gemma model using TRL and QLoRA. Check out the following docs next:\n",
- "\n",
- "* Learn how to [generate text with a Gemma model](https://ai.google.dev/gemma/docs/get_started).\n",
- "* Learn how to [fine-tune Gemma for vision tasks using Hugging Face Transformers](https://ai.google.dev/gemma/docs/core/huggingface_vision_finetune_qlora).\n",
- "* Learn how to perform [distributed fine-tuning and inference on a Gemma model](https://ai.google.dev/gemma/docs/core/distributed_tuning).\n",
- "* Learn how to [use Gemma open models with Vertex AI](https://cloud.google.com/vertex-ai/docs/generative-ai/open-models/use-gemma).\n",
- "* Learn how to [fine-tune Gemma using KerasNLP and deploy to Vertex AI](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_gemma_kerasnlp_to_vertexai.ipynb)."
- ]
- }
- ],
- "metadata": {
- "language_info": {
- "name": "python"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
+ "nbformat": 4,
+ "nbformat_minor": 0
}
From 732817e8fcb4d6644596938ab91317cc6a28b0f0 Mon Sep 17 00:00:00 2001
From: philschmid
Date: Fri, 14 Mar 2025 14:24:09 +0000
Subject: [PATCH 3/4] lnit
---
site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
index b309080cd..e2367f43f 100644
--- a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
+++ b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
@@ -319,7 +319,7 @@
"id": "3a062e371c81"
},
"source": [
- "The `SFTTrainer` supports a native integration with `peft`, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a `LoraConfig` and provide it to the trainer."
+ "The `SFTTrainer` supports a built-in integration with `peft`, which makes it straightforward to efficiently tune LLMs using QLoRA. You only need to create a `LoraConfig` and provide it to the trainer."
]
},
{
@@ -385,7 +385,7 @@
" push_to_hub=True, # push model to hub\n",
" report_to=\"tensorboard\", # report metrics to tensorboard\n",
" dataset_kwargs={\n",
- " \"add_special_tokens\": False, # We template with special tokens\n",
+ " \"add_special_tokens\": False, # Template with special tokens\n",
" \"append_concat_token\": True, # Add EOS token as separator token between examples\n",
" }\n",
")"
From 1e9cd90a3b25de53aa45a5cbf0dff12b6fbf3e60 Mon Sep 17 00:00:00 2001
From: philschmid
Date: Tue, 25 Mar 2025 16:34:45 +0100
Subject: [PATCH 4/4] add core
---
site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
index e2367f43f..663293740 100644
--- a/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
+++ b/site/en/gemma/docs/core/huggingface_text_finetune_qlora.ipynb
@@ -45,7 +45,7 @@
"
View on ai.google.dev\n",
" \n",
" \n",
- " Run in Google Colab\n",
+ " Run in Google Colab\n",
" | \n",
" \n",
" Run in Kaggle\n",
|