Learn to install Unsloth locally or on Google Colab.

## Updating

To update Unsloth, follow the steps below:

### Updating without dependency updates

```
pip uninstall unsloth -y
pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
```

## Conda Install

To install Unsloth locally on Conda, follow the steps below:

```
pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
pip install --no-deps "trl<0.9.0" peft accelerate bitsandbytes
```

## Pip Install

To install Unsloth locally via pip, follow the steps below:

Do **NOT** use this if you have Anaconda. You must use the Conda install method, or else things will break.

1. Find your CUDA version via:

   ```
   import torch; torch.version.cuda
   ```

2. For PyTorch 2.1.0: you can update PyTorch via pip (interchange `cu121` / `cu118`). Go to https://pytorch.org/ to learn more. Select either `cu118` for CUDA 11.8 or `cu121` for CUDA 12.1. If you have an RTX 3060 or higher (A100, H100 etc.), use the `"ampere"` path. For PyTorch 2.1.1, go to step 3. For PyTorch 2.2.0, go to step 4.

   ```
   pip install --upgrade --force-reinstall --no-cache-dir torch==2.1.0 triton \
     --index-url https://download.pytorch.org/whl/cu121
   ```

   ```
   pip install "unsloth[cu118] @ git+https://github.com/unslothai/unsloth.git"
   pip install "unsloth[cu121] @ git+https://github.com/unslothai/unsloth.git"
   pip install "unsloth[cu121-ampere] @ git+https://github.com/unslothai/unsloth.git"
   ```

3. For PyTorch 2.1.1: use the `"ampere"` path for newer RTX 30xx GPUs or higher.

   ```
   pip install --upgrade --force-reinstall --no-cache-dir torch==2.1.1 triton \
     --index-url https://download.pytorch.org/whl/cu121
   ```

   ```
   pip install "unsloth[cu121-ampere-torch211] @ git+https://github.com/unslothai/unsloth.git"
   ```

4. For PyTorch 2.2.0: use the `"ampere"` path for newer RTX 30xx GPUs or higher.

   ```
   pip install --upgrade --force-reinstall --no-cache-dir torch==2.2.0 triton \
     --index-url https://download.pytorch.org/whl/cu121
   ```

   ```
   pip install "unsloth[cu118-torch220] @ git+https://github.com/unslothai/unsloth.git"
   pip install "unsloth[cu121-torch220] @ git+https://github.com/unslothai/unsloth.git"
   pip install "unsloth[cu121-ampere-torch220] @ git+https://github.com/unslothai/unsloth.git"
   ```

5. If you get errors, try the below first, then go back to step 1:

   ```
   pip install --upgrade pip
   ```

6. For PyTorch 2.2.1:

   ```
   # RTX 3090, 4090 Ampere GPUs:
   pip install --no-deps xformers "trl<0.9.0" peft accelerate bitsandbytes
   ```

7. For PyTorch 2.3.0: use the `"ampere"` path for newer RTX 30xx GPUs or higher.

   ```
   pip install "unsloth[cu118-torch230] @ git+https://github.com/unslothai/unsloth.git"
   pip install "unsloth[cu121-ampere-torch230] @ git+https://github.com/unslothai/unsloth.git"
   ```

8. To troubleshoot installs, try the below (all must succeed). Xformers should mostly all be available.

   ```
   nvcc
   python -m xformers.info
   python -m bitsandbytes
   ```
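Once one of the install paths above finishes, a quick sanity check helps confirm that the wheel you installed matches your CUDA version and GPU generation. This is only a minimal sketch using standard PyTorch calls; the printed values will depend on your machine:

```
import torch

# The CUDA build of the installed wheel must match the cu118 / cu121 extra chosen above.
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)

if torch.cuda.is_available():
    # Ampere or newer (RTX 30xx, A100, H100) reports compute capability >= (8, 0),
    # which is when the "ampere" install paths apply.
    print("GPU    :", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))
else:
    print("No CUDA GPU detected - check your driver and PyTorch install.")
```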
## Google Colab

To install and run Unsloth on Google Colab, follow the steps below:

If you have never used a Colab notebook, a quick primer on the notebook itself:

1. **Play Button at each "cell".** Click on this to run that cell's code. You must not skip any cells, and you must run every cell in chronological order. If you encounter errors, simply rerun the cell you skipped. Another option is to press CTRL + ENTER if you don't want to click the play button.
2. **Runtime Button in the top toolbar.** You can also use this button and hit "Run all" to run the entire notebook in one go. This will skip all the customization steps, but it is a good first try.
3. **Connect / Reconnect T4 button.** T4 is the free GPU Google provides. It's quite powerful!

The first installation cell looks like the below. Remember to click the PLAY button in the brackets `[ ]`. It grabs our open-source GitHub package and installs some other packages.

---

# Basics

# 📂Saving Models

Learn how to save your finetuned model so you can run it in your favorite inference engine.

---

# Saving to GGUF

Saving models to 16bit for GGUF so you can use them with Ollama, Jan AI, Open WebUI and more!

The quantization methods you can pick from are listed in Unsloth's `ALLOWED_QUANTS` dictionary:

```
ALLOWED_QUANTS = \
{
    # ... full list of supported GGUF quantization methods, e.g. "q8_0", "q4_k_m" ...
}
```

---

# Saving to Ollama

## Saving on Google Colab

You can save the finetuned model as a small 100MB file called a LoRA adapter, as shown below. You can also push it to the Hugging Face Hub if you want to upload your model! Remember to get a Hugging Face token via [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) and add your token!

After saving the model, we can again use Unsloth to run the model itself! Use `FastLanguageModel` again to call it for inference!

## Exporting to Ollama

Finally, we can export our finetuned model to Ollama itself! First we have to install Ollama in the Colab notebook:

Then we export the finetuned model to llama.cpp's GGUF format like below:

Remember to change `False` to `True` for one row only, and not to set every row to `True`, or else you'll be waiting for a very long time! We normally suggest setting the first row to `True`, which exports the finetuned model quickly to the `Q8_0` format (8-bit quantization). You can also export to a whole list of other quantization methods, a popular one being `q4_k_m`.

Head over to [https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp) to learn more about GGUF. We also have some manual instructions on how to export to GGUF here: [https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf](https://github.com/unslothai/unsloth/wiki#manually-saving-to-gguf)

You will see a long list of text like below - please wait 5 to 10 minutes!

And finally, at the very end, it will look like below:

Then we have to run Ollama itself in the background. We use `subprocess` because Colab doesn't like asynchronous calls, but normally you would just run `ollama serve` in the terminal / command prompt.
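A minimal sketch of that step is below. It uses only the Python standard library and assumes the `ollama` binary was already installed by the earlier cell:

```
import subprocess
import time

# Start `ollama serve` as a background process so the notebook cell returns immediately.
# On your own machine you would simply run `ollama serve` in a terminal instead.
ollama_server = subprocess.Popen(
    ["ollama", "serve"],
    stdout = subprocess.DEVNULL,
    stderr = subprocess.DEVNULL,
)

# Give the server a few seconds to start listening on its default port (11434)
# before creating or querying any models.
time.sleep(5)
```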
## Automatic `Modelfile` creation

The trick Unsloth provides is that we automatically create a `Modelfile`, which Ollama requires! This is just a list of settings, and it includes the chat template we used for the finetuning process! You can also print the generated `Modelfile` like below:

We then ask Ollama to create an Ollama-compatible model by using the `Modelfile`:

## Ollama Inference

We can now call the model for inference by calling the Ollama server itself, which is running on your own local machine or in the background of the free Colab notebook. Remember you can edit the yellow underlined part.

# Troubleshooting

### Saving to `safetensors`, not `bin` format in Colab

We save to `.bin` in Colab because it's roughly 4x faster, but you can set `safe_serialization = None` to force saving to `.safetensors`: use `model.save_pretrained(..., safe_serialization = None)` or `model.push_to_hub(..., safe_serialization = None)`.

### If saving to GGUF or vLLM 16bit crashes

You can try reducing the maximum GPU usage during saving by changing `maximum_memory_usage`.

The default is `model.save_pretrained(..., maximum_memory_usage = 0.75)`. Reduce it to, say, 0.5 to use 50% of peak GPU memory or lower. This can reduce OOM crashes during saving.

---

# ♻️Continued Pretraining

Also known as continued finetuning. Unsloth allows you to continually pretrain a model so it can learn a new language.

The [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for continued pretraining on raw text. The [continued pretraining notebook](https://colab.research.google.com/drive/1tEd1FrOXWMnCU9UIvdYhs61tkxdMuKZu?usp=sharing) is for learning another language.

You can read more about continued pretraining and our release in our [blog post](https://unsloth.ai/blog/contpretraining).

## What is Continued Pretraining?

Continued or continual pretraining (CPT) is necessary to “steer” the language model to understand new domains of knowledge, or out-of-distribution domains. Base models like Llama-3 8b or Mistral 7b are first pretrained on gigantic datasets of trillions of tokens (Llama-3, for example, was trained on 15 trillion tokens).

But sometimes these models have not been well trained on other languages, or on domain-specific text such as law, medicine or other areas. So continued pretraining (CPT) is necessary to make the language model learn new tokens or datasets.

## Advanced Features

### Loading LoRA adapters for continued finetuning

If you saved a LoRA adapter through Unsloth, you can also continue training using your LoRA weights. The optimizer state will be reset as well. To also load optimizer states to continue finetuning, see the next section.

```
from unsloth import FastLanguageModel
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "LORA_MODEL_NAME",
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
trainer = Trainer(...)
trainer.train()
```

### Continued Pretraining & Finetuning the `lm_head` and `embed_tokens` matrices

Add `lm_head` and `embed_tokens` to `target_modules`. On Colab, you will sometimes go out of memory for Llama-3 8b; if so, add just `lm_head`.

```
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",
                      "lm_head", "embed_tokens",],
    lora_alpha = 16,
)
```

Then use two different learning rates - a 2-10x smaller one for `lm_head` and `embed_tokens` - like so:

```
from unsloth import UnslothTrainer, UnslothTrainingArguments

trainer = UnslothTrainer(
    ....
    args = UnslothTrainingArguments(
        ....
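        # The '....' placeholders stand for your usual training arguments
        # (batch size, max_steps / num_train_epochs, and so on).
        # embedding_learning_rate is the Unsloth-specific argument that gives
        # embed_tokens / lm_head their own, smaller learning rate: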
        learning_rate = 5e-5,
        embedding_learning_rate = 5e-6, # 2-10x smaller than learning_rate
    ),
)
```

---

# 💬Chat Templates

### List of Colab chat template notebooks:

- [Conversational](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
- [ChatML](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)
- [Ollama](https://colab.research.google.com/drive/1WZDi7APtQ9VsvOrQSSC5DDtxq159j8iZ?usp=sharing)
- [Text Classification](https://github.com/timothelaborie/text_classification_scripts/blob/main/unsloth_classification.ipynb) by Timotheeee
- [Multiple Datasets](https://colab.research.google.com/drive/1njCCbE1YVal9xC83hjdo2hiGItpY_D6t?usp=sharing) by Flail

### More Info

Assuming your dataset is a list of lists of dictionaries like the below:

```
[
    [{'from': 'human', 'value': 'Hi there!'},
     {'from': 'gpt', 'value': 'Hi how can I help?'},
     {'from': 'human', 'value': 'What is 2+2?'}],
    [{'from': 'human', 'value': "What's your name?"},
     {'from': 'gpt', 'value': "I'm Daniel!"},
     {'from': 'human', 'value': 'Ok! Nice!'},
     {'from': 'gpt', 'value': 'What can I do for you?'},
     {'from': 'human', 'value': 'Oh nothing :)'},],
]
```

You can use our `get_chat_template` to format it. Select `chat_template` to be any of `zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth`, and use `mapping` to map the dictionary keys `from`, `value` etc. `map_eos_token` allows you to map `<|im_end|>` to EOS without any training.

```
from unsloth.chat_templates import get_chat_template

tokenizer = get_chat_template(
    tokenizer,
    chat_template = "chatml", # Supports zephyr, chatml, mistral, llama, alpaca, vicuna, vicuna_old, unsloth
    mapping = {"role" : "from", "content" : "value", "user" : "human", "assistant" : "gpt"}, # ShareGPT style
    map_eos_token = True, # Maps <|im_end|> to the EOS token instead
)

def formatting_prompts_func(examples):
    convos = examples["conversations"]
    texts = [tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = False) for convo in convos]
    return { "text" : texts, }
pass

from datasets import load_dataset
dataset = load_dataset("philschmid/guanaco-sharegpt-style", split = "train")
dataset = dataset.map(formatting_prompts_func, batched = True,)
```

You can also make your own custom chat templates! For example, our internal chat template is below. You must pass in a `tuple` of `(custom_template, eos_token)`, where the `eos_token` must be used inside the template.

```
unsloth_template = \
    "{{ bos_token }}"\
    "{{ 'You are a helpful assistant to the user\n' }}"\
    ""\
    "
```
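As a quick sanity check of the formatting, you can print one processed example and format a single ad-hoc conversation. This is only a sketch that reuses the `tokenizer` and `dataset` from the ChatML example above; `add_generation_prompt = True` assumes the chosen template defines a generation prefix:

```
# Inspect one example produced by formatting_prompts_func above.
print(dataset[0]["text"])

# Format a single conversation using the same ShareGPT-style keys
# ("from" / "value" with "human" / "gpt" roles) mapped earlier.
convo = [
    {"from": "human", "value": "Hi there!"},
    {"from": "gpt",   "value": "Hi! How can I help?"},
]
print(tokenizer.apply_chat_template(convo, tokenize = False, add_generation_prompt = True))
```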