This course will walk you through everything you need to know to fine-tune your own LLMs. We'll spend a lot of time picking apart exactly how fine-tuning works under the hood and the methods that make it accessible to all. We'll then dive into a number of ways to implement your own fine-tuning pipelines.
A gentle introduction to the theory and practice of fine-tuning LLMs, when to fine-tune and when not to, how fine-tuning works under the hood, and the different types of fine-tuning.
Learn how to use OpenAI's managed fine-tuning platform to adapt the proprietary GPT-3.5 model to your own datasets.
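As a taste of what's involved, here is a minimal sketch of preparing training data in the chat-style JSONL format OpenAI's fine-tuning platform expects: one JSON object per line, each containing a "messages" list. The file name and example content are hypothetical.

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning format:
# each record is a conversation with system/user/assistant messages.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a helpful support bot."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Go to Settings > Account > Reset Password."},
    ]},
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

In the lesson itself, a file like this is then uploaded and used to start a fine-tuning job through OpenAI's API (e.g. `client.fine_tuning.jobs.create(...)` in the official Python SDK).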
A thorough primer on the magic behind low-rank adaptation (LoRA), a technique for reducing the number of trainable parameters in LLMs and making them easy to fine-tune.
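To preview the core idea: LoRA freezes the pretrained weight matrix and learns a low-rank update on top of it, which slashes the trainable parameter count. A minimal NumPy sketch, assuming a hypothetical 768x768 layer and rank 8:

```python
import numpy as np

d, k, r = 768, 768, 8  # hypothetical layer dimensions and LoRA rank

W = np.random.randn(d, k)          # frozen pretrained weight (not trained)
A = np.random.randn(r, k) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))               # trainable; zero-init so the update starts at 0

def forward(x):
    # Output of the adapted layer: original projection plus low-rank update B @ A
    return x @ W.T + x @ (B @ A).T

full_params = d * k        # parameters if we fine-tuned W directly
lora_params = r * (d + k)  # parameters LoRA actually trains
print(full_params, lora_params)  # 589824 12288 — roughly 48x fewer
```

Only A and B receive gradient updates; at inference time the product B @ A can be merged back into W, so the adapted model runs at the same speed as the original.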
A guide to quantization: a technique for shrinking LLMs so that they fit comfortably in GPU memory.
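The essence of quantization is storing weights in a smaller numeric type and keeping a scale factor to map them back. A minimal sketch of absmax int8 quantization (one common scheme; helper names are my own):

```python
import numpy as np

def quantize_int8(w):
    # Absmax quantization: map weights into the int8 range [-127, 127]
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original float weights
    return q.astype(np.float32) * scale

w = np.random.randn(1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32
print(q.nbytes, w.nbytes)  # 1024 4096
```

Real LLM quantization schemes (e.g. the block-wise 4-bit methods covered later) refine this idea by using a separate scale per small block of weights, which keeps the rounding error low.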