Customizing LLMs with Fine-Tuning

Course Description

This course will walk you through everything you need to know to fine-tune your own LLMs. We'll spend a lot of time picking apart exactly how fine-tuning works under the hood, along with the methods that make it accessible to everyone. We'll then dive into a number of ways to implement your own fine-tuning pipelines.

Lessons
0. A Deep-Dive Into Fine-Tuning (theory, intro)

A gentle introduction to the theory and practice of fine-tuning LLMs: when to fine-tune and when not to, how fine-tuning works under the hood, and the different types of fine-tuning.

1. Fine-Tuning OpenAI GPT with Custom Data (project, openai, gpt-3.5)

Learn how to use OpenAI's managed platform to fine-tune the proprietary GPT-3.5 model on your own datasets.
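As a small preview of this lesson: OpenAI's managed fine-tuning expects training data as a JSONL file in which each line is one chat example with a "messages" list. Here's a minimal sketch of preparing such a file; the example conversation and the filename `train.jsonl` are invented for illustration.

```python
import json

# Hypothetical training examples in OpenAI's chat fine-tuning format:
# one JSON object per line, each containing a "messages" list.
examples = [
    {"messages": [
        {"role": "system", "content": "You answer in one short sentence."},
        {"role": "user", "content": "What is fine-tuning?"},
        {"role": "assistant",
         "content": "Further training a pretrained model on your own data."},
    ]},
]

# Write one JSON object per line (JSONL), ready for upload.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity check: every line must parse back to a dict with a "messages" key.
with open("train.jsonl") as f:
    for line in f:
        assert "messages" in json.loads(line)
```

Once a file like this is uploaded, the managed platform handles the actual training run; we'll walk through that end to end in the lesson.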

2. LoRA: Reducing Trainable Parameters (theory, lora)

A thorough primer on the magic behind low-rank adaptation (LoRA), a technique for reducing the number of trainable parameters in LLMs, making them far easier to fine-tune.
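As a taste of what's ahead: LoRA freezes the pretrained weight matrix and trains only a small low-rank update. Here's a minimal NumPy sketch of that idea; the layer sizes, rank, and scaling value are illustrative, not taken from any particular model or library.

```python
import numpy as np

d, k, r = 1024, 1024, 8   # layer dimensions and an illustrative LoRA rank
alpha = 16                # LoRA scaling hyperparameter

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))  # frozen pretrained weight: never updated

# Instead of training all d*k entries of W, LoRA trains two small matrices
# whose product B @ A is a rank-r update to W.
A = rng.standard_normal((r, k)) * 0.01  # trainable, r x k
B = np.zeros((d, r))                    # trainable, d x r (zero-initialized,
                                        # so training starts from the base model)

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without ever
    # materializing the full d x k update.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.2%}")
```

With these toy numbers, the trainable parameter count drops from about a million to about sixteen thousand, which is the core reason LoRA makes fine-tuning so much cheaper; the lesson digs into why a low-rank update is enough.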

3. Quantization: Shrinking the Size of LLMs (theory, quantization)

A guide to quantization: a revolutionary method for shrinking the size of LLMs so that they can easily fit in GPU memory.
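To preview the core trick: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats cuts memory by 4x at the cost of a small rounding error. Here's a minimal NumPy sketch of symmetric per-tensor int8 quantization; the shapes are illustrative, and real systems typically quantize per-channel or per-block rather than per-tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)  # a toy "layer"

def quantize_int8(w):
    # One scale for the whole tensor maps the largest |weight| to 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Approximate reconstruction of the original float weights.
    return q.astype(np.float32) * scale

q, scale = quantize_int8(W)
W_hat = dequantize(q, scale)

print(f"fp32 bytes: {W.nbytes:,}  int8 bytes: {q.nbytes:,}")
print(f"max abs error: {np.abs(W - W_hat).max():.4f}")
```

The int8 tensor is exactly a quarter of the fp32 size, and the worst-case reconstruction error is half a quantization step; the lesson covers how modern schemes keep that error from hurting model quality.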