Fine-tuning
Module: fundamentals
What it is
Fine-tuning is a further round of training on a smaller, targeted dataset that adapts a pre-trained model to particular tasks or behaviours. After broad pre-training, models are fine-tuned to act as helpful assistants, follow instructions, or specialise in domains like medicine or law. Because it uses far less data and compute, fine-tuning is much cheaper than pre-training.
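To make the idea concrete, here is a minimal sketch of supervised fine-tuning in Python using PyTorch and the Hugging Face Transformers library. The model name ("gpt2"), the two example instruction/response pairs, and the hyperparameters are placeholders chosen purely for illustration, not a recommended recipe.

# Minimal supervised fine-tuning sketch. Model, data, and hyperparameters
# are illustrative placeholders, not a real training recipe.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder: any small pre-trained causal language model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny, made-up dataset of instruction/response pairs.
examples = [
    "Instruction: Summarise this report.\nResponse: The report finds that...",
    "Instruction: Translate 'hello' into French.\nResponse: Bonjour.",
]
batch = tokenizer(examples, return_tensors="pt", padding=True, truncation=True)
batch["labels"] = batch["input_ids"].clone()          # next-token prediction targets
batch["labels"][batch["attention_mask"] == 0] = -100  # ignore padding in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):                   # a few passes over the tiny dataset
    outputs = model(**batch)            # forward pass returns a cross-entropy loss
    outputs.loss.backward()             # gradients with respect to the pre-trained weights
    optimizer.step()                    # small updates nudge the model toward the new data
    optimizer.zero_grad()

Real fine-tuning runs use far larger datasets, proper batching and evaluation, and often parameter-efficient methods, but the core loop is the same: start from the pre-trained weights and keep training on new data.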
Why it matters
Fine-tuning is how general models become useful products. The difference between a raw GPT base model and ChatGPT is fine-tuning. It's also how organisations can adapt existing models to their specific needs without training from scratch. Understanding fine-tuning helps explain why different AI assistants have different personalities and capabilities.