Training & Fine-tuning LLMs Course
Lesson Schedule
Lesson #1 - Foundations of LLMs (Aug 23)
Darek Kleczek (W&B), Ayush Thakur (W&B), Weiwei Yang (Microsoft), Mark Saroufim (Meta)
Begin your journey into the world of LLMs by understanding when and how to train or fine-tune them. This session covers foundational knowledge and available model architectures, and introduces the NeurIPS LLM Efficiency Challenge.
Lesson #2 - LLM Evaluation: Metrics, Benchmarks, Challenges (Aug 30)
Darek Kleczek (W&B)
Master the intricacies of evaluating LLMs. This session explores key benchmarks, their strengths and pitfalls, and how to run evaluations on trained models effectively while recognizing their limitations.
Lesson #3 - The Lifeblood of LLMs: Data Sourcing and Processing (Sep 13)
Jonathan Frankle (MosaicML)
Uncover the essentials of data for LLMs. From understanding data scaling laws to constructing custom datasets, dive deep into data curation, ethics, storage, and streaming best practices.
Lesson #4 - LLM Training & Fine-tuning: Techniques & Best Practices (Sep 27)
Jonathan Frankle (MosaicML)
Get hands-on with strategies and techniques, from training LLMs from scratch to the nuances of fine-tuning. This session addresses hardware, tokenization, optimization, hyperparameter tuning, and more. See the Agenda page for more information.