Welcome to LLMs from Scratch, an all-killer-no-filler curriculum that takes you from tokenization to alignment with meticulously crafted Jupyter notebooks, actionable theory, and production-ready code. Whether you are a researcher, engineer, or curious builder, this course gives you the scaffolding to demystify modern LLMs and deploy your own.
## 📚 Course Highlights
- Hands-on notebooks for every lesson: clone locally or launch instantly in Lightning Studio.
- Practical checkpoints and datasets so you can experiment without babysitting boilerplate.
- Theory, references, and best practices interwoven with code so every concept sticks.
- Production-aware workflow covering training, scaling, alignment, quantization, and deployment-friendly fine-tuning.
## 📖 Course Structure
Each module is a standalone notebook packed with explanations, exercises, and implementation details. View them on GitHub, launch them via GitHub Pages, or open them interactively in Lightning Studio.
| Module | Topic | Notebook |
|---|---|---|
| 01 | Tokenization Foundations | |
| 02 | Building a Tiny LLM | |
| 03 | Advancing Our LLM | |
| 04 | Data Engineering for LLMs | |
| 05 | Scaling Laws in Practice | |
| 06 | Pretraining at Scale | |
| 07 | Supervised Fine-Tuning | |
| 08 | RLHF and Alignment | |
| 09 | LoRA & RLVR Techniques | |
| 10 | Pruning & Distillation | |
| 11 | Appendix: Position Embeddings | |
| 12 | Appendix: Quantisation Strategies | |
| 13 | Appendix: Parameter-Efficient Tuning | |
| 14 | Bonus: Energy-Based and Diffusion LLMs | |
| 15 | Bonus: State Space Models | |
## 🧠 What You'll Learn
- The end-to-end data flow of an LLM, from tokenization and batching to inference-time decoding.
- How to implement core transformer components, attention variations, and optimization tricks.
- Strategies for scaling datasets, managing checkpoints, and monitoring training stability.
- Practical alignment techniques: SFT, preference modeling, RLHF, and reward modeling.
- Deployment-ready compression: pruning, distillation, quantization, and PEFT recipes.
- Bonus sections on energy-based models (EBMs), diffusion LLMs, and state space models (SSMs).
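As a preview of the transformer-component work, here is a minimal, dependency-free sketch of scaled dot-product attention. This is an illustrative toy (the function names and list-of-lists representation are ours, not from the course notebooks), not the implementation you will build in the modules:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors (one per token)."""
    d = len(Q[0])
    outputs = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs

# One query attending over two key/value pairs; both values agree,
# so the output is their common vector regardless of the weights.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]])
```

The course notebooks develop the batched, multi-head, masked versions of this same computation.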
## ⚙️ Quick Start
### Option A: Launch in Lightning Studio (no setup!)
1. Click the Open in Studio badge above.
2. Authenticate with Lightning (or create a free account).
3. Explore the notebooks in a fully provisioned environment with GPU options.

The studio has all model checkpoints saved, and you can test them with the code given in `test-model.ipynb`.
### Option B: Run Locally
1. Clone the repository:

   ```bash
   git clone https://github.com/shreshthtuli/llms-from-scratch.git
   cd llms-from-scratch
   ```

2. Install dependencies (recommended: Python 3.10+):

   ```bash
   pip install uv
   uv sync
   ```

3. Add API keys in a `.env` file, following `.env.example`.

4. Launch Jupyter:

   ```bash
   jupyter lab
   ```

   Open any notebook to start experimenting.

Need data? Check the `data/` directory and follow the dataset preparation steps inside each notebook.
## 🧭 Suggested Learning Path
1. **Foundations (Modules 01–03)**: Understand tokens, build your first transformer, and iterate on architecture improvements.
2. **Data & Scaling (Modules 04–06)**: Curate corpora, tune training loops, and scale pretraining experiments responsibly.
3. **Alignment (Modules 07–09)**: Apply SFT, RLHF, and efficient adaptation techniques to align your model with human intent.
4. **Optimization (Modules 10–15)**: Compress, fine-tune, and deploy models using state-of-the-art efficiency tricks.
5. **Capstone**: Combine your learnings to train, align, and ship a bespoke LLM tailored to your use case.

Mix and match as needed: every notebook is designed to stand on its own, but following this order unlocks the smoothest learning curve.
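To give a flavour of the Foundations phase, a single byte-pair-encoding merge step, the building block of the tokenizers covered in Module 01, can be sketched in a few lines. This is a toy illustration under our own naming, not code taken from the notebooks:

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Return the most common adjacent symbol pair in the sequence."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("aaabdaaabac")
best = most_frequent_pair(tokens)   # ('a', 'a') occurs four times
tokens = merge_pair(tokens, best)   # → ['aa', 'a', 'b', 'd', 'aa', 'a', 'b', 'a', 'c']
```

A full BPE tokenizer simply repeats this merge step until a target vocabulary size is reached, recording the merges so they can be replayed at inference time.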
## 🎮 Hands-On Playground
- **Lightning Studio**: Run the entire repo in the cloud with zero setup using the badge above.
- **GitHub Codespaces**: Launch a dev container directly from the repo for quick edits.
- **Local GPUs / Clusters**: Scripts in `src/` support distributed and mixed-precision training out of the box.
## 👨‍🏫 About the Instructor
I'm Shreshth Tuli: researcher, builder, and educator focused on making advanced ML systems approachable. I've shipped production LLMs, authored peer-reviewed papers, and taught hundreds of practitioners how to wield these models responsibly. Expect honest takes, transparent trade-offs, and plenty of real-world war stories.
## 🤝 Contributions
Contributions, bug reports, and suggestions are warmly welcomed! To contribute:

1. Fork the repo and create a feature branch.
2. Open a PR describing your changes and the motivation behind them.
3. Tag any relevant notebooks or scripts and include screenshots/metrics if applicable.

Check the issue tracker for bite-sized tasks, or open a discussion if you want to propose new modules.
## 📄 License
This project is open-sourced under the Apache 2.0 License. Feel free to use the materials for your own learning, workshops, or derivative courses; just keep attribution intact.
The best way to learn LLMs is to build one. 🚀