Hi there from Paolo Perrotta!
Book Links
Updated Code Examples
The code in the book runs on old-ish versions of Python and its libraries. If you have trouble running the examples, look for updated code and installation instructions on this GitHub repo. I made a number of small tweaks to the code so that it runs on recent versions of Python, NumPy, etc.
I’ll keep the repo updated whenever bitrot sets in. If you still have trouble running the code, please drop me a message on the book’s forum. I’ll check out the issue and update the repo.
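Before filing an issue, it can help to check which versions you’re actually running, since most problems come from version mismatches. Here’s a minimal sketch that prints the versions of Python and a couple of libraries the book uses (adjust the package list to match the repo’s installation instructions):

```python
# Quick environment check before running the book's examples.
# The package names below (numpy, matplotlib) are examples of libraries
# the book relies on; edit the tuple to match the repo's requirements.
import sys
from importlib.metadata import version, PackageNotFoundError

print(f"Python {sys.version_info.major}.{sys.version_info.minor}.{sys.version_info.micro}")

for package in ("numpy", "matplotlib"):
    try:
        print(f"{package} {version(package)}")
    except PackageNotFoundError:
        print(f"{package} is not installed")
```

Including this output in your forum message makes it much easier to reproduce the problem.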
Additional Material
Supplemental Articles
These articles contain explanations that I couldn’t fit in the book.
- Of Gradients and Matrices
- The Problem with Accuracy
- Grokking the Cross Entropy Loss
- Killer Combo: Softmax and Cross Entropy
- Neural Networks Visualized (called The Math of Multiple Layers in the book’s first printing)
Canceled Articles
- Convolutional Neural Networks in Plain English: After some handwringing and a lot of procrastinating, I resolved not to write this article, and I’ll remove references to it from future printings of the book. I don’t feel that the article fits the narrative of the book anymore: CNNs have become less relevant since I wrote it, progressively losing ground to Transformer-based architectures. If you came here for a good primer on CNNs, I recommend this brilliant interactive demo from Georgia Tech.
Related Blog Posts
- This Is Machine Learning. A two-post series that covers the same ground as the book’s first chapter. Here are Part 1: Learning vs. Coding and Part 2: Supervised Learning.
- Google Colab — The First Few Steps. A quick introduction to Google Colab, including an optional hands-on section.