Working Notes: a commonplace notebook for recording & exploring ideas.
Kunal · more at expLog

2026-01-18

Setting out some goals for building, learning, and writing this year. I'm aiming to write a running commentary in these letters for at least half the weeks of the year, ideally every week, covering what I've been playing with and working on.

Once a month I plan to publish an essay that's a bit more considered and thought through.

Notebook

Lots of small fixes and updates to the notebook over the past two weeks.

Happily, the full generation code is still under 1,000 lines; once I cross that mark I'll probably refactor to simplify and shorten it further.

Using eglot-python-preset has been a boon: I'd just been thinking about building (or having Claude build me) something like it when I came across Mike's Reddit post introducing eglot-python-preset, which understands uv script dependencies.
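For context, the uv script dependencies in question are declared as inline script metadata (PEP 723): a comment block at the top of a standalone Python file that `uv run` reads to provision an environment. A minimal sketch, with an illustrative dependency (the `requests` entry is just an example, not something the notebook actually uses):

```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
#   "requests",
# ]
# ///
# Running `uv run script.py` reads the block above and installs the
# declared dependencies into an ephemeral environment before executing.

GREETING = "hello from a uv script"
print(GREETING)
```

A language server that understands this block can resolve imports against the script's declared dependencies rather than a global environment.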

In general, I've been thinking about how to make it easier to remember what I've studied, keep consistent references as I continue working for the rest of my life, and evolve these notes accordingly.

Working with AI

I've been trying to figure out how best to leverage agents in a way that works for me, outside the hype cycles. After building and deleting several attempts, I've started collecting them in a repository called djn (Djinn).

I'll write up a summary of all of these projects at the end of the month.

It took all of three hours to exhaust my Claude Pro usage limits: I ended up jumping straight to Max to make the most of this month, though I suspect I'll revert to Pro afterwards. I have a lot of questions about the economics of this, and about how long I should expect to have access to this kind of inference.

Learning Modeling & Transformers

I've currently stalled on the lectures I've been following (CS336 and CMU's Deep Learning Systems) because I started over-engineering the homework, and I've been having a hard time staying engaged with the material.

Instead, I'll probably go back to my original approach while skimming the videos and lectures: implementing really tiny models to explore ideas and concepts and to verify that I understood them, paired with published notebooks outlining everything I learned so that I retain it.
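The kind of tiny model I mean can be very small indeed. As an illustration (my own sketch, not code from the notebook), single-head scaled dot-product attention fits in a few lines of plain Python, which makes it easy to trace by hand and verify against intuition:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention over lists of vectors (rows)."""
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Output is the attention-weighted average of the value rows.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

With one-hot values, the output row is exactly the attention weights, which makes it easy to check that a query attends most to the key it matches.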

I also experimented with feeding Claude, Gemini, and Codex all my public notes and work and asking them to help me learn faster and better; Codex was probably the most effective, though every result was at least something I could engage with to try to learn better.

(See also No Ideas, but in Things and my plans up top to use this notebook for a lifetime of work.)

Papers

LLM Inference by David Patterson et al. is clearly written; I'm learning to read papers with GPT helping on the side.

In general, I still need to find a sweet spot for reading papers effectively and efficiently, as well as for choosing which papers to read. Sometimes it's much more valuable to wait for a textbook; sometimes the papers that inspire chapters are much more approachable than the chapters themselves.

Books

Collecting books I've started reading; at some point I'll complete these.

(Clearly I have a habit of starting too many things without finishing them, but I'm working on it.)