Working Notes: a commonplace notebook for recording & exploring ideas.

Advent Of Code 2025

This year I've decided to try out AoC using my new Palma, Termux, and free-threaded Python. I've generally had the most fun doing AoC by

A 12 day advent also sounds much more manageable, so I'm looking forward to keeping up with the programs; I wonder what the problem difficulty scaling is going to be like this year.

The actual source code is on GitHub. Times are either direct leaderboard times if I managed to get to the problem as soon as it was released, or time from when I first started looking at it.

Day 9 (6:28, 8:58:16)

Wow, I had a rough time with part 2, and I'm pretty sure the real solution should be much faster. I'd like to blame jet lag, and I was getting really frustrated by the limited screen space on the Boox, but honestly I'm just really rusty at geometry and anything graphics-adjacent.

Part 1 was fairly easy: looking at all possible sizes while accounting for the off-by-one introduced by character shapes.

Part 2 had me scratching my head with several mistakes and programming errors: I vaguely remembered using a ray to check whether a point is "inside" an arbitrary shape, so I re-read the approach on Wikipedia and implemented it. The final solution I have (which is terribly slow, at roughly 10 seconds) is:

The last one is the one I was missing, and only figured out after getting some sleep and disconnecting from the problem.

Just handling rays with vertex-related edge cases was fairly painful, and I had to read the Wikipedia description several times to understand how to handle them (checking the other vertex of the segment and making sure it was on a consistent side).
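
A minimal sketch of the even-odd ray-casting test (not my actual AoC code): the half-open comparison on y sidesteps most of the vertex edge cases by ensuring a vertex shared by two edges is counted exactly once.

```python
def point_in_polygon(px, py, vertices):
    """Even-odd ray casting: shoot a ray to the right of (px, py)
    and count how many polygon edges it crosses."""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Half-open test: an edge counts only if the ray's height is
        # strictly between one endpoint and at-or-below the other, so a
        # shared vertex isn't double counted (the edge case above).
        if (y1 > py) != (y2 > py):
            # x coordinate where this edge crosses the ray's height
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
# point_in_polygon(2, 2, square) -> True; (5, 2) -> False
```

Points exactly on a boundary still need a separate check; this sketch only decides strictly-inside versus outside.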

I feel much better after reading the Reddit posts for the day: the falling piano gifs are extremely appropriate. My solution also roughly matches what I think other people are describing, with all the hairiness of handling overlapping boundaries.

The bits that stood out for me from other people's solutions were the visualizations; I was skeptical I could get something reasonable working, but it would probably have helped. More interesting was the idea of compaction: taking the unique x/y values, assigning each a rank, and plotting the compacted grid. This only makes sense when there are few distinct values, and it depends on the type of computation being done on the figure, but it dramatically reduces the size of the inputs for this problem.
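
The compaction idea (usually called coordinate compression) can be sketched like this; the sample points are made up for illustration:

```python
def compress(points):
    """Map sparse x/y coordinates onto dense ranks.

    Also returns the sorted unique values so ranked results can be
    mapped back to the original coordinates afterwards.
    """
    xs = sorted({x for x, _ in points})
    ys = sorted({y for _, y in points})
    x_rank = {x: i for i, x in enumerate(xs)}
    y_rank = {y: i for i, y in enumerate(ys)}
    return [(x_rank[x], y_rank[y]) for x, y in points], xs, ys

compact, xs, ys = compress([(3, 1_000_000), (900, 1_000_000), (3, 5)])
# compact == [(0, 1), (1, 1), (0, 0)] -- a million-wide grid shrinks to 2x2
```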

More personally, relying on tuples/arrays and repeatedly indexing into 0/1 definitely made things much harder for myself; something I should watch out for, preferably by using better abstractions (complex numbers, named tuples).

For AoC -- and software engineering projects in general -- there's a time / complexity / abstraction tradeoff, and finding the sweet spot is very hard. For most projects of any meaningful size, things tend to work well if I first make it easy to solve the problem and then solve it; at the bite-sized AoC problem level I instead try to go as minimal as possible to get to ~5 minute solutions. That produces something very fragile and complex that's hard to reason about and fix, and is basically a bet that I'll be able to manage the complexity in my head without having to aggressively rewrite everything.

I brute-forced today's solution by continuing to hack and thinking through all the potential edge cases, helped by a reasonably fast feedback loop, but I was definitely reaching the limits of what I could do without resorting to more advanced debugging/visualization techniques. It's a good exercise, and one I don't have a lot of experience with -- for any code I plan to keep, I'll keep tidying much more aggressively.

Day 8 (24:01, 26:52)

Another puzzle done while traveling: this time on a flight back to New York. Sadly this will come with a slight cost of adjusting to the time zone, but hopefully I'll be able to figure it out.

It took me some time to realize what the puzzle was asking, and then to figure out the right data structures to maintain: a first read made me think I needed a heap, but since the actual distances weren't changing live I didn't think it would be that useful (though it would reduce wasted work if iteration finished early). For tracking groups of circuits I ended up maintaining a junction box -> circuit index mapping, and continuously updating the indexes. All of this worked because there were only 1,000 inputs; otherwise I think I'd have had to revisit my solution significantly.

With some breathing space, I think I could make this faster by using heapify to build a min-heap and walking over only the smallest values as I need them, and by explicitly maintaining a list of circuit sets to update, since that's always <= the total node count.
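
The heap half of that can be sketched with the standard heapq workaround: since there's no key= argument, push tuples whose first element is the sort key. The edge data here is made up:

```python
import heapq

# heapq has no key= argument, so heapify tuples whose first element is
# the sort key (here: a hypothetical edge distance).
edges = [(9, "a", "b"), (2, "b", "c"), (5, "a", "c")]
heapq.heapify(edges)  # O(n) -- cheaper than a full sort if we stop early

smallest = heapq.heappop(edges)
# smallest == (2, "b", "c")
```

The tuple trick breaks down if the payload isn't comparable and keys tie; the usual fix is inserting a tiebreaking counter as the second element.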

Reddit pointed out that this was an implementation of Kruskal's algorithm -- and, more interestingly, that the union-find data structure maintains exactly this list of sets. I'm holding off on implementing these for now just because heapify doesn't support a key argument -- something I might try to contribute to Python if no one beats me to it.
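
For reference, a minimal union-find sketch (path compression plus union by size) -- not the junction-box index scheme from my solution:

```python
class UnionFind:
    """Disjoint-set structure: near-constant-time merge and lookup."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Path halving: point every other node at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same circuit
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra  # attach the smaller tree under the larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind(4)
uf.union(0, 1)
uf.union(2, 3)
merged = uf.union(1, 3)  # merges the two circuits into one of size 4
```

Kruskal is then just: iterate edges in increasing distance order and keep each edge whose union() returns True.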

Another tentative plan is to redo this year's AoC in CUDA after the series finishes; I think it would be interesting to see how fast I could make it go, running everything in a single application.

Day 7 (8:34, 28:20)

Today was a fun puzzle: the first part was pretty quick and apart from checking if there was a variant of index that captured all matches I didn't run into any hitches. Maintaining an array of points where the beam is currently, and testing them against the current row for beam splitters worked beautifully.

Part 2 took some thinking and then fiddling against the sample input to make sure I had it right -- it's a dynamic programming problem, and the idea is to maintain an active list of the number of ways a beam can reach each point. When encountering a beam splitter, the cells before and after the splitter can be reached in that many ways; otherwise a cell just carries forward the count from the row above.
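
The counting idea can be sketched roughly like this; the grid format ('^' for a splitter, '.' otherwise) and the starting column are assumptions for illustration, not the actual puzzle input:

```python
from collections import defaultdict

def count_paths(rows, start_col):
    """counts[c] = number of ways a beam can reach column c in this row."""
    counts = {start_col: 1}
    for row in rows:
        nxt = defaultdict(int)
        for col, ways in counts.items():
            if row[col] == "^":       # splitter: the beam goes left and right
                nxt[col - 1] += ways  # a cell can be fed by several beams,
                nxt[col + 1] += ways  # so accumulate instead of assigning
            else:                     # empty: carry the count straight down
                nxt[col] += ways
        counts = nxt
    return sum(counts.values())

rows = ["..^..",
        ".^.^."]
# starting above column 2: one split, then two more -> 4 distinct paths
```

The accumulation (`+=`) is exactly the bug I hit: assigning instead of adding silently drops beams that converge on the same cell.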

I missed that the same cell can be added to multiple times and wasted some time debugging there; then I ended up double counting until I re-read the code and fixed my solution.

Day 6 (11:07, 25:11)

A fairly comfortable puzzle today (though I'm pretty worried about jinxing everything by saying so) that mostly revolved around parsing and handling edge cases correctly.

I mis-named some variables at the beginning and paid for it with a very slow first solution that I then had to debug; I took my time with the second part and that went much more smoothly. There were enough edge cases (particularly around varying row lengths) that I'm glad I tested repeatedly with the sample input first.

Somewhat embarrassingly, I forgot the syntax for reduce and even where to find isdigit. One library I find handy is operator, which provides a mul function that avoids needing lambdas.

Reddit solutions are pretty cool: zip(*rows) and zip_longest are functions I hadn't thought to try. I also learned that math.prod exists.
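
A quick refresher on those helpers side by side (the row data is a placeholder):

```python
import math
import operator
from functools import reduce
from itertools import zip_longest

rows = [(1, 2, 3),
        (4, 5, 6)]

columns = list(zip(*rows))                  # transpose: [(1, 4), (2, 5), (3, 6)]
product = reduce(operator.mul, columns[0])  # 4 -- no lambda needed
same = math.prod(columns[0])                # the stdlib shortcut for the same thing

# zip stops at the shortest row; zip_longest pads instead -- useful for
# the varying row lengths mentioned above.
padded = list(zip_longest((1, 2, 3), (4, 5), fillvalue=0))
# padded == [(1, 4), (2, 5), (3, 0)]
```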

Day 5 (8:09, 17:41)

A relatively simple set today: I over-thought it at the beginning and ended up losing some time.

The inputs were small enough to repeatedly do a linear scan, though I can speed that up a lot by sorting both IDs and search ranges, and then zipping them together.

For the second part, I initially didn't consider duplicates, and then had a trivial bug that cost me a few precious minutes.

The pattern I need to watch out for now is letting my memories/impressions of past advent of codes interfere with the problem actually in front of me; this is probably a general life lesson I should pay attention to.

As a follow-up: I implemented zipping the sorted ranges and IDs together, going from an O(ranges * IDs + r log r) solution to an O(ranges + IDs + r log r + i log i) one.

A trick I learned from Andrei Alexandrescu's code is to have a while True loop that does bounds checks at the beginning and then moves exactly one of the pointers (or exits) in each iteration. This makes sure every access is protected by the bounds checks without repeating code, guarantees termination, and is generally much, much easier to write and debug.
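
A sketch of that loop shape applied to the zipped ranges-and-IDs scan (the function name and inclusive [lo, hi] range format are my own assumptions):

```python
def count_ids_in_ranges(ids, ranges):
    """Count sorted IDs covered by sorted, non-overlapping [lo, hi] ranges."""
    ids = sorted(ids)
    ranges = sorted(ranges)
    i = r = hits = 0
    while True:
        # Bounds checks up front guard every access below.
        if i >= len(ids) or r >= len(ranges):
            break
        lo, hi = ranges[r]
        # Exactly one pointer advances per iteration, so the loop
        # must terminate and no access escapes the checks above.
        if ids[i] < lo:
            i += 1        # ID falls before the current range
        elif ids[i] > hi:
            r += 1        # current range exhausted, try the next one
        else:
            hits += 1
            i += 1        # ID is inside the range
    return hits

# count_ids_in_ranges([1, 5, 9, 14], [(4, 6), (10, 20)]) -> 2
```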

Alternatively, using only iterators avoids having to manually manage indexes at all.

Day 4 (24:10, 28:09)

At this point I think I feel preemptively exhausted when I see a grid in AoC, just because I'm so tired of writing the same loops (this suggests I'm missing a grid abstraction that works for me out of the box).

I messed up the initial implementation a little, and worked out a more observable version that explicitly counted neighbors for each cell as a first step to check what was breaking. I forgot that [[0] * 5] * 5 repeats the same inner list object, which added a significant amount of time to debugging.
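
The aliasing pitfall in two lines, plus the comprehension that fixes it:

```python
# The outer * copies references, so all three rows are the SAME list.
grid = [[0] * 3] * 3
grid[0][0] = 1
# grid == [[1, 0, 0], [1, 0, 0], [1, 0, 0]] -- every row changed!

# The fix: build a fresh inner list per row.
grid = [[0] * 3 for _ in range(3)]
grid[0][0] = 1
# grid == [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```

`[0] * 3` itself is fine because ints are immutable; the trap is only in repeating a mutable inner object.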

The second part went fairly smoothly and just needed a little bit of refactoring to clean up my existing solution to apply it repeatedly.

Thinking through different ways to represent a grid:
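
A few common options, sketched (the grid contents are placeholders):

```python
raw = ["ab",
       "cd"]

# 1. List of lists: dense, indexed as grid[r][c], needs explicit bounds checks.
lol = [list(row) for row in raw]

# 2. Dict keyed by (row, col) tuples: sparse, and .get() makes
#    out-of-bounds lookups safe without any bounds checks.
d = {(r, c): ch for r, row in enumerate(raw) for c, ch in enumerate(row)}

# 3. Dict keyed by complex numbers: a position is one value, and the four
#    neighbors are just pos + 1, pos - 1, pos + 1j, pos - 1j.
z = {complex(c, r): ch for r, row in enumerate(raw) for c, ch in enumerate(row)}

assert lol[1][0] == d[(1, 0)] == z[0 + 1j] == "c"
```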

Iterating over neighbors:
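
Two generator helpers I tend to want here (names are my own):

```python
def neighbors4(r, c, rows, cols):
    """Yield the in-bounds orthogonal neighbors of (r, c)."""
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:
            yield nr, nc

def neighbors8(r, c, rows, cols):
    """Yield the in-bounds neighbors of (r, c), including diagonals."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                yield r + dr, c + dc

assert list(neighbors4(0, 0, 3, 3)) == [(1, 0), (0, 1)]
assert len(list(neighbors8(1, 1, 3, 3))) == 8
```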

I'd been thinking about using convolutions, and a couple of comments in the day's solutions do rely on scipy and convolve2d, which I should try out.
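
The convolution trick is: a 3x3 kernel of ones with a zero center, convolved over a 0/1 grid, yields each cell's live-neighbor count in one call. A dependency-free sketch of the same computation (scipy's convolve2d with mode="same" would produce this result directly):

```python
def neighbor_counts(grid):
    """Count 8-connected live neighbors per cell of a 0/1 grid."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                # Each live cell contributes 1 to all in-bounds neighbors.
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols:
                            out[r + dr][c + dc] += 1
    return out

counts = neighbor_counts([[1, 1],
                          [0, 1]])
# counts == [[2, 2], [3, 2]]
```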

Another solution mentions Python's ChainMap, which I hadn't known about until today, but it looks like an abstraction I'd enjoy using.
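
One way ChainMap fits grid problems (my guess at the appeal, not the linked solution): layering a step's updates over the previous state without copying the whole grid.

```python
from collections import ChainMap

base = {(0, 0): "#", (0, 1): "."}
step = ChainMap({}, base)  # writes land only in the front dict
step[(0, 1)] = "#"         # "update" a cell for this step

# The layered view sees the new value; the original grid is untouched,
# so discarding the front dict rolls the step back for free.
assert step[(0, 1)] == "#"
assert base[(0, 1)] == "."
```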

Looking over the discussions and thinking about it, I think using a set of tuples or complex numbers would be a much cleaner solution for today's problem.

Day 3 (10:17, 18:49)

I didn't think through the solution on my first attempt and paid for it in wasted time: I initially just took the largest two numbers and their relative positions, before running through the examples and realizing that I needed a greedy approach.

I must also admit to a brief moment of panic when I thought I'd have to do dynamic programming on day 3, but that passed after I thought for a few more seconds.

The cleaner approach I came up with afterwards, for the case where the stop index can cover the entire array, was to explicitly slice arr[x:len + stop], which needed no special casing.
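
The pitfall, concretely: a computed negative stop of 0 slices to the start rather than the end, so arr[x:stop] silently returns []. Shifting the stop by len(arr) makes it uniform (the helper name is my own):

```python
def tail_slice(arr, x, stop):
    """Take arr[x:] minus the last -stop elements, where stop <= 0.

    A plain arr[x:stop] breaks when stop == 0 (it yields []), so shift
    the stop index by len(arr) to avoid the special case.
    """
    return arr[x:len(arr) + stop]

arr = [1, 2, 3, 4, 5]
assert tail_slice(arr, 1, -2) == [2, 3]        # same as arr[1:-2]
assert tail_slice(arr, 1, 0) == [2, 3, 4, 5]   # arr[1:0] would be []
```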

Another fun problem all in all.

Day 2 (6:33, 12:46)

Today was special: we were driving back from a family vacation, and I did the problem using a hotspot from the back seat while Dad drove us through winding mountain roads.

Happily I had a pretty good time: 6:33 and 12:46 for parts 1 and 2. I ended up with a brute-force solution that stringified the numbers and compared the parts. After the first solve I've been playing a little to improve the speed of the substring checks.

A fun surprise (because I'm running these on an Android device with aggressive CPU frequency scaling going on all the time) was that the time to execute varied wildly -- after running with time I could see that I was improving the actual CPU utilisation, but wall time kept moving as the device adjusted to the load.

After that I decided to play with the fact that I was using free-threaded Python for the first time, and dropped in a concurrent.futures call to run each input range in parallel -- I'm surprisingly delighted to be able to write Python and use multiple cores just like that.
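
A sketch of that fan-out (the work function and ranges are placeholders, not the puzzle code): on a free-threaded build (3.13t/3.14t) threads actually run CPU-bound work on multiple cores; on a regular GIL build the same code runs but doesn't speed up.

```python
from concurrent.futures import ThreadPoolExecutor

def work(bounds):
    """Stand-in CPU-bound loop over an input range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

chunks = [(0, 50_000), (50_000, 100_000)]
with ThreadPoolExecutor() as pool:
    # map preserves chunk order, so partial results combine cleanly.
    total = sum(pool.map(work, chunks))
```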

Day 1 (6:12, 19:28)

Took around 15-20 minutes: I can't read the question on the Palma because of the low-contrast text on a dark background, and attempts at using the Readability extension didn't really work out, so I ended up copy/pasting the questions today (which was painful; I'll probably end up scripting something to work around this in the future).

Part 1 just worked at first try, which was a pleasant surprise. In Part 2 I treated the transition from 0 to negative as an extra click unintentionally -- only figured it out by working through the example.

I'm not sure of the ideal way to write out the math symmetrically so that I don't have to special-case a left rotation from 0; so far I haven't seen a great solution on Reddit -- a lot of the solutions there relied on explicitly doing the rotations and counting.
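
For the position arithmetic at least (this doesn't settle the click-counting question), Python's % helps: it always returns a non-negative result for a positive modulus, so wrapping left past 0 needs no special case. The dial size here is an assumption for illustration:

```python
DIAL = 100  # assumed dial size, for illustration only

pos = 0
pos = (pos - 3) % DIAL  # rotating left past zero wraps cleanly: -3 % 100 == 97
assert pos == 97
pos = (pos + 5) % DIAL
assert pos == 2
```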

For Python 3.14 I'm really enjoying the new fancier REPL, and need to incorporate working directly in the REPL much more to solve these problems. (As another aside: using the REPL with a collaborative AI sounds like a fun problem to solve; it's also so much more context for the model).

It's always humbling to notice all the small mistakes that come in when I'm trying to work quickly; but I think the bulk of today's time was trying to figure out the best way to be able to read the question instead of anything else.

Day 0

For preparation, I've been setting up a minimal environment to work in on the Boox: I initially planned to use uv before realising that Python 3.14t isn't directly available there yet. Happily I came across https://github.com/yubrajbhoi/termux-python, which has a custom patched version of Python. I'm relying on Obsidian to take notes as I go, with the programs and repository set up in Termux.

Finally, for editing: after a brief moment of weakness where I tried to see if I could get a good nvim editing experience in ten minutes, I'm happy to report that my last Emacs config refactor did what it was supposed to: I could get up and running with use-package auto-installing everything I wanted fairly rapidly.

I haven't succeeded in getting any of the LSP servers to work yet, but I did get some version of jedi running (though not against the local venv). Something to follow up on later.

Kunal