Advent of Code 2019: Commentary
December 1, 2019
Here's my running commentary on Advent of Code 2019, for which I'm using Go. The full code is available on GitHub.
Edit: Retrospective is here
Table of Contents
[Day 1] [Day 2] [Day 3] [Day 4] [Day 5] [Day 6] [Day 7] [Day 8] [intcode refactoring] [Day 9] [intcode refactoring round 2] [Day 10] [Day 11] [Day 12] [Day 13] [Day 14] [Day 15] [Day 16] [Day 17] [Day 18] [Day 19] [intcode reverse engineering] [Day 20] [Day 21] [Day 22] [Day 23] [Day 24] [Day 25]
Day 1
Both parts were straightforward. Each line of the input is just an integer, so we iterate over each line and convert it. For part 1 we just apply the fuelForMass function to each value and sum the total, while for part 2 we repeatedly apply fuelForMass until it becomes negative.
func fuelForMass(mass int) int {
    return (mass / 3) - 2
}

func fuelSum(mass int) int {
    sum := 0
    for {
        fuel := fuelForMass(mass)
        if fuel < 0 {
            break
        }
        sum += fuel
        mass = fuel
    }
    return sum
}
I spent more time deciding how to structure the program to do the scanning and parsing and so on in a reusable manner. Here's a place where generics would be nice:
func scanLines(r io.Reader) <-chan int {
    ch := make(chan int)
    go func() {
        scan := bufio.NewScanner(r)
        for scan.Scan() {
            i, err := strconv.Atoi(scan.Text())
            if err != nil {
                log.Fatal(err)
            }
            ch <- i
        }
        if err := scan.Err(); err != nil {
            log.Fatal(err)
        }
        close(ch)
    }()
    return ch
}
(The error handling is fine because this isn't a general-purpose library, in which case the error should be returned to the caller, and I want the program to die if it fails to process the input.)
scanLines would be reusable for each problem if it could take an argument of type func(string) T to parse each line, where the generic type T would also parameterize the output channel: <-chan T. As it is, without generics, I could either replace T with interface{}, or just do something else. One solution that's less bad than using interface{} is just passing the entire line, and letting each problem define a channel transformation function around a lines channel:
func scan(in <-chan string) <-chan int {
    ch := make(chan int)
    go func() {
        for line := range in {
            i, err := strconv.Atoi(line)
            if err != nil {
                log.Fatal(err)
            }
            ch <- i
        }
        close(ch)
    }()
    return ch
}
But this is as much code as scanLines itself, nearly all of it the same, which just ends up inflating the size of the program. In the end I opted to avoid the abstraction and bundle scanning with parsing together. It's less than twenty lines of code, and maybe a better abstraction will appear later.
Edit: I looked at the AoC subreddit and everyone writes very short solutions! So I wrote a shorter, less modular version that uses fmt.Fscanf instead of bufio.Scanner, with fuelForMass and fuelSum as defined above:
func main() {
    f, err := os.Open(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    sum, recSum := 0, 0
    var mass int
    for {
        if _, err := fmt.Fscanf(f, "%d\n", &mass); err == io.EOF {
            break
        } else if err != nil {
            log.Fatal(err)
        }
        sum += fuelForMass(mass)
        recSum += fuelSum(mass)
    }
    fmt.Println(sum)
    fmt.Println(recSum)
}
Full code is here.
Day 2
Reading the input was a bit different today, since it was comma-delimited instead of line-delimited, but I suppose one could have done s/,/\n/g and used line-reading logic. I read the entire file into memory and split around commas, but defining a bufio.SplitFunc and using bufio.Scanner would be necessary for inputs that don't fit into memory. After that, you just convert each token to an integer to get the data:
func readData(path string) []int {
    txt, err := ioutil.ReadFile(path)
    if err != nil {
        log.Fatal(err)
    }
    toks := strings.Split(string(txt[:len(txt)-1]), ",")
    data := make([]int, len(toks))
    for i, tok := range toks {
        val, err := strconv.Atoi(tok)
        if err != nil {
            log.Fatal(err)
        }
        data[i] = val
    }
    return data
}
Running the computer is just a for-loop with a switch statement. Make sure you're double-indexing into the data, though, since the arguments to the opcode specify addresses and not the argument values themselves:
func execute(data []int) {
    for i := 0; i < len(data); i += 4 {
        switch data[i] {
        case 1:
            data[data[i+3]] = data[data[i+1]] + data[data[i+2]]
        case 2:
            data[data[i+3]] = data[data[i+1]] * data[data[i+2]]
        case 99:
            return
        default:
            log.Fatalf("undefined opcode at position %d: %d", i, data[i])
        }
    }
}
This is enough to solve part 1. Note that, as mentioned in the problem text, for future problems we'll want to modify the loop to use an opcode-specific increment.
For part 2 I wrapped execution into a function to create a local copy of the data to avoid reusing memory from previous attempts, as required:
func executeWith(data []int, noun, verb int) int {
    local := make([]int, len(data))
    copy(local, data)
    local[1], local[2] = noun, verb
    execute(local)
    return local[0]
}
To find the answer I just brute-force searched for the noun and verb. An algorithmically cleverer solution is not obvious to me, but since the search is embarrassingly parallel, dividing the search space and launching some goroutines might speed it up. Quitting early when one of the goroutines found the answer could be implemented with a second done channel. I might come back later and implement that, but for now here's the rest of the code:
func main() {
    data := readData(os.Args[1])

    // part 1
    fmt.Println(executeWith(data, 12, 2))

    // part 2
    for noun := 0; noun <= 99; noun++ {
        for verb := 0; verb <= 99; verb++ {
            result := executeWith(data, noun, verb)
            if result == 19690720 {
                fmt.Println(100*noun + verb)
                return
            }
        }
    }
}
Full code and inputs and outputs are here.
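The parallel search with early exit mentioned above could be sketched like this. This is only a sketch of the idea, not code from the repo: the evaluation function is a stand-in for executeWith, and a done channel (closed at most once via sync.Once) tells the other goroutines to quit:

```go
package main

import (
    "fmt"
    "sync"
)

// searchParallel divides the noun space among workers and stops early,
// via the done channel, once any worker finds the target output.
func searchParallel(eval func(noun, verb int) int, target, workers int) (int, bool) {
    done := make(chan struct{})
    results := make(chan int, workers)
    var once sync.Once
    var wg sync.WaitGroup
    for w := 0; w < workers; w++ {
        wg.Add(1)
        go func(start int) {
            defer wg.Done()
            for noun := start; noun <= 99; noun += workers {
                for verb := 0; verb <= 99; verb++ {
                    select {
                    case <-done: // someone else found it; quit early
                        return
                    default:
                    }
                    if eval(noun, verb) == target {
                        results <- 100*noun + verb
                        once.Do(func() { close(done) })
                        return
                    }
                }
            }
        }(w)
    }
    wg.Wait()
    close(results)
    answer, ok := <-results
    return answer, ok
}

func main() {
    // Stand-in for executeWith: some function of noun and verb.
    eval := func(noun, verb int) int { return 1000*noun + verb + 42 }
    fmt.Println(searchParallel(eval, 1000*12+34+42, 4)) // 1234 true
}
```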
Edit: Some folks on Reddit (1 2 among others) came up with clever solutions to part 2 that don't use brute force! One used a constraint solver, which I've not seen since college and don't remember much about, and another used symbolic algebra. Shout out to Eric, the creator, for not requiring these approaches but making them possible! (He did a talk about creating Advent of Code that's worth watching. Link is here.)
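To make the symbolic-algebra idea concrete: the output is (or is assumed here to be) an affine function of noun and verb, output = base + a*noun + b*verb, so three probes of the machine determine base, a, and b, and we can solve for the target directly. This is my hedged sketch with a stand-in for executeWith; it also assumes a divides evenly, which holds when verb's coefficient is 1:

```go
package main

import "fmt"

// solveAffine assumes eval(noun, verb) = base + a*noun + b*verb and
// recovers noun and verb for the target by probing eval three times.
func solveAffine(eval func(noun, verb int) int, target int) (noun, verb int) {
    base := eval(0, 0)
    a := eval(1, 0) - base
    b := eval(0, 1) - base
    noun = (target - base) / a
    verb = (target - base - a*noun) / b
    return noun, verb
}

func main() {
    // Stand-in for executeWith with an arbitrary affine form.
    eval := func(noun, verb int) int { return 250000 + 1000*noun + verb }
    fmt.Println(solveAffine(eval, 250000+1000*12+2)) // 12 2
}
```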
Day 3
The input is essentially two lists of vectors (magnitude and direction). I wasted a bunch of time switching between parsing with ioutil.ReadFile and strings.Split vs. bufio.Scanner and bufio.Reader, but the former used fewer lines so I kept it. I'll omit it here because it's uninteresting, but the full code is on GitHub.
Once we have the input, we need to convert it into a list of coordinates along the paths. We'll then find intersections along the two paths. The vector-path-to-coord-path conversion is pretty verbose; without gofmt I'd just collapse all the switch case branches into single lines.
type vec struct {
    dir  rune
    dist int
}

type coord struct {
    x, y int
}

func toPath(vecPath []vec) []coord {
    var path []coord
    var cur coord
    for _, v := range vecPath {
        var dim *int
        d := 0
        switch v.dir {
        case 'U':
            dim = &cur.y
            d = +1
        case 'D':
            dim = &cur.y
            d = -1
        case 'R':
            dim = &cur.x
            d = +1
        case 'L':
            dim = &cur.x
            d = -1
        default:
            log.Fatalf("bad direction: %c", v.dir)
        }
        for n := v.dist; n > 0; n-- {
            *dim += d
            path = append(path, cur)
        }
    }
    return path
}
Edit: Somebody on the subreddit used a map of direction character to x and y diffs, which reduces the lines of code by quite a bit.
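That direction-map idea might look like the following sketch (my reconstruction, not the Reddit code): a map from direction rune to a unit-step delta replaces the whole switch in toPath:

```go
package main

import "fmt"

type coord struct{ x, y int }

type vec struct {
    dir  rune
    dist int
}

// deltas maps each direction to a unit step, replacing the switch.
var deltas = map[rune]coord{
    'U': {0, 1}, 'D': {0, -1}, 'R': {1, 0}, 'L': {-1, 0},
}

func toPath(vecPath []vec) []coord {
    var path []coord
    var cur coord
    for _, v := range vecPath {
        d := deltas[v.dir]
        for n := 0; n < v.dist; n++ {
            cur.x += d.x
            cur.y += d.y
            path = append(path, cur)
        }
    }
    return path
}

func main() {
    fmt.Println(toPath([]vec{{'R', 2}, {'U', 1}})) // [{1 0} {2 0} {2 1}]
}
```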
Finding intersecting points and choosing the one that's closest to the starting point (i.e. closest to 0,0) isn't bad. I initially wasted time and lines of code on handling more than two wires, since I wasn't sure what would come in part 2. I cleaned it up a bit after finding the answer, since handling just two paths requires fewer loops and avoids some slices. I should have done that from the beginning, actually, since it's not hard to refactor from two to N parameters, and "maybe I'll need it later" is a terrible reason to add abstraction. I woke up at 5 to do this in bed in the dark, though, and I just didn't think clearly enough about it.
To intersect the two coordinate paths, we dump the first one into a map[coord]bool representing a set, and then look for all the coordinates in the second path that are in that set.
func findIntersects(coords1, coords2 []coord) []coord {
    coords := make(map[coord]bool)
    for _, c := range coords1 {
        coords[c] = true
    }
    var intersections []coord
    for _, c := range coords2 {
        if coords[c] {
            intersections = append(intersections, c)
        }
    }
    return intersections
}
Finding the closest intersection is just finding the one with the smallest magnitude.
func dist(c coord) int {
    return int(math.Abs(float64(c.x)) + math.Abs(float64(c.y)))
}

func closestIntersect(intersects []coord) int {
    var closestDist int
    for _, coord := range intersects {
        if d := dist(coord); closestDist == 0 || d < closestDist {
            closestDist = d
        }
    }
    return closestDist
}
For part 2, we need to find how many steps it took each path to reach each intersection point. Since the path coordinates are ordered according to the order in which they were visited, we can just iterate to find the index of the intersection point. Then we iterate over each intersection and add those step counts for the two paths:
func stepsTo(to coord, path []coord) int {
    for i, cur := range path {
        if cur == to {
            return i + 1
        }
    }
    return 0
}

func fastestIntersect(coords []coord, path1, path2 []coord) int {
    var speed int
    for _, c := range coords {
        sum := stepsTo(c, path1) + stepsTo(c, path2)
        if speed == 0 || sum < speed {
            speed = sum
        }
    }
    return speed
}
That's it! Full code is here.
Edit: I went back tonight and extracted libraries for 2d geometry, integer math, and the intcode machine. Hopefully this will come in handy later :)
Day 4
This was much more straightforward than the last two days. Loop over every number in the given range and test whether it's valid:
func countValidPasswords(from, to int) (int, int) {
    numValid1, numValid2 := 0, 0
    for i := from; i < to; i++ {
        p := toPassword(i)
        valid1, valid2 := valid(p)
        if valid1 {
            numValid1++
        }
        if valid2 {
            numValid2++
        }
    }
    return numValid1, numValid2
}
We need access to the individual digits in order to check validity:
func toPassword(x int) [6]byte {
    var p [6]byte
    for i := 5; i >= 0; i-- {
        p[i] = byte(x % 10)
        x /= 10
    }
    return p
}
For checking part 1 validity, we only need to look at one digit at a time and its neighbor to the right. We can stop immediately if we find a decreasing digit, and all we have to track is whether we've already seen a repeat.
func valid(p [6]byte) bool {
    twoAdjacentSame := false
    for i := 0; i < len(p)-1; i++ {
        if p[i] > p[i+1] {
            return false
        }
        if p[i] == p[i+1] {
            twoAdjacentSame = true
        }
    }
    return twoAdjacentSame
}
Adding the part 2 condition that there's a repeating group of length exactly two means we need to keep track of the length of the current group in addition to tracking whether the condition has already been satisfied. Since we can only know if the repeating group is length 2 after exiting it, we have to check it in the loop as well as after exiting:
func valid(p [6]byte) (bool, bool) {
    twoAdjacentSame, onlyTwoAdjacentSame := false, false
    matchLen := 1
    for i := 0; i < len(p)-1; i++ {
        if p[i] > p[i+1] {
            return false, false
        }
        if p[i] == p[i+1] {
            twoAdjacentSame = true
            matchLen++
        } else if matchLen == 2 {
            onlyTwoAdjacentSame = true
        } else {
            matchLen = 1
        }
    }
    return twoAdjacentSame, onlyTwoAdjacentSame || matchLen == 2
}
Full code is on GitHub.
Lots of people on Reddit used either a regular expression or sorting+sets for validation. My validation algorithm is O(n) in the length of the password, whereas sorting a string and setifying it for each candidate is O(n log n). For small inputs this doesn't matter, of course, and doing something clever produces much shorter code than mine: mine takes 18 lines to validate. The regex idea is interesting but produces non-trivial regexes with lots of captures, and I personally avoid writing non-trivial regexes since they can be pretty opaque.
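For illustration, the sort-and-set approach could be sketched as follows (my reconstruction, not any particular Reddit solution): a digit string is non-decreasing iff it's already sorted, and it contains an adjacent repeat iff the set of its digits is smaller than the digit count (since in a sorted string, equal digits are adjacent):

```go
package main

import (
    "fmt"
    "sort"
    "strings"
)

// validSortSet checks part 1 validity: digits never decrease and at
// least one adjacent pair repeats.
func validSortSet(s string) bool {
    digits := strings.Split(s, "")
    if !sort.StringsAreSorted(digits) {
        return false // a decreasing digit somewhere
    }
    set := make(map[string]bool)
    for _, d := range digits {
        set[d] = true
    }
    return len(set) < len(digits) // some digit appears twice
}

func main() {
    fmt.Println(validSortSet("111123"), validSortSet("135679"), validSortSet("223450"))
    // true false false
}
```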
I have to say, looking at all the Python solutions makes me jealous of Python for this kind of problem-solving. I find it hard to imagine a readable Go implementation of any of these so far that's less than ~70 lines or so, whereas the Python implementations I see are pretty consistently ~10 lines or less. On the other hand, my Go solutions appear (to me, of course) very explicit and well-factored. This could be a result of the language forcing such a style, or it could be my lack of experience with code golfing and the impact of (too much? :) professional software development.
Day 5
This is deeply satisfying:
day5 $ time ./day5 input.txt 1 5
9025675
11981754

real 0m0.006s
user 0m0.000s
sys 0m0.006s
Today's problem involved extending the intcode computer from Day 2, but I just decided to write it again from scratch. That code made certain assumptions that were violated by today's problem (particularly read modes and input/output), and I had a clear enough idea about how to proceed that I felt it would be faster to write from scratch than to refactor. I'm keeping it as a post-AoC TODO to come back and unify all of the implementations.
Reading the input is just a string split on commas and mapping strconv.Atoi, so I'll omit that and jump to instruction parsing. First, how to represent modes (unnecessary, actually; we could just store a byte) and instructions:
type mode int

const (
    POS mode = iota
    IMM
)

type instr struct {
    opcode int
    params int
    modes  []mode
}
To parse an instr, we split the instruction integer into the opcode and modes parts using i%100 and i/100, then pull off individual mode digits one at a time. This resembles the password validation from Day 4:
func parseInstr(i int) instr {
    var in instr
    in.opcode = i % 100
    in.params = opcodeToParam[in.opcode]
    for i /= 100; len(in.modes) < in.params; i /= 10 {
        in.modes = append(in.modes, mode(i%10))
    }
    return in
}
Notice how we handle the leading zeroes: since repeated division of zero is zero, which is indeed the desired mode for a dropped leading zero, we just keep pulling off leading zeroes for as many parameters as are expected for the given opcode. We keep the number of parameters expected for each opcode in a map:
var opcodeToParam = map[int]int{
    1: 3, 2: 3, 3: 1, 4: 1, 5: 2, 6: 2, 7: 3, 8: 3, 99: 0,
}
So now we can parse an instruction from an integer in the data. Before proceeding to execution, let's talk about retrieving opcode arguments now that we have two parameter modes (immediate and position). I extracted lookup into a function that handles this so it doesn't further clutter the execution logic:
func get(data []int, i int, m mode) int {
    v := data[i]
    switch m {
    case POS:
        return data[v]
    case IMM:
        return v
    }
    log.Fatalf("unknown mode: %d", m)
    return 0
}
Pretty straightforward. This is used during program execution, implemented again (as in day 2) using a loop and a switch over opcodes. It's getting pretty long now; if we keep adding opcodes, it would be worth generalizing a bit.
Let's look at the code first, and then I'll explain a bit more.
func run(data []int, in <-chan int, out chan<- int) {
    for i := 0; i < len(data); {
        instr := parseInstr(data[i])
        switch instr.opcode {
        case 1:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            s := data[i+3]
            data[s] = l + r
            i += instr.params + 1
        case 2:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            s := data[i+3]
            data[s] = l * r
            i += instr.params + 1
        case 3:
            s := data[i+1]
            data[s] = <-in
            i += instr.params + 1
        case 4:
            v := get(data, i+1, instr.modes[0])
            out <- v
            i += instr.params + 1
        case 5:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            if l != 0 {
                i = r
            } else {
                i += instr.params + 1
            }
        case 6:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            if l == 0 {
                i = r
            } else {
                i += instr.params + 1
            }
        case 7:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            s := data[i+3]
            if l < r {
                data[s] = 1
            } else {
                data[s] = 0
            }
            i += instr.params + 1
        case 8:
            l := get(data, i+1, instr.modes[0])
            r := get(data, i+2, instr.modes[1])
            s := data[i+3]
            if l == r {
                data[s] = 1
            } else {
                data[s] = 0
            }
            i += instr.params + 1
        case 99:
            close(out)
            return
        }
    }
}
The logic for opcodes 1 and 2 is the same as in day 2. For input/output in opcodes 3 and 4 I used channels: the machine takes a read-only channel for getting input and a write-only channel for emitting output. This completely abstracts the implementation details from the program execution logic.
For part 2, the opcode 5 and 6 logic for jumping is relatively straightforward (retrieve the arguments, compare, then update the program counter appropriately), and the opcode 7 and 8 compare-and-store logic is similar to opcodes 1 and 2.
Running the machine now requires copying the input data, setting up the channels, and starting execution in a goroutine. We go ahead and send the single input value on a buffered channel so it doesn't block, then read and discard all output values until the last one, which is returned as the final result.
func execute(data []int, input int) int {
    data = copied(data)
    in, out := make(chan int, 1), make(chan int)
    in <- input
    go run(data, in, out)
    var o int
    for o = range out {
    }
    close(in)
    return o
}
During debugging it was useful to print the outputs, and a nicer implementation of the machine would support this with a flag, but I removed the logging when I had the answers to both parts.
Full code is here.
Now that we have branching and jumping, the machine should now be Turing complete, i.e. it should be able to do anything that any given programming language can do. In particular, one could write a C compiler (or Go, or Haskell, etc) that emits this intcode to be executed by our machine. I'm sure someone on Reddit will do this in the coming days and weeks :)
Edit: It took less than one day: Intscript
Day 6
This was my quickest puzzle since day 1, about 40 minutes to both answers. So far it's taken me between an hour and 1:20 or so to finish each day, regardless of difficulty, of which I think a large part is reading data and building the right data structures. This required only one data structure, though, and parsing it was simple: not even any strconv.Atoi today.
type orbit struct {
    orbiter, orbited string
}
Both problems needed a graph representation to store links between orbiter and orbited. Using a real graph might have had some nice properties, but I used a map since it's so easy to look up and iterate, whereas finding or writing a real graph structure would have required either learning its API or building one. This is one nice property of using a standard library / builtin language features over dependencies: usage patterns are the ones you already know.
func orbitMap(orbits []orbit) map[string]string {
    m := make(map[string]string)
    for _, o := range orbits {
        m[o.orbiter] = o.orbited
    }
    return m
}
To count orbits for any object k, we need to count orbits for the object v that it is orbiting, and so on recursively until reaching an object that orbits nothing. I used iteration here instead of explicit recursion, again because it's so straightforward with a for loop over the map:
func chain(k string, m map[string]string) []string {
    var chain []string
    for v, ok := m[k]; ok; v, ok = m[v] {
        chain = append(chain, v)
    }
    return chain
}

func countOrbits(orbits map[string]string) int {
    n := 0
    for k := range orbits {
        n += len(chain(k, orbits))
    }
    return n
}
(Note that we could cache the transitive chains as we build them, instead of visiting the entire chain for each object. See: dynamic programming)
So we look at each orbit, find the chain of all indirectly orbited objects, and sum up the chain lengths.
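The caching suggested in the note above might look like this sketch; countOrbitsMemo is my name for it, not something from the post:

```go
package main

import "fmt"

// countOrbitsMemo counts direct and indirect orbits, caching each
// object's chain length so every object is resolved only once.
func countOrbitsMemo(orbits map[string]string) int {
    depth := make(map[string]int)
    var depthOf func(k string) int
    depthOf = func(k string) int {
        parent, ok := orbits[k]
        if !ok {
            return 0 // k orbits nothing
        }
        if d, cached := depth[k]; cached {
            return d
        }
        d := 1 + depthOf(parent)
        depth[k] = d
        return d
    }
    n := 0
    for k := range orbits {
        n += depthOf(k)
    }
    return n
}

func main() {
    m := map[string]string{"B": "COM", "C": "B", "D": "C"}
    fmt.Println(countOrbitsMemo(m)) // B:1 + C:2 + D:3 = 6
}
```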
For part 2, we want to find the orbit chains for YOU and SAN, find the object o that is closest to both of them, and return the sum of the distances from each of YOU and SAN to o:
func closestAncestor(chain1, chain2 []string) (string, int, int) {
    m := make(map[string]int)
    for i, k := range chain1 {
        m[k] = i
    }
    for i, o := range chain2 {
        if j, ok := m[o]; ok {
            return o, i, j
        }
    }
    return "", 0, 0
}
First we dump all of the first chain in a map to keep track of the distances for each object in it, then look at the second chain and find the first object that's in the map. That's the closest ancestor, and we return its index in the chain as the second distance and its value in the map as the first distance.
This works because the chains are sorted by distance from the starting object (see the function chain above), so the first object we visit in the second chain that's also in the first one will always be closer than any other ancestor: any closer ancestor for the second chain would have been visited earlier in the loop over the second chain.
The number of required transfers is just the sum of these two distances:
func transfers(orbits map[string]string, from, to string) int {
    fromChain, toChain := chain(from, orbits), chain(to, orbits)
    _, dist1, dist2 := closestAncestor(fromChain, toChain)
    return dist1 + dist2
}
That's it! Code is on GitHub.
Day 7
Well, today I wasted half an hour on how to generate permutations. I misread and started by generating with replacement (i.e. 44444 is valid), and then, since this produced signals that didn't match the examples, I spent a while trying to debug the execution logic. Anyway, after adapting the code to properly generate without replacement, I couldn't figure out how to detect that all permutations had been generated and that the output channel could be closed. Eventually I gave up and closed it manually after n! values had been produced. This could surely be much cleaner. I look forward to reading the solutions where they produce all of these in a single line of Python...
func fact(n int) int {
    if n < 1 {
        return 1
    }
    return n * fact(n-1)
}

func genSeq(s *seq, i int, avail map[int]bool, out chan<- seq) {
    if i >= 5 {
        out <- *s
        return
    }
    for phase, free := range avail {
        if free {
            avail[phase] = false
            s[i] = phase
            genSeq(s, i+1, avail, out)
            avail[phase] = true
        }
    }
}

func genSeqs(phases []int) chan seq {
    out := make(chan seq)
    var s seq
    go func() {
        ch := make(chan seq)
        avail := availMap(phases)
        go genSeq(&s, 0, avail, ch)
        for i := 0; i < fact(len(s)); i++ {
            s := <-ch
            out <- s
        }
        close(out)
    }()
    return out
}
Essentially, to generate sequences of length n, we choose a number that's not already been used (tracked in a map[int]bool), generate sequences of length n-1, and use recursion. We return the values in a channel because I wanted to be able to do for seq := range genSeqs(). I think this would be much cleaner if I were to just generate one giant slice of all the permutations that could be modified recursively and returned. Anyway, lots of channel usage in this problem, which was new for me; I've not really used them much until now.
For machine execution, I adapted the input/output so that each execution takes and returns a channel. Then we can send and receive an arbitrary number of signals, which helps in part 2. The only trickiness is appending the phase setting to the front of the input channel.
func execute(data []int, phase int, signals <-chan int) chan int {
    data = copied(data)
    in, out := make(chan int), make(chan int)
    go func() {
        in <- phase
        for signal := range signals {
            in <- signal
        }
        close(in)
    }()
    go run(data, in, out)
    return out
}
Now we can execute a given phase sequence by kicking off the five amplifiers, using the output value from one amplifier as the input to the next amplifier. The output from the final amplifier's output channel is the generated signal.
func executeSeq(data []int, s seq) int {
    out := 0
    for _, phase := range s {
        in := make(chan int, 1)
        in <- out
        close(in)
        out = <-execute(data, phase, in)
    }
    return out
}
To find the max signal, we iterate over all sequences using genSeqs and track the largest signal produced so far. This implementation takes the execution function as a parameter, which we need for part 2.
func maxSignal(data []int, exec func([]int, seq) int, nums []int) int {
    max := 0
    for seq := range genSeqs(nums) {
        out := exec(data, seq)
        if out > max {
            max = out
        }
    }
    return max
}
Okay, so part 2 requires looping the inputs and outputs. This is pretty straightforward using our channel mechanism: instead of piping the single output value produced by each amplifier into the next one, we send the entire output channel from one amplifier as the input channel to the next one. Then, instead of returning the first value produced by the final amplifier, we save a copy of it and pipe it back into the first amplifier until the channel is closed (which happens when the final amplifier halts). The last output value we saw is the signal.
func executeWithFeedback(data []int, s seq) int {
    // send the initial input
    in := make(chan int, 1)
    in <- 0

    // pipe the amplifiers together
    out := in
    for _, phase := range s {
        out = execute(data, phase, out)
    }

    // pipe the output back into the input, but keep track of it
    var o int
    for o = range out {
        in <- o
    }

    // last output is the output signal
    return o
}
This function is used for a second call to maxSignal as defined above, since the logic of looping over sequences and tracking the max signal is identical.
func main() {
    data := read(os.Args[1])
    fmt.Println(maxSignal(data, executeSeq, []int{0, 1, 2, 3, 4}))
    fmt.Println(maxSignal(data, executeWithFeedback, []int{5, 6, 7, 8, 9}))
}
This one was pretty difficult, although I think if I had more experience with channels (and hadn't misread the sequence requirements) it would have been much more straightforward, because using channels for the input and output really makes it easy to hook the machines together and run them concurrently.
All of the actual intcode machine logic is identical to Day 5. Full code is on GitHub.
Edit: I came back and replaced the terrible, complex channel-based generation algorithm with a simple recursive one that builds a straightforward slice of the sequences. Here it is:
{% raw %}
func genSeqsRec(nums []int, used map[int]bool, until int) []seq {
    if until == 0 {
        return []seq{{}}
    }
    var seqs []seq
    for _, num := range nums {
        if !used[num] {
            used[num] = true
            for _, recSeq := range genSeqsRec(nums, used, until-1) {
                recSeq[until-1] = num
                seqs = append(seqs, recSeq)
            }
            used[num] = false
        }
    }
    return seqs
}

func genSeqs(nums []int) []seq {
    return genSeqsRec(nums, make(map[int]bool), len(nums))
}
{% endraw %}
Essentially, we choose a phase setting and mark it unavailable, then generate all phase sequences of length n-1 without the phase we removed, then add the one we removed to the end of all the recursively-generated sequences. We do this for each phase setting. Should have started there, but got a bit channel-ambitious :)
Day 8
A nice and easy one today, even though I spent way too much time on reading
and parsing the input, just like every day. These problems are great practice
for this, though. One thing I want to do is actually move away from using
ioutil.ReadFile
and do proper scanning, which today I started to do again
before realizing I was spending a lot of time on it. In retrospect, who
cares? Well, for one, my daughter, who is awake and I'll need to get out of
her crib in fourteen minutes. Maybe I can come back to it later and refactor
to using byte scanning. At some point I'll have to actually spend the time,
otherwise I'll always be more comfortable with manipulating the entire thing
in memory.
Anyway, we first read in the pixel data layer-by-layer into an image structure:
type image struct {
    width, height int
    layers        [][]byte
}
To compute the "checksum", we iterate over the layers, find the one that has the fewest zeroes, and multiply the number of ones and twos together. Here's a place where Go is just depressingly, overboard verbose.
func zeroes(pixels []byte) int {
    n := 0
    for _, p := range pixels {
        if p == 0 {
            n++
        }
    }
    return n
}

func layerChecksum(pixels []byte) int {
    ones, twos := 0, 0
    for _, p := range pixels {
        switch p {
        case 1:
            ones++
        case 2:
            twos++
        }
    }
    return ones * twos
}

func checksum(img image) int {
    fewestLayer := img.layers[0]
    fewestZeroes := zeroes(fewestLayer)
    x := layerChecksum(fewestLayer)
    for _, layer := range img.layers[1:] {
        if z := zeroes(layer); z < fewestZeroes {
            fewestZeroes = z
            fewestLayer = layer
            x = layerChecksum(layer)
        }
    }
    return x
}
This is just so much code. Some of it could be reduced by inlining the switch statement into the checksum for-loop and including the zeroes in it, but we're still talking about twentyish lines of code at a minimum.
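For comparison, the inlined single-pass version might look like this sketch (my condensation, not the post's code; it assumes pixel values are only 0, 1, or 2):

```go
package main

import "fmt"

// checksum finds the layer with the fewest zeroes and returns its
// count of ones multiplied by its count of twos, in a single pass.
func checksum(layers [][]byte) int {
    best, fewest := 0, -1
    for _, layer := range layers {
        var counts [3]int // counts[p] = occurrences of pixel value p
        for _, p := range layer {
            counts[p]++
        }
        if fewest == -1 || counts[0] < fewest {
            fewest = counts[0]
            best = counts[1] * counts[2]
        }
    }
    return best
}

func main() {
    layers := [][]byte{{0, 0, 1, 2}, {0, 1, 1, 2}}
    fmt.Println(checksum(layers)) // second layer has fewer zeroes: 2 ones * 1 two = 2
}
```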
Anyway, enough whining, because yesterday's problem showed how Go can make hard problems easy: from reading the subreddit it's clear that a lot of people had trouble with running several copies of the intcode machine and connecting them together, while for me this was an almost-trivial change using channels and goroutines.
To decode the image, we can start at the bottom layer and move upwards, always replacing a pixel if it's non-transparent:
func apply(base, layer []byte) {
    for i := 0; i < len(base); i++ {
        if layerPix := layer[i]; layerPix != 2 {
            base[i] = layerPix
        }
    }
}

func decode(img image) []byte {
    b := make([]byte, img.width*img.height)
    for i := len(img.layers) - 1; i >= 0; i-- {
        apply(b, img.layers[i])
    }
    return b
}
Now we just need to print it:
func printImage(b []byte, width, height int) {
    for i := 0; i < height; i++ {
        for j := 0; j < width; j++ {
            pix := b[i*width+j]
            if pix == 0 {
                fmt.Printf(" ")
            } else {
                fmt.Printf("%d", pix)
            }
        }
        fmt.Println()
    }
}
That's it. Code is on GitHub as usual :)
Intcode refactoring
On Sunday night I decided to refactor my intcode implementation, which paid off the next day. Let me go over what I did a bit before getting to the day 9 changes.
To begin with, I extracted machine state into a struct:
type machine struct {
    data    []int
    relbase int
    in      <-chan int
    out     chan<- int
}

func newMachine(data []int, in <-chan int, out chan<- int) *machine {
    m := &machine{data: make([]int, len(data)), in: in, out: out}
    copy(m.data, data)
    return m
}
I also extracted get/set/read/write into methods:
func (m *machine) get(i int, md mode) int {
    v := m.data[i]
    switch md {
    case pos:
        return m.data[v]
    case imm:
        return v
    case rel:
        return m.data[v+m.relbase]
    }
    log.Fatalf("unknown mode: %d", md)
    return 0
}

func (m *machine) set(i, x int, md mode) {
    switch md {
    case pos:
        m.data[i] = x
    case rel:
        m.data[i+m.relbase] = x
    default:
        log.Fatalf("bad mode for write: %d", md)
    }
}

func (m *machine) read() int {
    return <-m.in
}

func (m *machine) write(x int) {
    m.out <- x
}
Then I defined enums for the opcodes:
type opcode int

const (
    add opcode = 1
    mul        = 2
    read       = 3
    print      = 4
    jmpif      = 5
    jmpnot     = 6
    lt         = 7
    eq         = 8
    halt       = 99
)
This makes it easier to read the code and to refer to the opcodes in the arity map and in the new handler map: I moved the opcode implementations out of the giant for-switch and into a map of handlers. For example:
type handler func(m *machine, pc int, instr instruction) (int, bool)

var handlers = map[opcode]handler{
	add: func(m *machine, pc int, instr instruction) (int, bool) {
		l := m.get(pc+1, instr.modes[0])
		r := m.get(pc+2, instr.modes[1])
		s := m.data[pc+3]
		m.set(s, l+r, instr.modes[2])
		return pc + instr.arity + 1, true
	},
}
The handler returns the updated program counter and a boolean indicating whether the machine should continue running. So HALT is just:
halt: func(m *machine, pc int, instr instruction) (int, bool) {
	return 0, false
},
This simplifies the core execution logic, which now just adjusts the program counter and invokes opcode handlers.
func (m *machine) run() {
	for pc, ok := 0, true; ok; {
		instr := parseInstruction(m.data[pc])
		if h, present := handlers[instr.op]; present {
			pc, ok = h(m, pc, instr)
		} else {
			log.Fatalf("bad instr at pos %d: %v", pc, instr)
		}
		if !ok {
			close(m.out)
		}
	}
}
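parseInstruction isn't shown in the post; given how run and the handlers use instr.op, instr.modes, and instr.arity, a sketch might look like the following (the numeric arities table here is a hypothetical stand-in for the post's arity map):

```go
package main

import "fmt"

type mode int

const (
	pos mode = 0
	imm mode = 1
	rel mode = 2
)

type opcode int

type instruction struct {
	op    opcode
	modes [3]mode
	arity int
}

// arity per opcode; abbreviated here for illustration
var arities = map[opcode]int{1: 3, 2: 3, 3: 1, 4: 1, 99: 0}

// parseInstruction decodes an encoded instruction like 1002: the two
// lowest decimal digits are the opcode, and each higher digit is the
// parameter mode for the corresponding operand.
func parseInstruction(encoded int) instruction {
	op := opcode(encoded % 100)
	var modes [3]mode
	for i, rest := 0, encoded/100; i < 3; i, rest = i+1, rest/10 {
		modes[i] = mode(rest % 10)
	}
	return instruction{op, modes, arities[op]}
}

func main() {
	fmt.Println(parseInstruction(1002)) // mul with modes [pos imm pos]
}
```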
The full code for the extracted intcode machine is on GitHub. The package's public interface is:
$ go doc github.com/dhconnelly/adventofcode2019/intcode

package intcode // import "github.com/dhconnelly/adventofcode2019/intcode"

func ReadProgram(path string) ([]int, error)
func RunProgram(data []int, in <-chan int) <-chan int
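A caller drives this interface entirely through channels. Here's a sketch of the calling pattern: runStub below is a hypothetical stand-in that just doubles each input, since the real interpreter lives in the package, but the channel plumbing mirrors the RunProgram signature above.

```go
package main

import "fmt"

// runStub mimics the shape of intcode.RunProgram: it consumes the
// input channel, produces outputs, and closes the output channel on
// halt. The doubling is a stand-in for the actual VM.
func runStub(data []int, in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for v := range in {
			out <- 2 * v
		}
	}()
	return out
}

func main() {
	in := make(chan int)
	out := runStub(nil, in)
	// feed inputs from another goroutine so the main loop can drain
	// outputs without deadlocking
	go func() {
		in <- 21
		close(in)
	}()
	for v := range out {
		fmt.Println(v)
	}
}
```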
Day 9
Okay, so now we've finished implementing the intcode computer. I suspect we're not done using it, but okay :) Today's problem was not bad for me:
- Support for large numbers just worked with no changes. Go's int type is machine-defined, though, so this may not work on a 32-bit machine, which would require switching to explicit int64 types.
- To support memory beyond the program, I initially started to add slice expansion logic into the machine's get and set methods. It then occurred to me that I could just use a map[int]int for memory instead of a slice. I looked through the usages of the data slice, decided that none of them required actual slice semantics (i.e. no appends and no explicit dependency on ordering or contiguous data), then simply switched to a map. This worked perfectly with no problems.
- Adding relative mode support involved adding a field to the machine struct and updating get and set to support that mode.
func (m *machine) get(i int, md mode) int {
	v := m.data[i]
	switch md {
	/* snip */
	case rel:
		return m.data[v+m.relbase]
	}
	log.Fatalf("unknown mode: %d", md)
	return 0
}

func (m *machine) set(i, x int, md mode) {
	switch md {
	/* snip */
	case rel:
		m.data[i+m.relbase] = x
	default:
		log.Fatalf("bad mode for write: %d", md)
	}
}

var handlers = map[opcode]handler{
	/* snip */
	adjrel: func(m *machine, pc int, instr instruction) (int, bool) {
		v := m.get(pc+1, instr.modes[0])
		m.relbase += v
		return pc + instr.arity + 1, true
	},
	/* snip */
}
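The map-backed memory from the second bullet works because Go maps return the element type's zero value for missing keys, so reads beyond the end of the program are simply 0 and no slice expansion logic is needed. A minimal sketch of the idea (not the post's actual code):

```go
package main

import "fmt"

func main() {
	program := []int{1101, 2, 3, 0, 99}

	// copy the program into sparse, map-backed memory
	mem := make(map[int]int, len(program))
	for i, v := range program {
		mem[i] = v
	}

	// in-range reads behave exactly like the slice did...
	fmt.Println(mem[4]) // 99

	// ...and reads far beyond the program just yield the zero value,
	// with no bounds handling required
	fmt.Println(mem[1000000]) // 0
}
```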
This took me about a half an hour and produced correct answers to the examples as well as both parts of the problem on the first try. I'm glad we didn't have to implement all of this in a single day, and I'm also glad that it was spread over several days so that I was able to think about the implementation and possible improvements over the entire week. Even when it seems straightforward to write a program, the right structure is usually not clear at the beginning. It takes rewriting and thinking about it offline and talking about it and so on before the right structure starts to appear.
Quick stats, since I was surprised that it didn't take my Chromebook longer to run, given the warning that it may take a few seconds:
day9 $ time ./day9 input.txt 1 2
2494485073
44997

real	0m0.110s
user	0m0.103s
sys	0m0.016s
Edit: I came back and made a couple of small changes: first I switched to explicit int64 types in the machine, so that big number handling isn't machine-specific, and then I moved the program counter into the machine state and moved updating it into the opcode handlers, which makes it easier to inspect the machine state (in case we need - or I want - to do some sort of single-instruction stepping or debugging).
Full code for the updated intcode machine is on GitHub, as is the Day 9-specific code.
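With the program counter living in the machine, a handler no longer needs to take and return it. A sketch of what that shape might look like (position mode only, hypothetical names; the real code is in the linked repo):

```go
package main

import "fmt"

type machine struct {
	pc   int64
	data map[int64]int64
}

// Handlers mutate m.pc directly, which makes it possible to inspect
// the full machine state between steps - e.g. for single-stepping.
type handler func(m *machine) bool

var handlers = map[int64]handler{
	// add (position mode only, for brevity)
	1: func(m *machine) bool {
		m.data[m.data[m.pc+3]] = m.data[m.data[m.pc+1]] + m.data[m.data[m.pc+2]]
		m.pc += 4
		return true
	},
	// halt
	99: func(m *machine) bool { return false },
}

func (m *machine) run() {
	for ok := true; ok; {
		ok = handlers[m.data[m.pc]%100](m)
	}
}

func main() {
	prog := []int64{1, 5, 6, 7, 99, 20, 22, 0}
	m := &machine{data: make(map[int64]int64)}
	for i, v := range prog {
		m.data[int64(i)] = v
	}
	m.run()
	fmt.Println(m.data[7]) // 42
}
```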
Intcode refactoring round 2
After talking with a colleague it was clear that the parameter modes for writes can be tricky. My implementation works but is inconsistent in what arguments should be provided to get and set, so I've refactored a bit and added some comments. This affects machine.go as well as its callers in opcodes.go:
// Retrieves a value according to the specified mode.
//
// * In immediate mode, returns the value stored at the given address.
//
// * In position mode, the value stored at the address is interpreted
//   as a *pointer* to the value that should be returned.
//
// * In relative mode, the machine's current relative base is interpreted
//   as a pointer, and the value stored at the address is interpreted
//   as an offset to that pointer. The value stored at the *resulting*
//   address is returned.
func (m *machine) get(addr int64, md mode) int64 {
	v := m.data[addr]
	switch md {
	case pos:
		return m.data[v]
	case imm:
		return v
	case rel:
		return m.data[v+m.relbase]
	}
	log.Fatalf("unknown mode: %d", md)
	return 0
}

// Sets a value according to the specified mode.
//
// * In position mode, the value stored at the given address specifies
//   the address to which the value should be written.
//
// * In relative mode, the value stored at the given address specifies
//   an offset to the relative base, and the sum of the offset and the
//   base specifies the address to which the value should be written.
func (m *machine) set(addr, val int64, md mode) {
	v := m.data[addr]
	switch md {
	case pos:
		m.data[v] = val
	case rel:
		m.data[v+m.relbase] = val
	default:
		log.Fatalf("bad mode for write: %d", md)
	}
}
I think that's a bit clearer, but it changes the callers of set to provide the simple argument location instead of dereferencing it first. For example:
var handlers = map[opcode]handler{
	/* snip */
	mul: func(m *machine, instr instruction) bool {
		l := m.get(m.pc+1, instr.modes[0])
		r := m.get(m.pc+2, instr.modes[1])
		m.set(m.pc+3, l*r, instr.modes[2])
		m.pc += instr.arity + 1
		return true
	},
	/* snip */
}
Day 10
This took me three hours and I haven't had time to write up this post, but I'll go ahead and link to the code on GitHub. I also wrote a cleaned-up version that uses angles instead of precomputed integral diffs, which is here.
Edit: Okay, off work now, waiting to head over to our annual holiday party and finally have time to write this up. To summarize: I made two bad choices in the first ten minutes, struggled along with it for more than an hour and a half, then ended up having to abandon it in part 2 anyway. Those choices were (1) using integral (dx,dy) pairs instead of float64 slopes for finding line intersections, and (2) trying to avoid an O(n^2) algorithm by precomputing those possible pairs ahead of time. That is, instead of:
for each point1 in graph:
    for each point2 in graph:
        a = angle(point1, point2)
        if already visited a point with angle a from point1:
            if point2 is closer than the previous point:
                save point2 for angle a of point1
I was doing:
diffs = all possible (dx,dy) pairs
for each point in graph:
    for each diff in diffs:
        continue along diff from point until hitting a point
        store that point
The second approach looks deceptively simple, but all the complexity is in the "all possible (dx,dy) pairs" and "continue along diff" lines. Look at all this complexity just for finding the (dx,dy) pairs:
func allStepsFrom(g grid, from geom.Pt2, dx, dy int) []geom.Pt2 {
	var steps []geom.Pt2
	reachable := make(map[geom.Pt2]bool)
	for i := 0; i < g.height; i++ {
		for j := 0; j < g.width; j++ {
			add := false
			d := geom.Pt2{dx * j, dy * i}
			if d == geom.Zero2 {
				continue
			}
			for p := from.Add(d); inBounds(g, p); p = p.Add(d) {
				if !reachable[p] {
					add = true
					reachable[p] = true
				}
			}
			if add {
				steps = append(steps, d)
			}
		}
	}
	return steps
}

func allSteps(g grid) []geom.Pt2 {
	var steps []geom.Pt2
	steps = append(steps, allStepsFrom(g, geom.Pt2{0, g.height}, 1, -1)...)
	steps = append(steps, allStepsFrom(g, geom.Pt2{0, 0}, 1, 1)...)
	steps = append(steps, allStepsFrom(g, geom.Pt2{g.width, g.height}, -1, -1)...)
	steps = append(steps, allStepsFrom(g, geom.Pt2{g.width, 0}, -1, 1)...)
	return steps
}
So here we're starting at each of the four corners of the map and finding the (dx,dy) pairs that result in being able to visit every point from each corner.
Needless to say, this also burned a ton of time in debugging. Now, once we have all those steps computed, we visit each point and proceed along the step lines:
func visit(visible map[geom.Pt2]int, g grid, p geom.Pt2, steps []geom.Pt2) {
	for _, d := range steps {
		var cur geom.Pt2
		for cur = p.Add(d); inBounds(g, cur) && !g.points[cur]; cur = cur.Add(d) {
		}
		if g.points[cur] {
			visible[p]++
		}
	}
}

func countVisible(g grid, steps []geom.Pt2) map[geom.Pt2]int {
	counts := make(map[geom.Pt2]int)
	for p := range g.points {
		visit(counts, g, p, steps)
	}
	return counts
}
To find the "best" point for placing the space station, we just find the point with the highest visible count:
func bestPoint(g grid, counts map[geom.Pt2]int) (geom.Pt2, int) {
	var best geom.Pt2
	count := 0
	for p := range g.points {
		if c := counts[p]; c > count {
			best = p
			count = c
		}
	}
	return best, count
}
Part 2 required switching to angles anyway. For each vaporize loop around the chosen space station position, we find all the points in the grid that are visible using the steps that we already computed, and do this in a loop until we've eliminated every asteroid position from the grid:
func vaporizeAll(g grid, from geom.Pt2, steps []geom.Pt2) []geom.Pt2 {
	ordered := make([]geom.Pt2, len(g.points))
	for vaporized := 0; vaporized < len(g.points)-1; {
		toVaporize := reachableFrom(g, from, steps)
		for _, p := range toVaporize {
			g.points[p] = false
			ordered[vaporized] = p
			vaporized++
		}
	}
	return ordered
}
We need the reachable points in increasing angle order starting from vertical, though, and my angle calculation doesn't produce an angle of zero for points directly above the starting point, so we have to (1st) find all reachable points, (2nd) sort them by angle from the space station, and (3rd) reorder them starting from the first one that has an angle greater than pi/2:
type byAngleFrom struct {
	p  geom.Pt2
	ps []geom.Pt2
}

func (points byAngleFrom) Len() int {
	return len(points.ps)
}

func angle(p1, p2 geom.Pt2) float64 {
	return math.Atan2(float64(p2.Y-p1.Y), float64(p2.X-p1.X))
}

func (points byAngleFrom) Less(i, j int) bool {
	to1, to2 := points.ps[i], points.ps[j]
	a1 := angle(points.p, to1)
	a2 := angle(points.p, to2)
	return a1 <= a2
}

func (points byAngleFrom) Swap(i, j int) {
	points.ps[j], points.ps[i] = points.ps[i], points.ps[j]
}

func reachableFrom(g grid, from geom.Pt2, steps []geom.Pt2) []geom.Pt2 {
	var to []geom.Pt2
	for _, d := range steps {
		var p geom.Pt2
		for p = from.Add(d); inBounds(g, p) && !g.points[p]; p = p.Add(d) {
		}
		if g.points[p] {
			to = append(to, p)
		}
	}
	sort.Sort(byAngleFrom{from, to})
	var j int
	for j = 0; j < len(to) && angle(from, to[j]) < math.Pi/2.0; j++ {
	}
	ordered := make([]geom.Pt2, len(to))
	for i := 0; i < len(to); i++ {
		ix := (j + i) % len(to)
		ordered[i] = to[ix]
	}
	return ordered
}
This is just too much. When I finally finished and got the right answer I knew I had to come back and improve it. At work I talked it through with a colleague, who mentioned normalizing angles using greatest common divisors - which addresses a concern I had, namely that storing points in a map by the slope alone could have issues with floating point equality. Normalizing the slopes using greatest common divisors addresses this: 3/15 and 5/25 produce the same slope, 1/5, and so turning it into a floating point number and doing trigonometry on that, even with the loss of precision, will produce one consistent angle.
So I came back at lunch and improved things drastically. First, avoiding all
the allSteps
precomputation complexity, reachability from a given point is a
simple matter of iterating over every point, finding the angle from the
starting one, and storing it as reachable as long as that angle has never been
seen or it's closer than the previous one for that angle:
func angle(dy, dx int) float64 {
	g := ints.Abs(ints.Gcd(dx, dy))
	if g == 0 {
		return math.NaN()
	}
	return math.Atan2(float64(dy/g), float64(dx/g))
}

func reachable(g grid, from geom.Pt2) map[float64]geom.Pt2 {
	ps := make(map[float64]geom.Pt2)
	for p1, ok1 := range g.points {
		if !ok1 {
			continue
		}
		if p1 == from {
			continue
		}
		s := angle(p1.Y-from.Y, p1.X-from.X)
		if p2, ok2 := ps[s]; !ok2 || from.Dist(p1) < from.Dist(p2) {
			ps[s] = p1
		}
	}
	return ps
}
Finding the best location for the space station now involves picking the point that has the most reachable points (we just ignore the angles for this):
func maxReachable(g grid) (geom.Pt2, int) {
	maxPt, max := geom.Zero2, 0
	for p, ok := range g.points {
		if !ok {
			continue
		}
		to := reachable(g, p)
		if l := len(to); l > max {
			max, maxPt = l, p
		}
	}
	return maxPt, max
}
Sorting by angle can avoid the sort.Interface implementation and just sort an angle slice directly, since we have a map from angles to points, and then we can easily retrieve the points in anglesorted order as before:
func sortedByAngle(ps map[float64]geom.Pt2) []geom.Pt2 {
	sorted := make([]float64, 0, len(ps))
	for a := range ps {
		sorted = append(sorted, a)
	}
	sort.Float64s(sorted)
	var j int
	for j = 0; j < len(sorted) && sorted[j] < math.Pi/2.0; j++ {
	}
	byAngle := make([]geom.Pt2, len(ps))
	for i := 0; i < len(sorted); i++ {
		a := sorted[(i+j)%len(sorted)]
		byAngle[i] = ps[a]
	}
	return byAngle
}
Now we can vaporize, as before, by repeatedly finding reachable asteroid points from the space station and removing them from the grid in angleorder:
func vaporize(g grid, from geom.Pt2) []geom.Pt2 {
	vaporized := make([]geom.Pt2, len(g.points)-1)
	for i := 0; i < len(g.points)-1; {
		ps := reachable(g, from)
		sorted := sortedByAngle(ps)
		for _, p := range sorted {
			vaporized[i] = p
			g.points[p] = false
			i++
		}
	}
	return vaporized
}
This is a lot simpler: compare before with after.
I think today was revenge for finishing day 9 in just a half an hour :)
Day 11
Using all three of my extracted libraries today (intcode, ints, and geom) to paint the tiles using the programmable robot. I started with some enums (not really necessary and it inflates the code quite a bit, but who cares :) I'll omit those here because they're clear below.
The core of the solution is an I/O loop, providing the intcode machine with
inputs according to the color of the tile it's on and reading the (color,
direction) outputs until the channel is closed. I store the current colors in
a map[geom.Pt2]color
and the current position and orientation of the robot.
func run(data []int64, initial color) grid {
	in := make(chan int64)
	out := intcode.RunProgram(data, in)
	g := grid(make(map[geom.Pt2]color))
	p := geom.Zero2
	g[p] = initial
	o := UP
loop:
	for {
		select {
		case c, ok := <-out:
			if !ok {
				break loop
			}
			g[p] = color(c)
			dir := direction(<-out)
			o = turn(o, dir)
			p = move(p, o)
		case in <- int64(g[p]):
		}
	}
	return g
}
Channels are fun :)
Okay, so for part 1 we just need the number of tiles that were painted:
func main() {
	data, err := intcode.ReadProgram(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	g := run(data, BLACK)
	fmt.Println(len(g))
}
Using the map makes this easy because it only contains values that were explicitly written. Note that we kick off the machine with the color that should be stored at position (0,0), which is the position we use for the robot's initial tile. For part 1 we use BLACK.
For part 2 we kick it off with WHITE instead and print the resulting grid. This requires finding the bounds of the space explored by the robot, after which we can iterate over every position within those bounds and retrieve its color from the map we created above. Since Go maps return a zero value when a key isn't present, and the zero value for a color is BLACK, this is easy.
func printGrid(g grid) {
	minX, minY := math.MaxInt64, math.MaxInt64
	maxX, maxY := math.MinInt64, math.MinInt64
	for p := range g {
		minX, maxX = ints.Min(minX, p.X), ints.Max(maxX, p.X)
		minY, maxY = ints.Min(minY, p.Y), ints.Max(maxY, p.Y)
	}
	for row := maxY; row >= minY; row-- {
		for col := minX; col <= maxX; col++ {
			p := geom.Pt2{col, row}
			switch g[p] {
			case BLACK:
				fmt.Print(" ")
			case WHITE:
				fmt.Print("X")
			}
		}
		fmt.Println()
	}
}
That's it :) Code is on GitHub.
Day 12
Another revenge day for yesterday's easy one. I think it took me five hours. The first part only took about a half hour, after which I gave part 2 a shot without changing anything  which did not work, of course. Let me walk through part 1 before getting to part 2 and the mistakes I made and the problems I ran into there.
Okay, so for part 1, each moon has position and velocity:
type moon struct {
	p, v geom.Pt3
}
We simulate the system by repeatedly stepping through the gravity and velocity updates:
func step(ms []moon) {
	applyGravity(ms)
	applyVelocity(ms)
}

func simulate(ms []moon, n int) []moon {
	ms2 := make([]moon, len(ms))
	copy(ms2, ms)
	for i := 0; i < n; i++ {
		step(ms2)
	}
	return ms2
}
applyGravity is a bit verbose:
func applyGravity(ms []moon) {
	for i := 0; i < len(ms)-1; i++ {
		for j := i + 1; j < len(ms); j++ {
			applyGravityPair(&ms[i], &ms[j])
		}
	}
}

func applyGravityPair(m1, m2 *moon) {
	if m1.p.X < m2.p.X {
		m1.v.X, m2.v.X = m1.v.X+1, m2.v.X-1
	} else if m1.p.X > m2.p.X {
		m1.v.X, m2.v.X = m1.v.X-1, m2.v.X+1
	}
	if m1.p.Y < m2.p.Y {
		m1.v.Y, m2.v.Y = m1.v.Y+1, m2.v.Y-1
	} else if m1.p.Y > m2.p.Y {
		m1.v.Y, m2.v.Y = m1.v.Y-1, m2.v.Y+1
	}
	if m1.p.Z < m2.p.Z {
		m1.v.Z, m2.v.Z = m1.v.Z+1, m2.v.Z-1
	} else if m1.p.Z > m2.p.Z {
		m1.v.Z, m2.v.Z = m1.v.Z-1, m2.v.Z+1
	}
}
But velocity is simple:
func applyVelocity(ms []moon) {
	for i := range ms {
		ms[i].p.TranslateBy(ms[i].v)
	}
}
Finding the total system energy is just a loop, vector norms, and arithmetic:
func moonEnergy(m moon) int {
	return m.p.ManhattanNorm() * m.v.ManhattanNorm()
}

func energy(ms []moon) int {
	total := 0
	for _, m := range ms {
		total += moonEnergy(m)
	}
	return total
}
That's it for part 1.
func main() {
	ms := readPoints(os.Args[1])
	fmt.Println(energy(simulate(ms, 1000)))
}
Let me start part 2 by saying that my approach was to track every state we've seen so far, in case the cycle we eventually find starts from some non-initial state; that is, I thought that it could take a while for all the moons to settle into a rhythm, and the cycle would be some subset (s_m, s_(m+1), ..., s_n) of states from the entire chain (s_1, s_2, ..., s_n). More on that later.
When I tried to use this for part 2, it ran out of memory. Not surprising, considering we were warned in the problem description that we'd need to come up with a more efficient simulation. I was keeping each previous total system state (a [4]moon) in a map[[4]moon]bool, but each moon is modeled as 6 int64s (three for each of position and velocity), which is 48 bytes per moon, or 192 bytes per state, and once we're tracking 4 billion states (per test case 2), that's already about a terabyte of data. Not feasible for a single hashmap on a Chromebook.
My next idea was to try to write out the update equations and see if there was some sort of trick. I noticed, for example, that
pos(moon, n) = pos(moon, 0) + sum(vel(moon, k) for k = 1..n)
vel(moon, n) = sum(diffs(moon, moons, k) for k = 1..n)
but this didn't help. Ideally this would lead to some sort of linear equation,
something to be solved with matrices, but the fact that the diffs depend on
sign(pos(moon1) - pos(moon2))
makes this hard, I think. I don't think that
can be modeled with a linear equation.
After this I spent a bit of time trying to reduce the size of the state. If every moon could fit into a single int64, for example, then the entire state would only be 32 bytes, but this is still dominated by the need to potentially store 4+ billion states: still >100 GB, still not feasible for my Chromebook.
I talked with a colleague of mine, Andrew, who is also doing Advent of Code. He argued that we don't actually need to track every previous state, because a cycle must necessarily return to the initial state. This wasn't clear to me, but he was pretty certain about it and solved the problem with it, so I went ahead and modified my code to assume this, hoping to patch it up later if necessary. But now, instead of running out of memory, the program just sat and churned. Adding some logging showed that there was no way it would solve even the second test case in a reasonable amount of time.
It was at this point that Andrew pointed out that we can simulate each (x,y,z) dimension independently. This is nice because it means we can use goroutines and simulate on three cores instead of one - and because each goroutine only needs to find a cycle in that dimension, not all three, making the cycle lengths much shorter. After making these changes, my program was able to find the three-dimensional cycle lengths in under 50ms - a drastic difference!
As implemented, the per-dimension simulation never copies data that doesn't fit into single machine words (as far as I can tell), as opposed to doing math on geom.Pt3 values that each copy three int64s when methods are called on them for basic arithmetic. Additionally, finding the per-dimension cycles simply requires drastically fewer steps - in my case, by 10 or so orders of magnitude. This makes the problem tractable.
On to the code :)
First we flatten the state so that we can treat each coordinate's position and velocity separately:
type state struct {
	px, py, pz, vx, vy, vz [4]int64
}
Each simulation step is pretty straightforward now:
func applyGravity(px, vx *[4]int64) {
	for i := 0; i < len(px)-1; i++ {
		for j := i + 1; j < len(px); j++ {
			if px[i] < px[j] {
				vx[i] += 1
				vx[j] -= 1
			} else if px[i] > px[j] {
				vx[i] -= 1
				vx[j] += 1
			}
		}
	}
}

func applyVelocity(px, vx *[4]int64) {
	for i := range px {
		px[i] += vx[i]
	}
}

func step(px, vx *[4]int64) {
	applyGravity(px, vx)
	applyVelocity(px, vx)
}
To find a loop for a given coordinate, we step until we see a position and velocity vector that matches the initial one, then send the iteration count out on a channel. We do this for each coordinate, then pull the per-dimension iteration counts out of the channel and compute the least common multiple - the smallest number that is a multiple of all three, which should be the number of steps it takes to cycle all three dimensions.
func findLoopCoord(px, vx [4]int64, ch chan<- int64) {
	pi, vi := px, vx
	for i := int64(1); ; i++ {
		step(&px, &vx)
		if px == pi && vx == vi {
			ch <- i
			return
		}
	}
}

func findLoop(s state) int64 {
	ch := make(chan int64)
	defer close(ch)
	go findLoopCoord(s.px, s.vx, ch)
	go findLoopCoord(s.py, s.vy, ch)
	go findLoopCoord(s.pz, s.vz, ch)
	return lcm(<-ch, <-ch, <-ch)
}
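The lcm helper isn't shown in the post; the standard sketch goes through the Euclidean GCD, dividing before multiplying to reduce overflow risk:

```go
package main

import "fmt"

// gcd computes the greatest common divisor via the Euclidean algorithm.
func gcd(a, b int64) int64 {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// lcm2 divides by the gcd before multiplying so the intermediate
// value stays as small as possible.
func lcm2(a, b int64) int64 {
	return a / gcd(a, b) * b
}

// lcm of three values is just the pairwise lcm applied twice.
func lcm(a, b, c int64) int64 {
	return lcm2(lcm2(a, b), c)
}

func main() {
	fmt.Println(lcm(18, 28, 44)) // 2772
}
```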
That finally does it.
I should mention that, after this code appeared to be working (i.e. it solved test case 1 from the problem description), it still seemed to have problems. The answer it gave was wrong, and it turned out that it wasn't giving the right answer for test case 2, either. I spent about an hour debugging this, mostly fiddling with the least common multiple implementation and adding and removing printf statements. Well, it turns out that when I was trying my bit-packing approach (mentioned above), I switched the type of each coordinate to int8. Suddenly, while staring at debugging output, I noticed that many of the values were bigger than I'd expected, outside of the range [-128,127], and it dawned on me that I had integer overflow. s/int8/int64/ fixed the problem. I can assure you that this mistake is now forever burned into my brain.
Okay, so after all that, it occurred to me on my commute home why the system will return to its initial state and we don't need to worry about some intermediate, non-initial state forming the beginning of the cycle. Here's why:
Suppose there's a chain of states (s_0, ..., s_(m-1), s_m, s_(m+1), ..., s_n, s_m), so that s_m has two distinct parent states: s_(m-1), resulting from s_0, and s_n.
But as mentioned above,
state(n) = {pos(n), vel(n)}
pos(n) = pos(n-1) + vel(n)
vel(n) = vel(n-1) + diffs(n-1)
diffs = /* something only involving pos(n-1) */
So state(n-1) is always uniquely determined by state(n), which means no state that begins a cycle can have two distinct parent states.
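Concretely, this says the step function is invertible: given state(n), undo the velocity update first, and the gravity diffs then depend only on the positions just recovered. A one-dimensional sketch of the round trip (hypothetical code mirroring the applyGravity/applyVelocity pair above, not from the post):

```go
package main

import "fmt"

// step applies one simulation step: gravity, then velocity.
func step(px, vx []int64) {
	// gravity: each pair pulls velocities toward each other
	for i := 0; i < len(px)-1; i++ {
		for j := i + 1; j < len(px); j++ {
			if px[i] < px[j] {
				vx[i]++
				vx[j]--
			} else if px[i] > px[j] {
				vx[i]--
				vx[j]++
			}
		}
	}
	// velocity: positions move by the updated velocities
	for i := range px {
		px[i] += vx[i]
	}
}

// stepBack inverts step by undoing the updates in reverse order.
func stepBack(px, vx []int64) {
	// undo velocity first: pos(n-1) = pos(n) - vel(n)
	for i := range px {
		px[i] -= vx[i]
	}
	// the diffs depend only on pos(n-1), which we just recovered,
	// so we can now undo gravity: vel(n-1) = vel(n) - diffs(n-1)
	for i := 0; i < len(px)-1; i++ {
		for j := i + 1; j < len(px); j++ {
			if px[i] < px[j] {
				vx[i]--
				vx[j]++
			} else if px[i] > px[j] {
				vx[i]++
				vx[j]--
			}
		}
	}
}

func main() {
	px := []int64{-1, 2, 4, 3}
	vx := make([]int64, 4)
	step(px, vx)
	stepBack(px, vx)
	fmt.Println(px, vx) // back to the initial state
}
```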
That was a bit convoluted; it's been a long time since I had to write a proof :)
Day 13
This was the coolest thing yet. For rendering the screen and getting key events I used the library tcell, which has a super-simple API and worked with zero issues. It took me maybe ten minutes from adding the import statement to drawing the entire screen properly. Handling key events properly took a bit longer, particularly since it seems that most curses-style libraries don't support key-up vs. key-down events, just "keypress." Anyway, on to the code, before I talk about how I implemented beating the game.
For tiles and joystick position I defined some enums, as usual:
type TileId int

const (
	EMPTY  TileId = 0
	WALL   TileId = 1
	BLOCK  TileId = 2
	PADDLE TileId = 3
	BALL   TileId = 4
)

type JoystickPos int

const (
	NEUTRAL JoystickPos = 0
	LEFT    JoystickPos = -1
	RIGHT   JoystickPos = 1
)
For overall game state and communication with the wrapper main I added a GameState struct with the score and the raw tiles in a map (since for part 1 we need to know how many tiles were written; probably not necessary anymore, but I'd added it in part 1 and didn't remove it):
type ScreenTiles map[geom.Pt2]TileId

type GameState struct {
	Joystick JoystickPos
	Score    int
	Tiles    ScreenTiles
}
Before we go into the main loop, we set up the channels: input and output as usual for the intcode machine, then a channel for reading key events from the screen - implemented with a goroutine that does a blocking read and emits key events on a channel - and finally a time.Tick channel for controlling the I/O loop speed:
func readEvents(screen tcell.Screen) chan *tcell.EventKey {
	ch := make(chan *tcell.EventKey)
	go func() {
		for {
			event := screen.PollEvent()
			switch e := event.(type) {
			case *tcell.EventKey:
				ch <- e
			}
		}
	}()
	return ch
}

func Play(
	data []int64,
	screen tcell.Screen,
	frameDelay time.Duration,
	joystickInit JoystickPos,
) (GameState, error) {
	in := make(chan int64)
	defer close(in)
	out := intcode.RunProgram(data, in)
	var events chan *tcell.EventKey
	if screen != nil {
		screen.Clear()
		events = readEvents(screen)
	}
	state := GameState{
		Tiles:    ScreenTiles(make(map[geom.Pt2]TileId)),
		Joystick: joystickInit,
	}
	tick := time.Tick(frameDelay)
	/* snip */
}
After that we go into the main loop, which wraps a select over the events channel (to record the current joystick state), the tick channel (to send the current joystick channel if the machine is trying to read it, otherwise continue without blocking), and the output channel (to read the screen updates and detect halting):
	/* snip */
loop:
	for {
		select {
		case e := <-events:
			switch e.Key() {
			case tcell.KeyCtrlC:
				break loop
			case tcell.KeyLeft:
				state.Joystick = LEFT
			case tcell.KeyRight:
				state.Joystick = RIGHT
			}

		case <-tick:
			select {
			case in <- int64(state.Joystick):
				state.Joystick = joystickInit
			default:
				continue
			}

		case x, ok := <-out:
			if !ok {
				break loop
			}
			y, z := <-out, <-out
			if x == -1 && y == 0 {
				if z > 0 {
					state.Score = int(z)
				}
			} else {
				tile := TileId(z)
				state.Tiles[geom.Pt2{int(x), int(y)}] = tile
				if screen != nil {
					draw(screen, int(x), int(y), tile)
				}
			}
		}
	}

	return state, nil
}
That's the core of the game logic. Rendering the various tiles uses some maps with predetermined characters:
func draw(screen tcell.Screen, x, y int, tile TileId) {
	screen.SetContent(x, y, tileToRune[tile], nil, 0)
	screen.Show()
}

var tileToRune = map[TileId]rune{
	EMPTY:  ' ',
	WALL:   '@',
	BLOCK:  'X',
	PADDLE: '-',
	BALL:   'o',
}
The full code is on GitHub. The wrapper main function is pretty boring, but can be found there too, both headless to solve parts 1 and 2 and interactive to play the game.
Okay, so how to beat the game without having to do it manually (because I'm terrible at it)?
I had three ideas:
- Write an AI for the paddle
- Disassemble the input program and modify it so the ball always bounces
- Modify the machine so that the program thinks the ball should bounce
The AI idea sounded fun, but the other two sounded like more fun, considering I've never done anything like them before. I didn't want to write a disassembler, and even though I know some people already wrote them for intcode and posted them on Reddit, I wanted a self-contained solution. So I settled on the third idea, to fool the program.
To do this I added logging to the machine so that it prints each instruction, including locations of memory reads and writes, at each step. I piped the logging to a file and played the game once (breaking a few bricks before losing), then opened the log. Looking through the logs, I saw that immediately before output instructions for redrawing the paddle and the ball the instructions looked very similar, with just a couple of addresses different between the two.
I looked for the instructions that precede the (x,y,z) output instructions, specifically the ones that precede updates for z=3 (PADDLE) and z=4 (BALL). Compare these lines (with minor changes), preceding each write of (x,y,3):
[573] adjrel imm(4)
[575] jmpif imm(1) rel(0)
[138] add pos(392) pos(384) pos(392)
[142] mul imm(1) pos(392) rel(1)
[146] add imm(0) imm(21) rel(2)
[150] mul imm(3) imm(1) rel(3)
[154] add imm(0) imm(161) rel(0)
[158] jmpif imm(1) imm(549)
[549] adjrel imm(4)
[551] mul rel(2) imm(42) pos(566)
[555] add rel(3) pos(566) pos(566)
[559] add imm(639) pos(566) pos(566)
[563] add imm(0) rel(1) pos(1538)
[567] print rel(3)
wrote value: 17
[569] print rel(2)
wrote value: 21
[571] print rel(1)
wrote value: 3
With these lines, preceding (with minor changes) each write of (x,y,4):
[573] adjrel imm(4)
[575] jmpif imm(1) rel(0)
[338] add pos(388) pos(390) pos(388)
[342] add pos(389) pos(391) pos(389)
[346] mul imm(1) pos(388) rel(1)
[350] mul pos(389) imm(1) rel(2)
[354] mul imm(4) imm(1) rel(3)
[358] add imm(0) imm(365) rel(0)
[362] jmpif imm(1) imm(549)
[549] adjrel imm(4)
[551] mul rel(2) imm(42) pos(566)
[555] add rel(3) pos(566) pos(566)
[559] add imm(639) pos(566) pos(566)
[563] add imm(0) rel(1) pos(1586)
[567] print rel(3)
wrote value: 23
[569] print rel(2)
wrote value: 22
[571] print rel(1)
wrote value: 4
Since we know the output order is (x,y,z), we can trace back from the print instruction for the x-coordinate (it's the first one of the three) and see that in both cases it's writing a value from rel(3), which, after the adjrel imm(4) statement in both traces, should point to the value that was loaded previously into rel(1) from position 392 (for the paddle) or position 388 (for the ball). Okay, so it seems like the ball's x-coordinate is stored at address 388. Why don't we just always return that value when retrieving the paddle's x-coordinate, i.e. redirect reads of address 392 to address 388?
Well, I did that in my Intcode VM, and it looks like this:
diff --git a/intcode/machine.go b/intcode/machine.go
index 9b2cc59..a40c516 100644
--- a/intcode/machine.go
+++ b/intcode/machine.go
@@ -40,6 +40,9 @@ func (m *machine) get(addr int64, md mode) int64 {
     v := m.data[addr]
     switch md {
     case pos:
+        if v == 392 {
+            v = 388
+        }
         return m.data[v]
     case imm:
         return v
This worked! Here's a video:
The paddle drawing seems a bit wonky now (it's never erased after it moves to a new position), but who cares :)
Day 14
Revenge again for the intcode problems. This took me all day, on and off. Now granted, I spent all day with my daughter, who is teething (molars) and was incredibly clingy and crabby and driving me crazy, but I still probably spent four hours on it without even solving part 1. I gave up after her naptime, and then read through some Reddit threads after her bedtime. I was, in fact, on the totally wrong track.
Okay, what was I doing wrong?
My initial thought was of something about linear programming, which I learned about back in university and haven't used since, and so I decided to avoid something that would require a bunch of research.
I latched on early to the idea of iteratively expanding and reducing the required chemicals to produce a FUEL until no more reductions were possible using the given reactions alone, then trying to work out a strategy for how best to produce unnecessary chemicals. My first idea was to do it essentially randomly, i.e. in Go's map iteration order. This produced inconsistent results, so my next idea was to recursively build excess for each remaining required chemical, recording the resulting total ore cost and then using the branch that minimized that cost. This didn't halt: also not a good strategy. Doing this recursively, particularly for the real input (which had something like ten unsatisfied quantities after non-wastefully applying rules alone), creates a combinatorial explosion. I spent a long time rewriting and staring at the above and trying to find a better way to select excess chemicals to build.
The easier approach, apparently, is to just go ahead and build the chemicals needed at each step and keep track of the excess, which can be used later to reduce the amount of chemicals to produce for some other reaction.
Types:
type quant struct {
    amt  int
    chem string
}

type reaction struct {
    out quant
    ins []quant
}
To find the amount of ore we need to build a given amount of a chemical, we first use whatever excess we have, then build however much more we need to satisfy the appropriate reaction and store any excess, then recursively find the amount of ore required to satisfy that reaction.
func oreNeeded(
    chem string, amt int,
    reacts map[string]reaction,
    waste map[string]int,
) int {
    if chem == "ORE" {
        return amt
    }

    // reuse excess production before building more
    if avail := waste[chem]; avail > 0 {
        reclaimed := ints.Min(amt, avail)
        waste[chem] -= reclaimed
        amt -= reclaimed
    }
    if amt == 0 {
        return 0
    }

    // build as much as necessary and store the excess
    react := reacts[chem]
    k := 1
    if amt > react.out.amt {
        k = divceil(amt, react.out.amt)
    }
    waste[chem] += k*react.out.amt - amt

    // recursively find the ore needed for the ingredients
    ore := 0
    for _, in := range react.ins {
        ore += oreNeeded(in.chem, k*in.amt, reacts, waste)
    }
    return ore
}
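The divceil helper used above isn't shown. It's just ceiling division on positive integers (the smallest k such that k*b >= a); here's a minimal sketch, where the name and signature match the call above but the body is my assumption, not the actual code:

```go
package main

import "fmt"

// divceil returns the ceiling of a/b for positive integers, i.e. the
// smallest k such that k*b >= a: the number of reactions needed to
// cover a required amount
func divceil(a, b int) int {
	return (a + b - 1) / b
}

func main() {
	// covering a need of 7 units with a reaction that produces 3 at a time
	fmt.Println(divceil(7, 3)) // prints 3
}
```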
This is the first day that I had no idea where I was really going. Day 12, the moon simulation, I understood, and I solved part 1 pretty easily and had a few ideas in mind for part 2 before talking it over with my colleague Andrew. Today, though, I had no idea if I was on the right track, no new ideas to try, and I was frustrated all day.
The lesson I'll draw from this is that, if the problem asks "find the ore required to build this chemical", then the code should do that: func oreRequired(chem string, amt int) int. The trick of keeping track of excess chemicals, though, is one I don't know that I'd have come up with even if I'd started off with a more straightforward approach. Well, it's good to get the practice.
Code for this solution is on GitHub.
Day 15
Okay, something fun and straightforward again! I didn't get up early today, so I just had an hour or so at naptime and then again after bedtime, but as opposed to yesterday, the time I spent was productive instead of frustrating and seemingly boundless. I immediately recognized this as a graph traversal problem, but couldn't initially decide between breadth-first and depth-first, and ended up implementing part of both before starting over in the evening.
My problem earlier in the day was in trying to both map out the space and find the shortest path at the same time, instead of first generating the map using DFS and then finding shortest paths using BFS. Trying to do both at once adds a ton of complexity. If you try to both move the droid and keep track of the shortest path to the oxygen with DFS, you have a problem when you find a shorter path to a previously-visited node: do you then recursively update the distances for everything you already visited from that node, or do you just update the predecessor of that node and compute the length later? And then, why not just do BFS anyway? If you try to do both at the same time with BFS, then you have to constantly move the droid back to the starting position each time.
After it occurred to me to first build the map and then find the path, the problem simplified drastically, as did the code.
As usual, we start with some enums:
type status int

const (
    WALL status = 0
    OK   status = 1
    OXGN status = 2
)

type direction int

const (
    NORTH direction = 1
    SOUTH direction = 2
    WEST  direction = 3
    EAST  direction = 4
)
I also abstracted away the intcode I/O into a droid struct that can move about:

func (d *droid) step(dir direction) status {
    d.in <- int64(dir)
    return status(<-d.out)
}

type droid struct {
    in  chan<- int64
    out <-chan int64
}
Okay, so to build a map of the area, we run a DFS starting from (0, 0). We assume we're already there, and then for each neighbor we move to it, mark the status of that move on the map, recursively visit it, and then step back from that neighbor and move on to the next one:
func (d *droid) visit(p geom.Pt2, m map[geom.Pt2]status) {
    // try to move to each unvisited neighbor, recurse, then return
    for dir, dp := range directions {
        next := p.Add(dp)
        if _, ok := m[next]; ok {
            continue
        }
        s := d.step(dir)
        if m[next] = s; s == WALL {
            continue
        }
        d.visit(next, m)
        d.step(opposite(dir))
    }
}

func explore(prog []int64) map[geom.Pt2]status {
    in := make(chan int64)
    out := intcode.RunProgram(prog, in)
    d := droid{in, out}
    m := map[geom.Pt2]status{geom.Zero2: OK}
    d.visit(geom.Zero2, m)
    return m
}
The function opposite just returns a direction's opposite (since we need to move back to our original spot from a neighbor), and directions is a map from each direction to the corresponding vector:
func opposite(dir direction) direction {
    switch dir {
    case NORTH:
        return SOUTH
    case SOUTH:
        return NORTH
    case WEST:
        return EAST
    case EAST:
        return WEST
    }
    log.Fatal("bad direction:", dir)
    return 0
}

var directions = map[direction]geom.Pt2{
    NORTH: geom.Pt2{0, 1},
    SOUTH: geom.Pt2{0, -1},
    WEST:  geom.Pt2{-1, 0},
    EAST:  geom.Pt2{1, 0},
}
When explore returns, we have a map of point -> status for each reachable point in the area. Now we need to find the shortest path from (0, 0) to the location of the oxygen system. To do this we use BFS over the coordinates, where the neighbors of a given node are the non-wall nodes one Manhattan step away on the grid. With BFS we always visit nodes at distance n from the starting position before visiting nodes at distance n+1. We do this using a queue, appending the neighbors of each node to the end of the queue. The code to find all shortest paths is simpler than finding a single one, so here's the entire thing:
type node struct {
    p geom.Pt2
    n int
}

func shortestPaths(from geom.Pt2, m map[geom.Pt2]status) map[geom.Pt2]int {
    // track which nodes we've visited and how far away they are
    visited := map[geom.Pt2]bool{from: true}
    dist := make(map[geom.Pt2]int)

    // keep a queue of the next nodes to visit, in sorted order, with closer
    // nodes always before further ones
    q := []node{{from, 0}}
    var nd node

    // continue as long as the queue is non-empty
    for len(q) > 0 {
        // pop the head off the queue and record its distance
        nd, q = q[0], q[1:]
        dist[nd.p] = nd.n

        // add each unvisited neighbor to the end of the visit queue, with
        // distance one greater than the distance of the current node
        for _, dp := range directions {
            nbr := nd.p.Add(dp)
            if visited[nbr] {
                continue
            }
            visited[nbr] = true
            if m[nbr] != WALL {
                q = append(q, node{nbr, nd.n + 1})
            }
        }
    }
    return dist
}
Then, to find the length of the shortest path to the oxygen system, we just return the distance of the oxygen system's node:
func findOxygen(m map[geom.Pt2]status) geom.Pt2 {
    for p, s := range m {
        if s == OXGN {
            return p
        }
    }
    log.Fatal("oxygen not found")
    return geom.Zero2
}

func shortestPath(from, to geom.Pt2, m map[geom.Pt2]status) int {
    return shortestPaths(from, m)[to]
}
Hooking it all together, we have:
func main() {
    data, err := intcode.ReadProgram(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }
    m := explore(data)
    p := findOxygen(m)
    fmt.Println(shortestPath(geom.Zero2, p, m))
}
For part 2, we want to find the distance of the furthest node, since the time it takes to reach it is the time it takes for the entire area to fill with oxygen. Since we know all the shortest path distances already, we just find the longest one of those:
func longestPath(from geom.Pt2, m map[geom.Pt2]status) int {
    max := 0
    for _, n := range shortestPaths(from, m) {
        max = ints.Max(max, n)
    }
    return max
}
This was a blast. I'd like to come back to this day after the year is over and create an animated GIF of the droid exploration and oxygen propagation. I'd like to do that for several of the problems, actually: there are a lot of nice visualizations on Reddit for any given day, and my limited experience with Go's image library makes it seem like this would be straightforward, if only I took the time to learn how :)
Full code for today is here.
Day 16
This is the longest I've spent on any day so far, but I did eventually get it on my own! The first part was straightforward: define the signal pattern and apply it to the input signal 100 times:
func coef(row, col int) int {
    switch (col / row) % 4 {
    case 0: return 0
    case 1: return 1
    case 2: return 0
    default: return -1
    }
}

func fft(signal []int, phases int) []int {
    signal = copied(signal)
    scratch := make([]int, len(signal))
    for ; phases > 0; phases-- {
        for i := 0; i < len(signal); i++ {
            sum := 0
            for j := 0; j < len(signal); j++ {
                sum += coef(i+1, j+1) * signal[j]
            }
            scratch[i] = ints.Abs(sum) % 10
        }
        signal, scratch = scratch, signal
    }
    return signal
}
But the second part took me like five hours. I first tried to find some math trick that would make it easy, since reading 650 bytes (my input size) and repeating it 10,000 times requires at least 6.5 MB, assuming you somehow avoid expanding each integer into a multi-byte int representation, and then trying to precompute the enormous coefficient matrix would take up something like that much memory squared (or at least half that, considering the matrix is triangular): this is like 36 terabytes of data! So I ended up on a Wikipedia exploration, relearning about Fast Fourier Transforms, diagonal matrices, determinants and so on, as well as revisiting basic matrix algebra with inverses and matrix decompositions, before eventually abandoning a math-heavy approach as (1) unbounded, when I wanted to definitely solve the problem today, and (2) unlikely to be necessary, since the backgrounds of Advent of Code participants vary so widely. I returned to the naive approach and tried to make it work.
The first problem was running out of memory: the slices were simply too large, as mentioned. At some point, though, while dumping stuff to the console, I noticed that the offset specified by the first seven digits was very high: most of the way through the repeated signal. That meant that keeping everything in memory was more feasible and there were many fewer elements to compute, since to find the last k elements m[n-k], m[n-k+1], ..., m[n] of a vector m, where A s = m and A is the coefficient matrix and s is the input signal, we only need to consider the last k rows of the matrix A; and since A is triangular, we only need to consider half the elements of each row of A (i.e. only need to precompute those coefficients).
So far the code looked like this:
func extractMessage(signal []int, reps, phases, offset, digits int) []int {
    // allocate the [offset, end) slice for computing the message
    msg := sliceSignal(signal, offset, len(signal)*reps)
    n := len(msg)

    // precompute the coefficients, skipping the leading zeroes
    coefs := make([][]int, n)
    for i := 0; i < n; i++ {
        coefs[i] = make([]int, n-i)
        for j := i; j < n; j++ {
            coefs[i][j-i] = coef(offset+i+1, offset+j+1)
        }
    }

    // repeatedly apply the coefficient matrix rows to the message vector
    scratch := make([]int, n)
    for ; phases > 0; phases-- {
        for i := 0; i < n; i++ {
            sum := 0
            for j := i; j < n; j++ {
                sum += (coefs[i][j-i] * msg[j])
            }
            scratch[i] = ints.Abs(sum) % 10
        }
        msg, scratch = scratch, msg
    }

    return msg[:digits]
}
This worked well, and it solved the part 2 examples in a reasonable amount of time, but it still ran out of memory on the real input during coefficient precomputation. I removed the precomputation, relying on the coef function as defined above to compute each coefficient as needed, but that was too slow: computing even a single phase took several minutes before I stopped it.
At some point here I noticed another thing in the debug output I was dumping to the console: it seemed like the coefficients were all ones! While taking a break, something occurred to me that I had noticed during my matrix math diversion: rows in the lower half of the matrix were all ones, regardless of what size matrix I wrote out for doing manual products (to look for a general formula). And then it was obvious: the first n-1 elements of the nth row are zero, and the next n elements are one, so if we're looking at a row more than half way down the matrix, the zeroes and ones make up the entire row!
This means that we can forget about the coefficients entirely and just sum up the vector elements; and if we do it starting from the last element, which is just itself, we don't even need to start the sum over at each previous element, since the sum for element a[n-k] is just a[n-k] plus sum(a[n-k+1], ..., a[n]).
This was a pretty trivial change to the code above, and it works, computing the answer very quickly! This makes sense: the only allocation now is for the repeated signal from the offset (something like 500k integers, which should be about 4 MB at 8 bytes per integer).
func extractMessage(signal []int, reps, phases, offset, digits int) []int {
    msg := sliceSignal(signal, offset, len(signal)*reps)
    n := len(msg)
    for ; phases > 0; phases-- {
        sum := 0
        for i := n - 1; i >= 0; i-- {
            sum += msg[i]
            msg[i] = ints.Abs(sum) % 10
        }
    }
    return msg[:digits]
}
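For completeness, the sliceSignal helper used in both versions above isn't shown; presumably it materializes the [from, to) range of the input signal repeated end-to-end. A sketch under that assumption (the signature matches the calls above, the body is mine, not the actual code):

```go
package main

import "fmt"

// sliceSignal returns elements [from, to) of the input signal repeated
// end-to-end, indexing into the base signal modulo its length
func sliceSignal(signal []int, from, to int) []int {
	out := make([]int, 0, to-from)
	for i := from; i < to; i++ {
		out = append(out, signal[i%len(signal)])
	}
	return out
}

func main() {
	// the last 5 elements of 1,2,3 repeated: 1,2,[3,1,2,3,1]
	fmt.Println(sliceSignal([]int{1, 2, 3}, 2, 7)) // prints [3 1 2 3 1]
}
```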
Even though this took me forever I'm proud of it! This felt like a typical problem solving process for something nontrivial: explore the problem space a bit with some simpler examples/implementation, read the theory and try to apply it a bit, produce a naive implementation, use some heuristics based on understanding the input data, make space/time tradeoffs to get something usably efficient, then eventually find another heuristic that makes the problem tractable based on data exploration and debugging and writing things out by hand. That kind of realization really requires having spent enough time with the problem and data, in my experience!
Looking at the global leaderboard for this problem, it seems like this was the hardest problem yet. So I'm very happy that I solved it myself! Day 14 remains the only one that I couldn't figure out at all.
Full code for today is here.
Day 17
Another straightforward part 1 and very long part 2 that required staring at the data until devising something based on the properties of the specific case :)
For part 1, finding intersections, we start by reading the grid from the Intcode machine.
type grid struct {
    height, width int
    g             map[geom.Pt2]rune
}

func readGridFrom(out <-chan int64) (grid, bool) {
    g := grid{g: make(map[geom.Pt2]rune)}
    var width int
    for ch := range out {
        if ch == '\n' {
            if width > 0 {
                g.height++
                g.width = width
                width = 0
                continue
            } else {
                return g, true
            }
        }
        g.g[geom.Pt2{width, g.height}] = rune(ch)
        width++
    }
    return grid{}, false
}
Then we construct an adjacency list, where two points are adjacent if their Manhattan distance is 1 and neither is empty space (ASCII '.').
func (g grid) neighbors(p geom.Pt2) []geom.Pt2 {
    var nbrs []geom.Pt2
    for _, nbr := range p.ManhattanNeighbors() {
        if c, ok := g.g[nbr]; ok && c != '.' {
            nbrs = append(nbrs, nbr)
        }
    }
    return nbrs
}

func readGraph(g grid) map[geom.Pt2][]geom.Pt2 {
    m := make(map[geom.Pt2][]geom.Pt2)
    for i := 0; i < g.height; i++ {
        for j := 0; j < g.width; j++ {
            p := geom.Pt2{j, i}
            if c := g.g[p]; c == '.' {
                continue
            }
            var edges []geom.Pt2
            for _, nbr := range g.neighbors(p) {
                edges = append(edges, nbr)
            }
            m[p] = edges
        }
    }
    return m
}
Now we can find intersections by simply finding all points that have more than two neighbors, and the alignment sum is computed over those points:
func intersections(m map[geom.Pt2][]geom.Pt2) []geom.Pt2 {
    var ps []geom.Pt2
    for p, edges := range m {
        if len(edges) > 2 {
            ps = append(ps, p)
        }
    }
    return ps
}

func alignmentSum(g grid) int {
    m := readGraph(g)
    ps := intersections(m)
    sum := 0
    for _, p := range ps {
        sum += p.X * p.Y
    }
    return sum
}
For part 2, let me start with the framework for providing programs to the robot and reading its responses. All I/O is line-based, so we need to be able to read and write entire strings at a time, terminated by '\n':
func writeLine(ch chan<- int64, line string) {
    for _, c := range line {
        ch <- int64(c)
    }
    ch <- int64('\n')
}

func readLine(ch <-chan int64) string {
    var s []rune
    for {
        c := <-ch
        if c == '\n' {
            return string(s)
        }
        s = append(s, rune(c))
    }
}
To run the program headless we have to read the grid once at the beginning, read the input prompt lines before each input line, and then ignore all output but the last value. This took a bit of trial and error to figure out.
func computeDust(data []int64, prog [4]string) int64 {
    data = ints.Copied64(data)
    data[0] = 2
    in := make(chan int64)
    out := intcode.RunProgram(data, in)
    readGridFrom(out)
    for _, line := range prog {
        readLine(out)
        writeLine(in, line)
    }
    readLine(out)
    writeLine(in, "n")
    var answer int64
    for c := range out {
        answer = c
    }
    return answer
}
Okay! So for part 2, in the end I had to figure out the program by hand. Let me walk through how that happened.
Initially I misread the problem and thought the program had to navigate the robot back to the starting position. So, after writing a DFS traversal of the scaffolding that always walks back from successor nodes (to make sure we get back to the beginning), the path was very long. At this point I assumed I definitely had to figure out something clever, because there seemed to be simply too many different path components. So I read about different compression techniques for a while, read the problem again, and then noticed that the robot goes back to its initial position on its own.
Then I noticed by staring at the grid that actually it's possible to visit every point without doing anything clever at all:
..............#########..........................
..............#.......#..........................
..............#.......#..........................
..............#.......#..........................
..............#.......#..........................
..............#.......#..........................
..............#.......#..........................
..............#.......#..........................
..............#...#####.#######.....#############
..............#...#.....#.....#.....#...........#
..............#...#.....#.....#.....#...........#
..............#...#.....#.....#.....#...........#
..............#######...#...#############.......#
..................#.#...#...#.#.....#...#.......#
..................#.#...#############...#.......#
..................#.#.......#.#.........#.......#
..................#.#.......#.#.........#########
..................#.#.......#.#..................
..............#############.#.#...#######........
..............#...#.#.....#.#.#...#.....#........
############^.#...#############...#.....#........
#.............#.....#.....#.#.....#.....#........
#.............#.....#.....#.#.....#.....#........
#.............#.....#.....#.#.....#.....#........
#.............#######.....#.#############........
#.........................#.......#..............
#.....#########...........#.......#..............
#.....#.......#...........#.......#..............
#.....#.......#...........#.......#..............
#.....#.......#...........#.......#..............
#.....#.......#############.......#######........
#.....#.................................#........
#######.................................#........
........................................#........
........................................#........
........................................#........
........................................#........
........................................#........
........................................#........
........................................#........
........................................#........
........................................#........
................................#########........
You can visit every point by simply always going forward until hitting a wall and taking the only available turn. I modified the path generation to do this, which is much simpler than DFS. The resulting path has a lot of common subsequences (rewritten here in the form that I eventually was able to reduce, with three repeated sequences):
L,12,L,12,L,6,L,6,
R,8,R,4,L,12,
L,12,L,12,L,6,L,6,
L,12,L,6,R,12,R,8,
R,8,R,4,L,12,
L,12,L,12,L,6,L,6,
L,12,L,6,R,12,R,8,
R,8,R,4,L,12,
L,12,L,12,L,6,L,6,
L,12,L,6,R,12,R,8,
This is small enough to do by hand. I started by finding the longest element I could see immediately, "L,12,L,6", extracted it to a "variable", and proceeded from there. It didn't take long to get an answer:
A,B,A,C,B,A,C,B,A,C
A: L,12,L,12,L,6,L,6
B: R,8,R,4,L,12
C: L,12,L,6,R,12,R,8
This worked. I don't believe it's a coincidence that the problems today and yesterday required exploiting specific properties of the data rather than solving some hard, general problem. It's a good lesson for problem solving in general and also reflects the real world :)
Code is here.
Day 18
I've not finished this one yet, despite having spent all my free time across three days now on it. Granted, that wasn't a lot of time, since my wife is sick and my toddler is sick, but it was still like eight hours in total. I finally got part 1 at midnight last night, after reading a hint on Reddit: it took literally like five lines of code to memoize my recursive algorithm by reducing the amount of state I was tracking. But I'll finish part 2 before writing this up with all my attempts.
Edit (21 Dec): Finally finished this one! I ended up tossing the precomputed adjacency-list-based graph representation in favor of just redoing the BFS from each point every time, and the result is... extremely slow! But it works for both parts and eliminated some complexity around how I was keeping track of positions. I could bring it back and drastically speed up the program by not modifying the graph when keys/doors are removed and instead just skipping them when finding neighbors, but I'm honestly tired of working on it and would prefer to learn the lessons involved and move on :)
In the end this is just DFS from each current position (just the one position for part 1, then the four positions in part 2), where the neighbors and distances are found using BFS, and we use memoization on the DFS recursion (where the state is just the position vector and remaining keys) to avoid reexploring previous paths.
I don't feel like annotating the code this time, so I'll just link to the source: GitHub
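That said, the memoization state can be surprisingly small: the minimum remaining cost depends only on the current position vector and the set of keys still missing. Here's a sketch of the shape of that recursion; the types and the reachable callback are hypothetical stand-ins (in the real solution, reachable would run a BFS from the current positions), not code from my repository:

```go
package main

import "fmt"

// state is everything the remaining cost depends on: where each robot
// is, and which keys are still missing (kept sorted for a canonical form)
type state struct {
	pos  string // one byte per robot position
	keys string // keys still to collect, sorted
}

// move is one reachable key-grab: its distance and the resulting state
type move struct {
	dist int
	next state
}

// minSteps is DFS over states with memoization; identical states are
// only ever explored once
func minSteps(s state, memo map[state]int, reachable func(state) []move) int {
	if s.keys == "" {
		return 0
	}
	if d, ok := memo[s]; ok {
		return d
	}
	best := -1
	for _, mv := range reachable(s) {
		if d := mv.dist + minSteps(mv.next, memo, reachable); best < 0 || d < best {
			best = d
		}
	}
	memo[s] = best
	return best
}

func main() {
	// a toy world: from the start we can grab key a (2 steps), then b (3 steps)
	reachable := func(s state) []move {
		if s == (state{"@", "ab"}) {
			return []move{{2, state{"a", "b"}}}
		}
		if s == (state{"a", "b"}) {
			return []move{{3, state{"b", ""}}}
		}
		return nil
	}
	fmt.Println(minSteps(state{"@", "ab"}, map[state]int{}, reachable)) // prints 5
}
```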
Day 19
Straightforward part 1: we just repeatedly run the program with a given (x,y) pair and map out the 50x50 space:
type drone struct {
    prog []int64
}

func (d drone) test(x, y int, debug bool) state {
    prog := ints.Copied64(d.prog)
    in := make(chan int64)
    defer close(in)
    out := intcode.Run(prog, in, debug)
    in <- int64(x)
    in <- int64(y)
    return state(<-out)
}

type beamReadings struct {
    x, y, width, height int
    m                   map[geom.Pt2]state
}

func mapBeamReadings(prog []int64, x, y, width, height int) beamReadings {
    d := drone{prog}
    m := beamReadings{x, y, width, height, make(map[geom.Pt2]state)}
    for j := x; j < x+width; j++ {
        for i := y; i < y+height; i++ {
            m.m[geom.Pt2{j, i}] = d.test(j, i, false)
        }
    }
    return m
}

func countBeamReadings(m beamReadings) int {
    affected := 0
    for _, v := range m.m {
        if v == pulled {
            affected++
        }
    }
    return affected
}
For part 2, I initially started to take the same approach, first validating that the beam has the same shape as it appeared to in the first 50x50 spots (that is, one solid beam emanating from the apparent starting point). But mapping a larger search space seemed pretty slow, and instead of just hoping it had that shape, I decided to disassemble the drone Intcode program directly and see how it was determining points.
Intcode reverse engineering
First I wrote a disassembly procedure and small tool that wraps it, producing a readable raw program. I also logged machine instructions as they were being executed, producing a program execution dump.
By looking for repeated program counters and then repeated sequences of instructions in the dump and matching them up to the raw instructions in the disassembled program, I was able to separate the program into three sections: (1) the main program, (2) a data section, and (3) some procedures. Comparing the procedures, I was able to find a calling convention for the machine:
From address i0, before calling a function whose instructions start at memory address i1 and which should return to address i2:
- Push i2 onto the stack (i.e. write it to rel(0))
- Push the arguments arg0..argN onto the stack (i.e. write them to rel(1)..rel(N+1))
- Jump to i1

The called function will:

- Increment rel by #args+#locals = M
- Do work, accessing the args at rel(-M+1)..rel(-#locals-1) and the locals at rel(-#locals)..rel(-1)
- Store the return value at rel(1)
- Decrement rel by M
- Jump to rel(0) = i2
Execution will then resume at address i2, and the return value from the function will be at rel(1).
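Using the mnemonics from the disassembly above, a call under this convention might look like the following sketch; the addresses i0, i1, i2, the argument value, and the comment syntax are mine, for illustration only:

```text
[i0]    add imm(i2) imm(0) rel(0)    ; push the return address i2
[i0+4]  add imm(7) imm(0) rel(1)     ; push a single argument, 7
[i0+8]  jmpif imm(1) imm(i1)         ; unconditionally jump to the function at i1
[i1]    adjrel imm(M)                ; callee: bump rel by M = #args + #locals
...                                  ; work, with args/locals at negative rel offsets
        adjrel imm(-M)               ; restore rel before returning
        jmpif imm(1) rel(0)          ; jump back to i2; the result is at rel(1)
```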
Using this and walking through the raw assembly and the execution dump, I produced an annotated program to figure out what was going on.
Some interesting things in the program:
- The first procedure is a higher-level apply function with four parameters that specify the address of another procedure to invoke and the three arguments to forward to it. This is used e.g. here, where it computes abs(arg3) in a convoluted way: let A be apply and B be abs; this computes A(A, A, B, x) = A(A, B, x) = A(B, x) = B(x)
- An absolute value procedure.
- An assert-nonnegative procedure that prints 0 and halts if the argument is less than zero.
- A crazy procedure that returns arg1*arg0*arg2, which is used to compute 149x^2 and 127y^2, as well as to return 0 if 149x^2 - 127y^2 < 14xy and 1 otherwise, which then becomes 1 and 0.
In the end, the value of an (x,y) input pair is given by determining whether it satisfies the equation 149x^2 - 127y^2 < 14xy. I stuck this in desmos.com/calculator to find a candidate range. Turns out the beam really is what it looked like in that initial 50x50 space:
So I implemented the equation in Go to be able to run it faster and then iterated over the candidate space:
func fastTest(x, y int) bool {
    return 14*x*y > ints.Abs(149*x*x-127*y*y)
}

func testRange(x, y, width, height int) bool {
    return (fastTest(x, y) &&
        fastTest(x+width-1, y) &&
        fastTest(x, y+height-1) &&
        fastTest(x+width-1, y+height-1))
}
This works :) Full code is here and the annotated Intcode assembly is here.
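With the closed-form test in hand, one way to iterate over the candidate space is to walk down the beam's left edge and check whether the square fits at each row. The findSquare scan below is my own illustration, not necessarily how the linked code searches; it assumes the beam's edges only move right as y grows:

```go
package main

import "fmt"

func abs(x int) int {
	if x < 0 {
		return -x
	}
	return x
}

// fastTest is the closed-form beam test derived from the disassembly
func fastTest(x, y int) bool {
	return 14*x*y > abs(149*x*x-127*y*y)
}

// testRange checks that all four corners of the box lie inside the beam
func testRange(x, y, width, height int) bool {
	return fastTest(x, y) &&
		fastTest(x+width-1, y) &&
		fastTest(x, y+height-1) &&
		fastTest(x+width-1, y+height-1)
}

// findSquare walks the beam's left edge downward, row by row, and returns
// the top-left corner of the first width x height square that fits
func findSquare(width, height int) (int, int) {
	x := 1
	for y := height; ; y++ {
		// slide right to the beam's left edge on this row
		for !fastTest(x, y) {
			x++
		}
		// does the square with bottom-left corner here fit entirely?
		if testRange(x, y-height+1, width, height) {
			return x, y - height + 1
		}
	}
}

func main() {
	// a small 10x10 square for a quick demonstration (the puzzle asks for 100x100)
	x, y := findSquare(10, 10)
	fmt.Println(testRange(x, y, 10, 10)) // prints true
}
```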
Day 20
For Day 18 I wrote about a billion breadth-first searches, and I have become exceedingly efficient at it. This was actually a pretty simple BFS, even with the part 2 "depth" twist, which, despite being labeled "recursive", didn't involve any recursion for me.
Most of the hard work is in parsing the maze, finding the portals, and storing adjacent tiles for each portal. I'll skip all of that (it's on GitHub) and go to the path finding.
To begin with, we need some structs for the maze and for depthaware locations. For the maze, I keep the raw grid data, rectangles specifying the outer and inner donut maze boundaries, and the location of each portal and a list of its adjacent tiles.
type maze struct {
    g     grid
    outer geom.Rect
    inner geom.Rect
    adjs  map[label][]geom.Pt2
    lbls  map[geom.Pt2]label
}

type point struct {
    p     geom.Pt2
    depth int
}
For a given tile, the adjacent tiles are the Manhattan-adjacent passage tiles and the other side of an adjacent portal, keeping in mind that outer portal tiles at the top level aren't passable. When returning the list of neighbors we make sure to update the depth appropriately for the portal neighbors.
func (m maze) adjacent(from point) []point {
    if m.g.g[from.p] == wall {
        return nil
    }
    var nbrs []point
    for _, nbr := range from.p.ManhattanNeighbors() {
        c := m.g.g[nbr]

        // don't go through walls
        if c == wall {
            continue
        }

        // go into passages
        if c == passage {
            nbrs = append(nbrs, point{nbr, from.depth})
            continue
        }

        // go through portals
        lbl, ok := m.lbls[nbr]
        if !ok {
            continue
        }

        // inner portal increases depth, outer decreases
        depth := from.depth
        if m.outer.Contains(nbr) {
            depth++
        } else {
            // but don't go out at top level
            if from.depth == 0 {
                continue
            }
            depth--
        }

        // go through portals and update depth
        for _, adj := range m.adjs[lbl] {
            if from.p != adj {
                nbrs = append(nbrs, point{adj, depth})
            }
        }
    }
    return nbrs
}
Now we do a standard breadth-first search from "AA" to "ZZ". The only difference between part 1 and part 2 is the node-equality function used to determine when we've reached our destination: for part 1 we ignore the depth.
1type bfsNode struct {
2 p point
3 d int
4}
5
6func eq(p1, p2 point) bool {
7 return p1.p == p2.p
8}
9
10func depthEq(p1, p2 point) bool {
11 return p1.p == p2.p && p1.depth == p2.depth
12}
13
14func shortestPath(
15 m maze,
16 from, to label,
17 eq func(p1, p2 point) bool,
18) int {
19 src := point{m.adjs[from][0], 0}
20 dst := point{m.adjs[to][0], 0}
21 q := []bfsNode{{src, 0}}
22 v := make(map[point]bool)
23 var first bfsNode
24 for len(q) > 0 {
25 first, q = q[0], q[1:]
26 if eq(first.p, dst) {
27 return first.d
28 }
29 v[first.p] = true
30 nbrs := m.adjacent(first.p)
31 for _, nbr := range nbrs {
32 if v[nbr] {
33 continue
34 }
35 q = append(q, bfsNode{nbr, first.d + 1})
36 }
37 }
38 log.Fatalf("path not found: %s -> %s", from, to)
39 return -1
40}
To kick it off we just do:
1fmt.Println(shortestPath(m, lbl("AA"), lbl("ZZ"), eq)) // part 1
2fmt.Println(shortestPath(m, lbl("AA"), lbl("ZZ"), depthEq)) // part 2
That's it! Full code is here.
Day 21
I thought this one was pretty easy. The code to run the Intcode machine is uninteresting, so I'll skip it. It's on GitHub anyway.
The interesting part is the two Springscript programs for parts 1 and 2. For part 1, we always jump when there's a hole in front of us:
NOT A J
That's not always fast enough, though, so we need to look ahead. Looking four places ahead isn't good, though, because we can't make a decision based on that alone: we don't know if there's a safe spot to move to (either by walking or jumping) afterwards. So we wait an extra step and decide to jump if there's a hole at C and a spot to land on at D:
// snip
NOT C T
AND D T
OR T J
WALK
I didn't think much harder than that about it, and it worked for my input. For part 2, we jump if there's a hole at A or B or C:
NOT C T
NOT B J
OR T J
NOT A T
OR T J
But we also only want to jump if there's a space to land on at D and a safe spot to move from there, i.e. at E by stepping or H by jumping again:
// snip
OR E T
OR H T
AND D T
AND T J
RUN
That works! Scripts, input and output files, and the Go code to run it are here.
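Translated out of Springscript, the part 2 decision is just a boolean formula, which makes it easy to sanity-check. Here's a small Go sketch (the function name and the example cases are mine, not part of the original solution):

```go
package main

import "fmt"

// shouldJump mirrors the part 2 Springscript: jump if any of the next
// three tiles (a, b, c) is a hole, the landing spot d is ground, and
// there's somewhere safe to go afterwards: e (step) or h (second jump).
func shouldJump(a, b, c, d, e, h bool) bool {
	return (!a || !b || !c) && d && (e || h)
}

func main() {
	// hole at C, ground at D and E: jump
	fmt.Println(shouldJump(true, true, false, true, true, false))
	// hole at C, but D is also a hole: don't jump into it
	fmt.Println(shouldJump(true, true, false, false, true, true))
}
```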
Day 22
I still haven't gotten part 2 for this one. I think I'm on the right track, but I'm not there yet. Here's what I have so far.
I started by framing the problem as trying to come up with an equation for the value at each index of the deck after each transformation. After a while it became clear that it was actually easier to figure out which index a specific value is moved to, and this was enough to solve part 1. The input was then just represented as a sequence of transformations in modular arithmetic that could be applied to any number:
redeal(x, n) = -x - 1 (mod n)
cut(x, with, n) = x - with (mod n)
deal(x, with, n) = x * with (mod n)
This means each operation can be represented by two values, scale and shift, so that f(x, n) = scale * x + shift (mod n).
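To make the (scale, shift) representation concrete, here's a minimal sketch on a hypothetical 10-card deck (plain ints rather than math/big, which is fine at this size):

```go
package main

import "fmt"

// apply computes f(x) = scale*x + shift (mod n), the new index of the
// card currently at index x; the extra +n keeps the result non-negative.
func apply(scale, shift, x, n int) int {
	return ((scale*x+shift)%n + n) % n
}

func main() {
	const n = 10
	fmt.Println(apply(-1, -1, 0, n)) // deal into new stack: index 0 -> 9
	fmt.Println(apply(1, -3, 0, n))  // cut 3: index 0 -> 7
	fmt.Println(apply(3, 0, 4, n))   // deal with increment 3: index 4 -> 2
}
```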
For part 2, I started by figuring out how to invert this sequence, which wasn't actually hard conceptually, since
scale * x + shift = y (mod n)
scale * x = y - shift (mod n)
x = scale^-1 * (y - shift) (mod n)
means that, for any operation f represented by scale and shift, f^-1(x, n) = scale^-1 * (x - shift) (mod n). The only difficulty here is finding scale^-1. I found some algorithms online for finding modular inverses, then discovered that the Go standard library includes an arbitrary-precision number library, math/big, which has a built-in ModInverse function! This made it easier to implement inversion.
This framing also makes it possible to reduce the entire input sequence to a single operation, since we can combine two operations:
let f(x, n) = scale1 * x + shift1 (mod n)
let g(x, n) = scale2 * x + shift2 (mod n)
then (g.f)(x, n) = scale2 * (scale1 * x + shift1) + shift2 (mod n)
                 = (scale2*scale1) * x + (scale2*shift1 + shift2) (mod n)
Which is also representable independently of x using just two parameters, scale and shift.
Okay, so this made it possible in testing to compute the initial value from the answer to part 1 and apply/invert the input transformation in a single step. Trying to use this for part 2, though, where the transformation must be applied 101741582076661 times, is still way too slow.
I suspect the clever solution has something to do with prime numbers and properties of modular arithmetic with a prime modulus, but I've not got it yet.
The full code so far is on GitHub. I'm tired of looking at it and won't walk through it, but I will come back and update here if I figure out part 2 (or end up looking for a hint on Reddit).
Edit: Okay, I finally came back after finishing Day 25 part 1 to finish up part 2, since the second star on Day 25 is just a completion trophy. It was much less challenging this time.
My major problem was figuring out how to precompute the net transformation to be applied after N iterations, where N is enormous, without going through it step-by-step. I had something like the following:
M^1 [1, 0] = [scale, shift]
M^2 [1, 0] = M*[...] = [scale^2, scale*shift + shift]
M^3 [1, 0] = M*[...] = [scale^3, scale^2*shift + scale*shift + shift]
For the scale parameter this was easy, since after N transformations, it's just scale^N. A closed formula for shift seemed more difficult, since it didn't seem easy to precompute this sum. But it turns out that typing "sum of powers mod prime" produced a few Stack Exchange links that mentioned Geometric Sums, which I'd totally forgotten about, but which is exactly that shift polynomial above! I tried to implement this and was still having trouble, and an hour of debugging later I realized that the DivMod method I was using doesn't do the division I wanted for computing the closed geometric series formula (1-r^n)/(1-r)! I actually needed to use ModInverse on the denominator and multiply it by the numerator. I simply didn't read the documentation for DivMod :(
This fix made it work! Finally.
Day 23
This one was fun. Well, part 1 was fun, and then I spent several hours trying to figure out why it seemed like my network was stalling.
I think I overused channels and goroutines on this one, which too frequently spun doing nothing and consumed a lot of CPU cycles that should have been used for the actually-computing Intcode machines. On top of that, I didn't figure out a good way to use channels to coordinate the idleness tracking, and ended up using a shared map wrapped in a mutex. All of this meant that I think there was way too much lock contention, goroutines consuming cycles needlessly, and general coordination overhead for what could have been a bunch of for-loops.
As it is, my solution only seems to reach a solution with a specific combination of GOMAXPROCS, idleness threshold value, and idleness check delay. Not great! But I did eventually get the program to terminate with the right answer.
Again, not feeling up to documenting this one, because I spent way too long with it and am getting a bit burnt out. Frankly, at this point, I really would rewrite it to use a bunch of for-loops and shared state.
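That rewrite might look something like the following sketch, with the Intcode machines stubbed out by a toy forwarding rule (everything here is hypothetical; it's just meant to show the single-threaded round-robin shape):

```go
package main

import "fmt"

// packet is a message sent between machines on the network.
type packet struct{ x, y int }

// runNetwork drains a toy network of n machines round-robin in a single
// goroutine; each "machine" is stubbed to forward its packet to the next
// address with y decremented, dropping it when y reaches zero. It returns
// the total number of packets delivered before the network goes idle.
func runNetwork(n, y int) int {
	queues := make([][]packet, n)
	queues[0] = []packet{{0, y}}
	delivered := 0
	for {
		idle := true
		for addr := 0; addr < n; addr++ {
			if len(queues[addr]) == 0 {
				continue
			}
			idle = false
			p := queues[addr][0]
			queues[addr] = queues[addr][1:]
			delivered++
			if p.y == 0 {
				continue // drop instead of forwarding
			}
			next := (addr + 1) % n
			queues[next] = append(queues[next], packet{p.x, p.y - 1})
		}
		if idle {
			break // every queue is empty: the network has stalled
		}
	}
	return delivered
}

func main() {
	fmt.Println(runNetwork(3, 10))
}
```

The idleness check falls out for free: when a full pass over the machines moves no packets, the network is idle, with no mutexes or threshold tuning needed.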
Code is here.
Day 24
Nice Conway's Game of Life-style problem today. For part 1 I decided to use bit vectors to represent the board, since (a) the biodiversity ranking would then just be the integer value of the bit vector when stored row-wise starting at the lowermost bit, and (b) keeping track of every layout ever seen in order to find the first repeat would be much faster (comparisons are a single integer machine instruction) and the hashmap of previous layouts much smaller.
So I wrote a small custom bitset implementation:
1type bitset int64
2
3func (b bitset) get(i int) bool {
4 return (b & (1 << i)) > 0
5}
6
7func (b *bitset) set(i int, value bool) {
8 if value {
9 (*b) |= 1 << i
10 } else {
11 (*b) &= ^(1 << i)
12 }
13}
This backs a layout struct that tracks the current infestation state:
1type layout struct {
2 bits bitset
3 width int
4 height int
5}
6
7func (l *layout) alive(row, col int) bool {
8 return l.bits.get(row*l.width + col)
9}
Getting the neighboring bug counts is pretty straightforward, just checking each direction (if not at an edge):
1func (l *layout) adjacent(row, col int) int {
2 adj := 0
3 if row > 0 && l.alive(row-1, col) {
4 adj++
5 }
6 if row < l.height-1 && l.alive(row+1, col) {
7 adj++
8 }
9 if col > 0 && l.alive(row, col-1) {
10 adj++
11 }
12 if col < l.width-1 && l.alive(row, col+1) {
13 adj++
14 }
15 return adj
16}
Then updating the state from one minute to the next is just applying the given rules:
1func (l *layout) next() {
2 next := l.bits
3 for row := 0; row < l.height; row++ {
4 for col := 0; col < l.width; col++ {
5 adj := l.adjacent(row, col)
6 n := row*l.width + col
7 if l.bits.get(n) && adj != 1 {
8 next.set(n, false)
9 } else if !l.bits.get(n) && (adj == 1 || adj == 2) {
10 next.set(n, true)
11 }
12 }
13 }
14 l.bits = next
15}
To find a repeat we just call next() repeatedly and store the intermediate states in a map:
1func findRepeat(l layout) bitset {
2 m := map[bitset]bool{l.bits: true}
3 for {
4 l.next()
5 if _, ok := m[l.bits]; ok {
6 return l.bits
7 }
8 m[l.bits] = true
9 }
10}
Going to part 2 I abandoned this, since the board needs to "grow" in two directions, up and down. I decided on a representation similar to that from day 20, where each point in space is paired with a depth:
1type tile struct {
2 p geom.Pt2
3 depth int
4}
Then we keep these tiles in a map[tile]bool indicating whether the given tile has a bug. This makes it easy to count the number of active bugs at any given time.
1type grid struct {
2 width, height int
3 g map[tile]bool
4}
5
6func (g grid) countBugs() int {
7 bugs := 0
8 for _, alive := range g.g {
9 if alive {
10 bugs++
11 }
12 }
13 return bugs
14}
Iterating the current state is similar to the bit vector approach above, but here we do two iterations through the current state, so that while considering currently-live bugs we can expand the set of tiles in the "universe" to include the neighboring "dead" tiles that may come to life in this epoch:
1func (g grid) adjacentBugs(t tile) int {
2 bugs := 0
3 for _, nbr := range g.adjacent(t) {
4 alive, ok := g.g[nbr]
5 if !ok {
6 g.g[nbr] = false
7 }
8 if alive {
9 bugs++
10 }
11 }
12 return bugs
13}
14func (g grid) next() {
15 diff := make(map[tile]bool)
16
17 // iterate twice so that we can extend the grid to include
18 // consideration of neighboring empty tiles
19 for tile, alive := range g.g {
20 if alive && g.adjacentBugs(tile) != 1 {
21 diff[tile] = false
22 }
23 }
24 for tile, alive := range g.g {
25 adj := g.adjacentBugs(tile)
26 if !alive && (adj == 1 || adj == 2) {
27 diff[tile] = true
28 }
29 }
30
31 // apply diffs
32 for tile, alive := range diff {
33 g.g[tile] = alive
34 }
35}
Most of the complexity here is in the adjacentBugs method, which calls the adjacent method to find the adjacent tile set. I'll omit most of adjacent since I didn't take the time to simplify the code and there's a lot of repetition. The basic idea is that, on the edges of the grid, the neighbors across the edge are those of the relevant grid above/below the current one. This is easy by just incrementing/decrementing the depth field of tile.
1func (g grid) adjacentBugs(t tile) int {
2 bugs := 0
3 for _, nbr := range g.adjacent(t) {
4 alive, ok := g.g[nbr]
5 if !ok {
6 g.g[nbr] = false
7 }
8 if alive {
9 bugs++
10 }
11 }
12 return bugs
13}
14
15func (g grid) adjacent(t tile) []tile {
16 var adj []tile
17
18 // left
19 if t.p.X > 0 && (t.p.X != 3 || t.p.Y != 2) {
20 q := t.p.Go(geom.Left)
21 adj = append(adj, tile{p: q, depth: t.depth})
22 } else if t.p.X == 0 {
23 q := geom.Pt2{1, 2}
24 adj = append(adj, tile{p: q, depth: t.depth - 1})
25 } else if t.p.X == 3 && t.p.Y == 2 {
26 for y := 0; y < g.height; y++ {
27 q := geom.Pt2{g.width - 1, y}
28 adj = append(adj, tile{p: q, depth: t.depth + 1})
29 }
30 }
31
32 // snip
33
34 return adj
35}
So now we just call next 200 times and call countBugs. Nice and fun for Christmas Eve :)
Code is here.
Day 25
Today's problem was just part 1; part 2 was "did you finish all the other days?" I'd not finished Day 22 part 2 yet, so after rescuing Santa I had to go back and do that. Didn't take me long this time, after a few days' rest; I edited the entry above.
For the text adventure it's not really worth including the code here, since it's a pretty trivial wrapper around the Intcode machine. It can be found on GitHub and it's pretty fun to play!
I beat this manually, like I suspect most people did, since it was (a) fun and (b) would have been non-trivial to parse compared with the seemingly low payoff. It took maybe an hour on-and-off to explore the entire ship and get through the security door. The door itself was of course the most challenging bit, and I just did it essentially brute-force with case elimination on paper and repeated trials.
That's the end of this blog post. It's huge at this point, and I considered several times splitting it into smaller posts or per-day entries, but I think I prefer it like this. I'll be writing a retrospective in the coming week and will link to it here when it's done. For now I'll just say that this was, by a huge margin, the most fun I've ever had programming.
All the code for all the days is on my GitHub.
Merry Christmas and Happy New Year!
Edit: Retrospective is here