The global leaderboard is so fast that AI assistance would literally slow those competitors down. Here's one of the guys who tends to score highly solving today's puzzle: https://youtu.be/ym1ae-vBy6g. On the more complicated days that's even more pronounced, because anyone who is even somewhat decent doesn't need to ask ChatGPT how to write Dijkstra.
Obviously if you're doing it recreationally you can cheat with AI, but then again that's no different from copying a solution from Reddit, and you're only fooling yourself. I don't see it having an impact.
The thing is that an AI can read a puzzle faster than a human can. If someone put any effort into an AI-based setup, it would easily beat human competitors (well, up until the point the puzzles got too difficult for it to solve).
I've always done AoC "properly", but this year I've decided to actually use it as a learning experience for working with LLMs (and since I don't get up early, I'll never sully the leaderboard), trying some experiments along the way.
I think the strategy for the harder puzzles is to still "do" them yourself (i.e. read the challenge and understand it), but write the solution as English pseudocode and then have an LLM take it from there. Doing this has yielded perfect results (though less-than-perfect implementations) in several languages for me so far, and I've learnt a few interesting things about how they perform and the "tells" that an LLM was involved.
Python looks excruciatingly slow to me. If you want fast, I believe you need to think and write in vector languages like kdb+/q. I am not a kdb+ expert by any means and my code could probably use more q primitives, but here is my solution, written in ~2 minutes:
i1:("I I";" ")0: `:1.txt;
sum {abs last deltas x} each flip asc each i1 / answer 1
sum {x * sum x = i1[1]}each i1[0] / answer 2
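For readers who don't read q, here's a rough stdlib-Python rendering of what those two lines do. The toy lists below are a stand-in for the two columns of `1.txt`, and the q-to-Python mapping in the comments is my interpretation of the code above, not the author's:

```python
# Rough Python equivalent of the q solution above.
# Toy data stands in for the two columns parsed out of 1.txt.
left = [3, 4, 2, 1, 3, 3]
right = [4, 3, 5, 3, 9, 3]

# q: asc each i1          -> sort each column
# q: flip                 -> pair the sorted columns up
# q: abs last deltas x    -> absolute difference within each pair
answer1 = sum(abs(a - b) for a, b in zip(sorted(left), sorted(right)))

# q: {x * sum x = i1[1]} each i1[0]
#   -> each left value times its count in the right column
answer2 = sum(x * right.count(x) for x in left)

print(answer1, answer2)  # 11 31 for this toy data
```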
from collections import Counter

# read whitespace-separated ints from stdin; columns alternate
xys = list(map(int, open(0).read().split()))
xs = sorted(xys[::2])
ys = sorted(xys[1::2])
print(sum(abs(x - y) for x, y in zip(xs, ys)))  # part 1
yc = Counter(ys)
print(sum(yc[x] * x for x in xs))  # part 2
nums = list(map(int, open('input').read().split()))
data = {i + 1: sorted(nums[i::2]) for i in range(2)}
total_distance = sum(abs(a - b) for a, b in zip(data[1], data[2]))
print("part 1:", total_distance)
similarity_score = sum(x * data[1].count(x) * data[2].count(x)
                       for x in set(data[1]) & set(data[2]))
print("part 2:", similarity_score)
I'm actually pleasantly surprised by the results. I like to think that despite problem 1 being easily solvable by LLMs, just about everyone (sans qianxyz) read the FAQ and decided to forego a leaderboard spot for the sake of this coding tradition.
Either that, or hundreds of people tried and none could get it working despite the problem being basic. I like to imagine most people read the rules and were good sports.
It also doesn't make any sense for most people to compete with the geniuses on the public leaderboard. It's like signing up for the Olympics as an amateur athlete.
Are we using enshittification for everything we don't like these days? We invented calculators, those really enshittified manual arithmetic puzzles.
Private boards for this stuff make sense anyway; it's the Internet, after all.
Enshittification isn't just "things become worse" - it's the specific three-stage process by which services worsen:
> Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die. I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two-sided market", where a platform sits between buyers and sellers, hold each hostage to the other, raking off an ever-larger share of the value that passes between them.
I think the intent is important. Using LLMs to do well on the public leaderboard is like using cheats/aim assist. But learning how to use LLMs to solve complex puzzles independent of any sense of "competition" is more like when people train neural networks to drive a car in GTA or something - it's not hurting anyone and it can be a real learning experience that leads to other interesting byproducts.
But, yeah, don't use LLMs to try to get 9-second solve times on the public leaderboard; it's not in the spirit of the thing, and is more like taking a dictionary to a spelling bee.
No, we do not. Calculators are a whole different issue from LLMs, which plagiarize and spoonfeed whole paragraphs of thought.
Enshittification occurs when previously good or excellent things are replaced by mediocre things that are good enough for those susceptible to advertising and groupthink.
Examples are McDonalds vs. real restaurants, Disney theme parks vs. Paris, the interiors of modern cars, search engine decline, software bloat, etc.
> those susceptible to advertising and groupthink.
That's everyone, including you, no matter how edgelordy you post about 'normies' and how you are above that. See how quickly your brain hands you "McDonalds" and "Disney" when you need an example.
Yes, you just used the first one that came to mind, the one that everyone would recognise. That's because billions of dollars keep McDonalds first in mind and universally recognised. And even if you make "I wouldn't eat at McDonalds" your personality, that money is getting you to propagate the name on HN, just to remind people it exists and keep people talking about it.