Thanks! The hardest part was redesigning the data model. GoatCounter does complex aggregations at query time, but D1's 10ms CPU limit means you have to pre-aggregate everything (hourly rollups, per-dimension stats tables). Session dedup was tricky too: Workers are stateless, so I ended up using KV with an IP+UA hash and an 8hr TTL. The other annoyance is that Pages doesn't support cron, so daily cleanup needs a separate Worker. Still some way to go on Zero Trust auth etc.
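If it helps anyone trying something similar, the dedup is roughly this shape (a minimal sketch with made-up binding and function names, not the actual code): hash IP + User-Agent, count a visit as a new session only if the key isn't already in KV, and let the 8hr TTL expire keys on their own.

```ts
// Sketch only: SESSIONS is an assumed KV binding name, isNewSession an illustrative helper.
export interface Env {
  SESSIONS: KVNamespace;
}

async function isNewSession(request: Request, env: Env): Promise<boolean> {
  const ip = request.headers.get("CF-Connecting-IP") ?? "";
  const ua = request.headers.get("User-Agent") ?? "";

  // SHA-256 of IP + UA so no raw identifiers end up in KV.
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(ip + ua)
  );
  const key = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  // If the key exists, this visitor was already counted within the last 8 hours.
  const seen = await env.SESSIONS.get(key);
  if (seen !== null) return false;

  // 8-hour TTL: KV expires the key by itself, so session keys need no cleanup job.
  await env.SESSIONS.put(key, "1", { expirationTtl: 8 * 60 * 60 });
  return true;
}
```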
Thanks for sharing, doener, and thanks to everyone on HN for participating. We put a lot of work into this over the past week. Here are some of the key findings:
- 69% now use Claude Code as their primary AI coding tool
- 90% report productivity gains from AI assistance
- 55% spend more than 75% of their coding time with AI tools
- 86% say their usage has increased over the past 6 months
- Adoption is uniform across experience levels — veterans with 20+ years embrace AI at the same rate as newcomers
We plan to run this on a regular basis and track how things evolve over time. Obviously all data is self-reported and to be taken with a grain of salt, but if you want to participate in a future survey, feel free to leave your contact on the website.
That's just the first thing that occurred to me to test it. I think what most people are hyped about is giving it access to your reminders, notes, Notion, Obsidian and then treating it like an assistant that proactively helps you by running scheduled tasks that are useful to you. That's why some are recommending running the thing on a Mac Mini if you are in the Apple ecosystem, so it can create reminders etc.
I’ll keep playing with it on a VM and see where this goes.
I feel like HN is quite divided about that, actually. A couple of days ago I started a survey which I plan to run monthly to see how the community feels about "LLM productivity etc." I have ~250 answers so far and need some more to make it significant, but as of now it looks like >90% report productivity gains from AI tools. Happy if you participate, it only takes a minute: https://agentic-coding-survey.pages.dev/
Note that self-reported productivity gains are a completely unreliable and unscientific metric. One study[1], small in scope but a noteworthy data point, found that over the course of the study LLMs reduced productivity by ~20%, yet even after the fact the participants felt that on average their productivity had increased by ~20%. This study is surely not the end-all be-all, and you could find ways to criticise it, say it doesn't apply, argue they were doing it wrong, or offer whatever other reason you think the developers should have seen increased productivity, but the point is that people cannot accurately judge their own productivity by vibes alone.
If you look at the survey, it's not only about productivity; it's also about usage, model choice, etc. I agree with you that self-reported productivity gains are to be taken with a grain of salt, but then what else would you propose? The goal is not to rely only on benchmarks for model performance but to develop some kind of TIOBE Index for LLMs.
The ever-present rebuttal to all LLM failure anecdotes: you're using the wrong model, you're prompting it wrong, etc. All failures are always the user's fault. It couldn't possibly be that the tool is bad.
If it generated something that saved you weeks, I think it's almost certainly because it was used for something you have absolutely zero domain understanding of and would have had to study from scratch. And I, at least, repeatedly do note that LLMs lower the barrier to entry for making proof-of-concepts. But the problem is that (1) people treat that instant gratification as a form of productivity that can replace software engineers. At most, it can make something extremely rough that suits one individual's very specific use case, where you mostly work around the plentiful bugs by knowing where the landmines are and not doing the thing that trips them; and (2) people spam these low-effort proof-of-concepts, which have no value to other people because they are rough and can't be extended beyond one person's use case, and this drowns out the content people actually put effort into.
LLMs, when used like this, do not increase productivity on making software worth sharing with other people. While they can knock out the proof-of-concept, they cannot build it into something valuable to anyone but the prompter, and by short-circuiting the learning process, you do not learn the skills necessary to build on the domain yourself, meaning you still have to spend weeks learning those skills if you actually want to build something meaningful. At least this is true for everything I have observed out of the vibe-coding bubble thus far, and for my own extensive experience trying to discover the 10x boost I am told exists. I am open to being shown something genuinely great that an LLM generated in an evening if you wish to share evidence to the contrary.
There is also the question of the provenance of the code, of course. Could you have saved those weeks by simply using a library? Is the LLM saving you weeks by writing the library ""from scratch"", in actuality regurgitating code from an existing library one prompt at a time? If the LLM's productivity gain is that it normalized copying and pasting open-source code wholesale while calling it your own, I don't think that's the great advancement for humanity it is portrayed as.
I find your persistent, willful bullheadedness on this topic to be exhausting. I'd say delusional, but I don't know you and you're anonymous so I'm probably arguing with an LLM in someone's sick social experiment.
A few weeks ago I brought up a new IPS display panel that I've had custom made for my next product. It's a variant of the ST7789. I gave Opus 4.5 the registers and it produced wrapper functions that I could pass to LVGL in a few minutes, requiring three prompts.
This is just one of countless examples where I've basically stopped using libraries for anything that isn't LVGL, TinyUSB, compression or cryptography. The purpose-built wrappers Opus can make are much smaller, often a bit faster, and perhaps most significantly not encumbered with another developer's mental model of how people should use their library. Instead of a kitchen-sink API, I/we/it created concise functions that map 1:1 to what I need them to do.
I happen to believe that you're foolish for endlessly repeating the same blather about "vibe coding" instead of celebrating how amazing the thing you yourself described actually is: lowering the barrier to entry into domains outside someone's immediate skillset, and the incredible impact that has on project trajectory, motivation and skill-stacking for future projects.
Your [projected] assumption that everyone using these tools learns nothing from seeing how problems can be solved is painfully narrow-minded, especially given that anyone with a shred of intellectual curiosity quickly finds that they can get up to speed on topics that previously seemed daunting to impossible. Yes, I really do believe that you have to expend effort to not experience this.
During the last few weeks I've built a series of increasingly sophisticated multi-stage audio amplifier circuits after literal decades of being quietly intimidated by audio circuits, all because I have the ability to endlessly pepper ChatGPT with questions. I've gone from not understanding at all to fully grasping the purpose and function of every node to a degree that I could probably start to make my own hybrids. I don't know if you do electronics, but the disposition of most audio electronics types does not lend itself to hours of questions about op-amps.
Where do we agree? I strongly agree that people are wasting our time when they post low-effort slop. I think that easy access to LLMs shines a mirror on the awkward lack of creativity and good, original ideas in too many people. And my own hot take is that I think Claude Code is unserious. I don't think it's responsible or even particularly compelling to make never looking at the code a goal.
I've used Cursor to build a 550k+ LoC FreeRTOS embedded app over the past six months. It spans 45 distinct components which communicate via a custom message bus and event queue, juggling streams from USB, UART, half a dozen sensors, and a high-speed SPI display. It is well-tested, fully specified and the product of about 700 distinct feature implementation plan -> chat -> debug loops. It is downright obnoxious reading the stuff you declare when you're clearly either doing it wrong or, well, a confirmation of the dead internet theory.
Yes exactly, it's a standalone Cloudflare Page with some custom HTML/CSS that writes to D1 (Cloudflare's SQL DB) for results and rate limits, that's it. I looked at so many survey tools but none offered what I was looking for (simple single-page form, no email, no signup, no tracking), so I built this (with Claude). Thanks for the feedback!
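For the curious, the write path is roughly this shape (a sketch with an assumed D1 binding, table and column names, not the real schema): a Pages Function takes the form POST, does a per-IP rate-limit check against D1, then inserts the submission.

```ts
// Sketch only: DB binding, "submissions" table and its columns are invented for illustration.
interface Env {
  DB: D1Database;
}

export const onRequestPost: PagesFunction<Env> = async ({ request, env }) => {
  const form = await request.formData();
  // Collect all form fields into one JSON blob for this sketch.
  const answers = JSON.stringify(Object.fromEntries(form.entries()));

  // Per-IP rate limit: hash the IP rather than storing it raw, one submission per day.
  const ip = request.headers.get("CF-Connecting-IP") ?? "unknown";
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(ip));
  const ipHash = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");

  const recent = await env.DB
    .prepare("SELECT 1 FROM submissions WHERE ip_hash = ?1 AND created_at > datetime('now', '-1 day')")
    .bind(ipHash)
    .first();
  if (recent) return new Response("Already submitted", { status: 429 });

  await env.DB
    .prepare("INSERT INTO submissions (ip_hash, answers, created_at) VALUES (?1, ?2, datetime('now'))")
    .bind(ipHash, answers)
    .run();

  return new Response("Thanks!", { status: 200 });
};
```

Swap in whatever schema the form actually uses; the point is that the whole backend is basically one function and one table.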
This week I started to create a "TIOBE Index" for AI Agents. I'd really like to run a regular poll on HN that keeps track of which AI coding agents are actually being used by this community, rather than just which ones are winning benchmarks. I hope the results will provide a high-level view of what people are using in production and how that shifts month-over-month.
In the last 12 hours we've had 225 submissions (margin of error ±6.2%) and need ~160 more to reach statistical significance (n=385, MOE ±5%). No tracking, and email is optional (only for results). I will post the results + the "January Index" here once we hit the threshold.
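For anyone wondering where n=385 comes from, it looks like the standard sample-size formula for a proportion at 95% confidence with worst-case p = 0.5 and no finite-population correction, e.g.:

```ts
// Required sample size for a target margin of error
// (95% confidence -> z = 1.96, worst-case p = 0.5, large population assumed).
const z = 1.96;
const p = 0.5;

const requiredN = (moe: number) => Math.ceil((z * z * p * (1 - p)) / (moe * moe));

console.log(requiredN(0.05)); // 385 -> matches the ±5% target above
```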
It just takes one minute of your time. Thank you for participating!!
imo it isn’t any tool, it’s institutions:
shared rules like property, contracts, and science that let billions of strangers coordinate, because without them none of the other mentioned inventions would scale