
I love the description of the PR. This type of honest statement is the right thing to do - be transparent, be respectful of the time of the reviewer.

> This PR adds support for embedded Ruby (ERB) which is commonly used in Ruby on Rails projects. Note that I used heavy assistance from Claude Code and tried to ensure it didn't generate slop to the best of my abilities. All tests are passing and I also visually verified the end result which looks solid to me.

> Here's a screenshot that was generated by building the Chroma CLI with the ERB lexer and running it against the test data file with chroma --lexer=erb --style=monokai --html lexers/testdata/erb.actual


I tried Superpowers for my current project - migrating my blog from Hugo to Astro (with AstroPaper theme). I wrote the main spec in two ways - 1) my usual method of starting with a small list of what I want in the new blog and working with the agent to expand on it, ask questions and so on (aka Collaborative Spec) and 2) asked Superpowers to write the spec and plan. I did both from the working directory of my blog's repo so that the agent has full access to the code and the content.

My findings:

1. The spec created by Superpowers was very detailed (described the specific fonts, color palette), included the exact content of config files, commit messages etc. But it missed a lot of things like analytics, RSS feed etc.

2. Superpowers wrote the spec and plan as two separate documents which was better than the collaborative method, which put both into one document.

3. Superpowers recommended an in-place migration of the blog whereas the collaborative spec suggested a parallel branch so that Hugo and Astro can co-exist until everything is stable.

A few more differences are written up in [0].

In general, I liked developing the spec through discussion rather than one-shotting it; it let me add things to the spec as I remembered them. It felt like a more iterative discovery process vs. needing to get everything right the first time. That might just be a personal preference though.

At the end of this exercise, I asked Claude to review both specs in detail; it found a few things that both specs missed (SEO, rollback plan, etc.) and made a final spec that consolidates everything.

[0] https://annjose.com/redesign/#two-specs-one-project


I usually ask Gemini to review the spec as well. Sometimes it catches things I missed even after I've gone through it a few times.

I'm a big fan of Research-Plan-Implement, like this peak build-in-public, multi-foundation-model cross-check approach:

https://x.com/i/status/2033368385724014827


Let's look at an example post in HN Companion. This is the post on singularity in the home page right now:

https://app.hncompanion.com/item?id=46962996

This post has 500+ comments with various viewpoints and you see the summary on the right side.

You are right that most of the time threads are organized into local groups. But in the above example, there are many comments that relate to the same topic yet are not under the same parent comment. HN Companion's summary surfaces these under a topic, "Limitations of Current AI Models", which pulls in comments from across the post.

You can click on the author name in that topic in the summary panel, and it will take you directly to the comment. This is what we meant by "continue the conversation there", i.e., you are now in the main HN experience, so you can navigate to child/parent/sibling comments (through the link buttons or keyboard navigation).

We definitely don't want AI to write comments. Happy to elaborate if you need.


Honestly, after checking out the link, seems like something I'll personally never use/want.

I'm okay with crawling through comments and taking in the various viewpoints instead of having an LLM summarize it for me.

It basically kills the entire tone/vibe of the place and makes everything seem robot-written, with no personality. Also, it's kind of weird that you're taking other people's words and then reframing them for them/others.

Also, nowhere does that thread seem to be "overwhelming with information" like you originally claimed. It's basically solving a non-problem.


Fair enough. I completely understand that the experience of hunting for gems in the comments is the core appeal of HN for many, and AI summaries definitely aren't for everyone.

That said, we are seeing a consistent daily user base who do find value in the summarization, so it seems to be solving a pain point for a specific segment of readers, even if not for all.

Apart from the AI features, we actually built HN Companion as a general power-user client. It supports keyboard-first navigation (vim-style J/K bindings for comment navigation), seeing context for parent/child comments without losing your place, and tracking specific authors across a thread.
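For readers curious how vim-style comment navigation like this typically works, here is a minimal sketch in JavaScript. This is a hypothetical illustration, not the actual HN Companion source; the function name `nextIndex` and the event-wiring shown in the comment are my own invention.

```javascript
// Move a comment cursor with vim-style keys:
// 'j' goes to the next comment, 'k' to the previous one,
// clamping at both ends of the list.
function nextIndex(current, total, key) {
  if (key === 'j') return Math.min(current + 1, total - 1);
  if (key === 'k') return Math.max(current - 1, 0);
  return current; // any other key leaves the cursor unchanged
}

// In a browser extension, something like this would drive the
// focused comment (sketch only, assumes a `comments` NodeList):
//
// let cursor = 0;
// document.addEventListener('keydown', (e) => {
//   cursor = nextIndex(cursor, comments.length, e.key);
//   comments[cursor].scrollIntoView({ block: 'center' });
// });
```

The clamping (rather than wrapping) keeps the cursor from jumping from the last comment back to the top, which matches how most keyboard-driven readers behave.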

You might find those utility features useful even if you ignore the summary sidebar entirely. In the browser extension, the summary panel is something the user has to activate - it doesn't show up by default.


Good point. Can you elaborate a little bit more? Do you mean corroboration within the same discussion or across multiple discussions?


Co-author here. Forgot to share the link to the fine-tuned model [0].

[0] https://huggingface.co/georgeck/models


Genuine question to understand - have you tried this approach to build or break any habit for yourself? What were the learnings from it - what worked and what didn't? And how did you tweak the approach for the next habit?


Short answer: yes, I have--walking. I think the main learnings were (a) have faith in absurdly small steps, repeated, and (b) my anxious brain is always looking for the slightest excuse to skip it. No real tweaks, except keep trying to make the step smaller.


I came here to say

1) Amen 2) I wonder if this is isolated to junior devs only? Perhaps it seems that way because junior devs do more AI-assisted coding than seniors?


I agree, though I would prefer to highlight the first half of the first item - transparency. Also, perhaps make Safety an independent principle rather than combining it with Security.

These are a good set of principles that any company (or individual) can follow to guide how they use AI.


I agree - the content you write about LLMs is informative and realistic, not hyped. I get a lot of value from it, especially because you write mostly as a stream of consciousness and explain your approach and/or reasoning. Thank you for doing that.


Congrats — well deserved! I love the game and play it every day; it's actually the first thing I do in the morning. A big fan of Hard mode! My best friend has also started playing it and we share the results with each other.

Just one piece of feedback: on desktop browsers, I can see the list of answered clues below the textbox, but on the phone (Brave or Firefox on Android), I don't see that list. I am not sure if this is a feature or a bug, but it's something I miss when playing on my phone. Seeing those answers gives that little "aha!" moment of satisfaction.

I also made a custom GPT - Bracket GPT [0] that helps in solving the clues when I am stuck. It doesn’t directly give the answers, but offers hints to help nudge you to the solution. It’s a fun companion when you're totally blanking.

[0] https://chatgpt.com/g/g-67e0f124cd408191943faadb3d70c6df-bra...

