+1 for highlighting that PR quality is the bottleneck. Garbage-in/garbage-out is exactly what I ran into and it’s why I’m planning to introduce PR templates so the why/what changed/impact is consistently present. For sparse PR bodies, I also optionally add truncated diff context for the LLM summary so the output isn’t just a long list of raw PR titles.
Also agree there is a split between dev-facing changelogs vs user-facing release comms that need to land where users are. What I built is aimed at the "developer-consumer" audience, people using the library, not contributing to it: it renders into our docs and is meant to be readable as a curated changelog, not a raw list of commits.
I agree that the best-quality notes are the ones hand-written by a thoughtful human. In my case we had about two years of history with no curated notes, and writing those by hand would have meant a significant time investment versus shipping fixes and features. The generator got us coverage fast and organized the notes chronologically and by category. I designed it with exactly your concern in mind: it preserves manual edits as well as omissions, so we can gradually curate the output into something we are proud of.
I agree with the philosophy of curating release notes for the consumer of the release. When I first started looking for a release notes strategy, I was considering towncrier for that exact reason. You are also right that commit messages are not intended for the consumer of the release, but a dialogue between developers.
Your points are well taken, and they are largely why I went PR-based (title/body with optional GitHub metadata) instead of commit-based. A PR title and body tend to be focused on the deliverable, whereas commit messages are narrowly focused on the code change at that moment, with developers as the intended audience.
Re: git-cliff, I honestly hadn’t evaluated this one, but it looks solid for commit-driven changelogs. I like the rationale behind conventional commits being parsable and templates enforcing consistency. What constraints pushed you toward git-cliff vs writing release notes by hand, and do you have a config/template you have found works well for surfacing breaking changes?
Yeah, that matches what I have seen: if the upstream metadata isn’t reliable, automation can amplify the mess.
I tried to avoid relying solely on contributors to label or tag things correctly. The script is tag-driven only for release boundaries (version tags), while categorization is derived from the PR title and body with optional GitHub metadata. The script is idempotent and preserves edits/omissions, so you can correct the few bad ones post-generation.
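To make "idempotent and preserves edits/omissions" concrete, here is a minimal sketch of the merge idea. All names here are hypothetical illustrations, not the script's actual API: entries are keyed by PR number, regeneration only adds entries that are not already present, and a tombstone set keeps deliberately omitted PRs from being resurrected.

```python
# Sketch (hypothetical names): merge freshly generated entries into an
# existing, possibly hand-edited changelog without clobbering curation.
def merge_entries(existing: dict[int, str],
                  generated: dict[int, str],
                  omitted: set[int]) -> dict[int, str]:
    merged = dict(existing)  # keep hand-edited text exactly as-is
    for pr_number, text in generated.items():
        if pr_number in existing or pr_number in omitted:
            continue  # never overwrite an edit or resurrect an omission
        merged[pr_number] = text
    return merged
```

Running this twice over the same inputs is a no-op, which is what makes regeneration safe to repeat.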
If you are curious, I am happy to share my script and would be genuinely interested whether it reduces the manual cleanup for your workflow. Also, if you run it with `--ai --github` and a PR body is sparse, it fetches a truncated PR diff and uses that as extra context for the LLM summary.
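The diff-truncation step is simpler than it sounds; something like this (a sketch with an arbitrary character budget, not the script's real numbers) caps what gets fed to the LLM so a huge PR doesn't blow the context window:

```python
# Sketch: cap the PR diff passed as extra LLM context at a character
# budget (8000 here is an illustrative default, not the script's actual one).
def truncate_diff(diff: str, max_chars: int = 8000) -> str:
    if len(diff) <= max_chars:
        return diff
    return diff[:max_chars] + "\n... [diff truncated]"
```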
+1 on separating "how to upgrade" due to breaking changes from "what’s new". A dedicated BREAKING.md / MIGRATIONS.md is a really good idea.
One thing I am trying to do is make the generator surface breaking/migration items explicitly, but I still think anything that requires human judgment (migration steps, caveats) should be hand-curated in a dedicated document like you suggested.
I hear what you are saying: there is a real risk that auto-generated release notes end up as PR-title soup. I put a lot of effort into the script to mitigate exactly that.
If you are willing and interested enough to take a quick look, here is what my script generated for our 2025 changelog (no hand-curation yet, this is the raw output):
I am curious: does this still seem too noisy in your opinion, or is it getting closer? And what would you want to see for breaking changes/migrations to make it actually useful?
I now have 2024 & 2025 generated; to fully hand-curate two years of history just wasn’t practical, so I’m trying to get the "80% draft" automatically and then curate over time.
That has been my concern as well. The script I wrote tries to bucket entries into categories, including "Backward Incompatible Change", so those are easier to spot. Since it is automated, I am trading some accuracy for time saved; that seemed like the only practical choice given how much history I had to backfill, but the results have been surprisingly decent so far.
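For a rough idea of the bucketing, here is a toy keyword-based sketch; the categories mirror the ones I mentioned, but the keyword lists are purely illustrative, not what the script actually uses:

```python
# Sketch (illustrative keywords only): first matching category wins, with
# breaking changes checked first so they are never misfiled as features.
CATEGORIES = [
    ("Backward Incompatible Change", ("breaking", "remove", "drop support")),
    ("Bug Fixes", ("fix", "bug")),
    ("Features", ("add", "support", "feature")),
]

def categorize(pr_title: str) -> str:
    title = pr_title.lower()
    for category, keywords in CATEGORIES:
        if any(keyword in title for keyword in keywords):
            return category
    return "Other"
```

Ordering matters here: "Drop support for X" contains "support", so the breaking-change bucket has to be checked before the feature bucket.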
I am also planning to add some PR templates so contributors include the context up front, which should make any release note generation more accurate.
Are you using any tooling to help with changelog curation? I know towncrier is all about fragments, so contributors must write a brief summary of their contribution, which would be more in line with your preference.
I am curious what are people using for release notes in their own projects? Towncrier, GitHub Releases, something else?
If anyone tries my script on their repo and runs into issues, I am happy to help troubleshoot. Also, the output is actually plain Markdown (no JSX). The only Docusaurus-specific bit is the YAML frontmatter header. If you are not using Docusaurus, you can just strip that header and rename .mdx to .md.
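For example, something along these lines (a sketch that assumes the frontmatter is a leading `---`-delimited block at the very top of each file) would do the strip-and-rename in one pass:

```shell
# Strip a leading YAML frontmatter block (--- ... ---) and rename .mdx -> .md.
# Assumes the frontmatter, if present, starts on line 1.
for f in *.mdx; do
  awk 'NR==1 && /^---$/ {skip=1; next} skip && /^---$/ {skip=0; next} !skip' \
    "$f" > "${f%.mdx}.md"
done
```

Files without frontmatter pass through unchanged, since `skip` is never set.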
I wonder if there is valuable information that can be learned by studying a company's prompts? There may be reasons why some companies want their prompts kept private.
I realize cache segregation is mainly about security/compliance and tenant isolation, not protecting secret prompts. Still, if someone obtained access to a company’s prompt templates/system prompts, analyzing them could reveal:
- Product logic / decision rules, such as: when to refund, how to triage tickets
- Internal taxonomies, schemas, or tool interfaces
- Safety and policy guardrails (which adversaries could try to route around)
That plate wouldn’t be allowed in Illinois where there is a hard requirement that all digits follow any letters on the plate.¹ The thing that I find mystifying is that they charge more for a vanity plate that’s all letters than one that’s letters and digits.
⸻
1. Although some specialty plates end up having suffixed letters, usually shown on the plate stacked.