I agree with your sentiment here. imo they chose the wrong words to describe what they meant, e.g. ~0.5-1 GB memory usage is more like a hobby setup (assuming you run it on the same hardware as the services you monitor).
However, after scrolling through the GitHub page, I feel like this is not a service aimed at people (I might be completely mistaken) who either have only a small set of services to monitor (and/or understand the logs, and/or have the interest to do it themselves), or who keep their homelab at a relatively low financial priority (1x 8GB DDR4-2400 CL17 stick is €15 here).
8GB for a single service in a (home) environment is, imo, still a lot, but I think it is a sort-of reasonable figure for what it does and needs to do to make that happen.
We have had folks in our Discord successfully run highlight on a Raspberry Pi with 4GB RAM, so our recommendation is definitely on the safe side. We're running multiple infra services in the docker stack (postgres, opensearch, clickhouse, influxdb, kafka, redis) that we would look to consolidate in the future to help with running on leaner instances.
Unlikely to happen if there are still Java services running in that stack. For instance, Elasticsearch/OpenSearch is good at what it does (which is full-text search), but pumping massive amounts of logs into it is never going to be a light solution. Solutions such as Loki, which just index labels and dump the raw content into an object storage bucket, are the cheapest, and I guess ClickHouse ends up somewhere between those two depending on how well you configure it to fit the data, or, vice versa, how well you shape the data to fit the technical solution.
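To illustrate the Loki approach: a minimal, hypothetical storage section of a Loki 2.x config might look roughly like this (bucket name, region and date are made up), with only the small label index kept locally and all raw chunks shipped to an S3-compatible bucket:

```yaml
# Sketch only, not a tested config.
schema_config:
  configs:
    - from: 2023-01-01
      store: boltdb-shipper   # small local index over labels only
      object_store: s3        # raw, compressed log chunks go to the bucket
      schema: v12
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    shared_store: s3
  aws:
    s3: s3://ACCESS_KEY:SECRET@eu-west-1/my-loki-bucket
```

The point being: the index stays tiny because it only covers labels, and object storage is cheap; full-text queries then scan chunks at read time instead of maintaining a big inverted index the way Elasticsearch does.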
> Just about anything with only one instance is essentially hobby.
Not high availability? Sure.
However, I've seen software out there that ran as a monolith with a single deployment unit, yet facilitated lots of business processes and kept entire teams of people busy with continued development. Not everything necessarily needs high uptime, either. Some software only serves particular time zones and has ample windows for scheduled maintenance, upgrades and so on.
There's probably at least a few classifications between hobby projects on one end and HA distributed systems on the other.
>> Can't say I would call these specs “hobby” at all
With this, however, I'm inclined to agree. In my eyes, "hobby" would imply something more along the lines of: "Just give this half a CPU core and about 512 MB of RAM, maybe up to a GB of storage depending on what you'll use it for, it'll probably work well enough for a few users."
Some software that mostly fits that definition, in my experience: Nextcloud, Apache2/Nginx/Caddy, Grav, Mattermost, Gitea, Heimdall, YOURLS, PrivateBin, phpBB, Uptime Kuma, Zabbix, PostgreSQL, MySQL/MariaDB, Redis, RabbitMQ, Docker Swarm and plenty others.
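For software in that category, the "hobby" budget from above can literally be enforced with container resource limits. A hedged sketch, taking Uptime Kuma as just one of the listed examples (the image name and port are its usual defaults, but double-check them for your version):

```shell
# Cap the container at roughly the "hobby" budget described above:
# half a CPU core, 512 MB of RAM, no extra swap.
docker run -d \
  --name uptime-kuma \
  --cpus="0.5" \
  --memory="512m" \
  --memory-swap="512m" \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  louislam/uptime-kuma:1
```

If the service stays healthy under those limits for a handful of users, it fits the definition.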
Some software that needs more resources: SonarQube, PeerTube (for encoding), OpenProject (Ruby app), Sonatype Nexus (bloated Java app, but lots of functionality), Matomo (issues with displaying historic data with low resources), BackupPC (compression of backups), K3s and other Kubernetes cluster distros, and plenty of others, too.
Not to say that it somehow makes the software worse, just that people have different expectations. Perhaps a more realistic expectation on my part for hobby software would be: "You should be able to launch it with whatever spare resources your laptop has."
Can't say I would call these specs “hobby” at all