Where I have used SQLite most successfully is really two use cases. First, I use it for data processing. Say I need to retrieve lots of data and transform it into a different structure. I could do that in something like Python, but SQL is simply more expressive for that kind of work. I can create a new database, populate it with the data I already have, fetch new data, combine it all together, and export the result to a permanent data store (usually Postgres).
Second, when I need a local save file. Sometimes small local apps are better served by a save file, and the save file might as well have an extensible format that I can update as I go. This is rarer but still useful.
The first use case is very powerful. A temporary SQL database that can be blown away without a trace is great, and the ability to run complex queries against it really helps.
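A minimal sketch of that scratch-database workflow, assuming a hypothetical `orders.csv` and psycopg 3 for the Postgres export:

```python
# Load raw data into a throwaway SQLite file, reshape it with SQL,
# then push the result to Postgres. Schema and names are hypothetical.
import csv
import sqlite3

import psycopg  # assumes psycopg 3 is installed

con = sqlite3.connect("scratch.db")  # delete the file afterwards: zero trace
con.execute("CREATE TABLE raw (user_id TEXT, amount REAL, day TEXT)")

with open("orders.csv", newline="") as f:  # hypothetical input file
    con.executemany("INSERT INTO raw VALUES (?, ?, ?)",
                    (tuple(row) for row in csv.reader(f)))

# The transformation itself is just SQL, which is where it beats a Python loop.
rows = con.execute("""
    SELECT user_id, day, SUM(amount) AS total
    FROM raw
    GROUP BY user_id, day
""").fetchall()

# Export the reshaped rows to the permanent Postgres store.
with psycopg.connect("dbname=warehouse") as pg, pg.cursor() as cur:
    cur.executemany(
        "INSERT INTO daily_totals (user_id, day, total) VALUES (%s, %s, %s)",
        rows,
    )
```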
But 99% of the time I just use Postgres. It works, it has sane defaults, it is crazy extensible, and it has never failed to meet my needs, which I can't say for Oracle or MySQL.
So I have gotten pretty good at managing context such that my $20 Claude subscription rarely runs out of its quota, though I still hit it sometimes. I use Sonnet 99% of the time. Mostly this comes down to giving it specific tasks and using /clear frequently. I also ask it to update its own notes often so it doesn't have to explore the whole codebase as much.
But I was really disappointed when I tried to use subagents. In theory I really liked the idea: have Haiku wrangle small specific tasks that are tedious but routine, and have Sonnet orchestrate everything. In practice the subagents took so many steps and wrote so much documentation that it wasn't worth it. Running 2-3 agents blew through the 5-hour quota in 20 minutes of work, versus normal work where I might run out of quota 30-45 minutes before it resets. Even after tuning the subagent files to stop them from writing tests I never asked for and documentation I didn't need, they still produced way too much content and repeatedly blew the context window of the main agent. If it were a local model I wouldn't mind experimenting with it more.
I did something similar last summer. My Craftsman LT1400 uses the standard 500cc Briggs motor and that motor has some tragic design flaws that make it grenade itself roughly once a season. I went through a couple of these motors rebuilding them (correctly) until I gave up.
I ripped the tractor down to the frame and removed most parts. I got $40 Ryobi walk-behind mower motors (42V, which is really 36V), some scooter controllers, and pulleys. I used two scooter Li-ion batteries, but I should have just gotten three large 12V lead-acid batteries for more capacity. Still, I can mow for an hour or so per charge and get almost an acre done, including some hills. It took about eight days total to build and about $800.
The way I set it up, one motor drives the wheels and two more motors on the deck directly drive the blades. The belt system the ICE version had was insanely inefficient; this system has maybe 20% of the power but mows better and is way more reliable. For $150 I could add a solar array and charge controller to charge the batteries and never pay for anything but belt and blade replacements for life.
The hardest part of the build was lining up the mounting of the drive motor and wiring up all the safety systems (brake sensor, seat sensor, etc.). The kicker is that this is a better product than anything I can buy commercially without getting into $5k+ territory, and it is completely user-serviceable. No part here costs more than $100, and they are all readily available. The tractor has enough torque to push my huge picnic table around while I am riding it. I might try plowing snow with it next winter.
I was just happy I could start mowing again. But since it is a Craftsman, there are a ton of accessories available on the used market for very cheap, so I might pick up a plow for like $100 to see how it does next winter.
Very cool, I love electric conversions. I will confess, though, that removing the belt drive makes me nervous - belts are often important for protecting either the machine or people when the blade meets an obstacle.
The Ryobi walk-behind mowers are direct drive, and everything is underneath the deck, so there isn't much that can go wrong that you wouldn't also contend with standing with your toes next to it, except that you are actually riding it. The wheels still have a short belt and a tensioner, but not the really long belt with the clutch pulley. The only safety feature I could have added is an electronic brake on the deck motors, but for my use I am not too concerned about letting things spin down unassisted. As soon as I leave the seat everything shuts off, and the light blades don't carry much momentum without power.
They compared it to industrial accidents. I don't think a software company would try to shift liability by comparing itself to factory explosions and chemical spills.
I find it important to include system information in here as well, so that an invocation copy-pasted from system A to system B does not run.
For example, our database restore script has a parameter `--yes-delete-all-data-in` and it needs to be parametrized with the PostgreSQL cluster name. So a command with `--yes-delete-all-data-in=pg-accounting` works on exactly one system and not on other systems.
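A hedged sketch of that pattern in Python; the hostname-to-cluster lookup and the surrounding script are illustrative stand-ins, not our actual restore tool:

```python
# The destructive flag must name the exact cluster this host serves,
# so a command copied from another system refuses to run.
import argparse
import socket
import sys

def local_cluster_name() -> str:
    # Stand-in lookup: in practice this might read /etc/cluster_name or similar.
    return {"acct-db-1": "pg-accounting", "crm-db-1": "pg-crm"}.get(
        socket.gethostname(), "unknown")

parser = argparse.ArgumentParser()
parser.add_argument("--yes-delete-all-data-in", required=True,
                    help="must match this host's PostgreSQL cluster name")
args = parser.parse_args()

if args.yes_delete_all_data_in != local_cluster_name():
    sys.exit(f"refusing: this host serves {local_cluster_name()!r}, "
             f"not {args.yes_delete_all_data_in!r}")
print("proceeding with restore...")
```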
Totally agree it shouldn't be required for basic tools. But if I'm ever developing a script that performs any kind of logic before reaching out to a DB or vendor API and modifying 100k user records, a flag that just verifies the sanity of the logic is a necessity.
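Something like this minimal dry-run pattern, where `fetch_records` and `apply_changes` are hypothetical stand-ins for the real DB/vendor-API calls:

```python
# Compute the full change set first, show it, and only write when
# --execute is passed. All names here are illustrative.
import argparse

def fetch_records():
    # Stand-in for the real DB/vendor-API fetch.
    return [{"id": 1, "last_login": None}, {"id": 2, "last_login": "2024-05-01"}]

def plan_changes(records):
    # The logic worth sanity-checking: decide what would change, without writing.
    return [(r["id"], "deactivate") for r in records if r["last_login"] is None]

def apply_changes(changes):
    print(f"applying {len(changes)} changes...")  # stand-in for the real writes

parser = argparse.ArgumentParser()
parser.add_argument("--execute", action="store_true",
                    help="actually apply changes; default is a dry run")
args = parser.parse_args()

changes = plan_changes(fetch_records())
print(f"{len(changes)} records would be modified: {changes}")
if args.execute:
    apply_changes(changes)
else:
    print("dry run: no changes applied (pass --execute to apply)")
```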
For most of these local data-manipulation commands, I'd rather just have them behave dangerously and rely on filesystem snapshots to roll back when needed. With modern filesystems like zfs or btrfs, you can take a full snapshot every minute and keep it for a while to negate the damage done by almost all of these scripts. They double as a backup solution too.
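For example, a minimal zfs rotation script you might run from cron every minute (the dataset name and 24-hour retention window are assumptions):

```python
# Take a timestamped snapshot of the dataset, then prune expired ones.
import subprocess
import time

DATASET = "tank/home"        # assumed dataset name
KEEP_SECONDS = 24 * 3600     # assumed retention window

now = int(time.time())
subprocess.run(["zfs", "snapshot", f"{DATASET}@auto-{now}"], check=True)

# List existing auto- snapshots and destroy the ones past retention.
out = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
    check=True, capture_output=True, text=True).stdout
for name in out.splitlines():
    if "@auto-" in name:
        stamp = int(name.rsplit("@auto-", 1)[1])
        if now - stamp > KEEP_SECONDS:
            subprocess.run(["zfs", "destroy", name], check=True)
```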
Yeah, but that's because it's implemented poorly. It literally asks you to confirm deletion of each file individually, even for thousands of files.
What it should do is generate a user-friendly overview of what's to be deleted, grouping files by some criterion, e.g. by directory, so you'd only need to confirm a few times regardless of how many files you're deleting.
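A rough sketch of that grouping idea, using the parent directory as the criterion:

```python
# Summarize deletions per directory and ask once per group
# instead of once per file.
from collections import defaultdict
from pathlib import Path

def confirm_and_delete(paths):
    groups = defaultdict(list)
    for p in map(Path, paths):
        groups[p.parent].append(p)

    for directory, files in sorted(groups.items()):
        answer = input(f"delete {len(files)} files in {directory}/ ? [y/N] ")
        if answer.lower() == "y":
            for f in files:
                f.unlink(missing_ok=True)  # skip files already gone
```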
Not necessarily. On a design that requires being new to look good, all weathering will be perceived as rot, never as patina.
The point is that some approaches to architectural beauty make it more or less impossible that any amount of weathering could ever be perceived as patina, while others look good both new and old.
My first and last Android phone was the Motorola Atrix, which at the time was supposed to be quite good. One of its benefits was the idea that you could pop it into a laptop type dock and have it act as a terminal of sorts.
You can also slap a keyboard onto an existing phone. I have tried vibe coding via ssh from my iPhone and honestly it’s not terrible at all. Instead of doom scrolling I can build things.
Lenovo T and X series are excellent and cheap as dirt used. There is also System76. Or you could get a MacBook and boot Linux on that. Some older ones work well, I hear.
> Or you could get a MacBook and boot Linux on that. Some older ones work well, I hear.
Is Linux support on the M1/M2 models as good as Linux support on x86 laptops? My understanding was that there's still a fair bit of hardware that isn't fully supported, like external displays and Bluetooth.
I use an old Lenovo AIO PC to dual boot Linux Mint and Windows 10. It works well from a hardware and firmware perspective, but I've deliberately avoided Windows 11 as it is crapware.
I have done triple booting of macOS, Linux, and Windows on an old Mac Mini, and it was a nightmare to get them working, but it worked well once set up.
I think well known brands and models of PCs are better for such alternative setups, rather than obscure PCs.