I agree this can be a bit of a pain if you're used to that. There are ways to partially reduce it:
1. use the timeline panel to see all the historical changes to an individual file; you can highlight multiple entries to see cumulative changes, and you can filter to only git commits or only local changes, etc.
2. use the commit history panel in the source control area to do the same across commits, but it doesn't allow you to highlight across commits for cumulative changes
It does require a bit of a paradigm shift sometimes to not rely as much on seeing all cumulative changes for the ticket highlighted as you code, and instead compartmentalize your immediate view to the current commit's task, but often the above 2 alternatives help suffice. Of course, you did mention that you'll commit stuff you're not likely to touch again, which helps a lot too
Sounds like someone who genuinely wants to make an impact, donated his time to do so, and wrote some stuff to potentially help while giving us a glimpse into the daily life of a high-profile group. I really appreciate this.
I think the intent, as stated, appears to match what you're saying. It's hard to ignore that there doesn't appear to be any display of critical thinking involved, though.
He wants recognition for quickly building simple tools (e.g. visual org chart) without the responsibility of what the tool was used for: to fire half a million people. Where are the efficiency gains in this? It's very telling that the interview that he got fired for included his praise that the government was actually more efficient than he expected.
Given all that, I can't take the writing as being all that sincere.
FYI this is Sahil Lavingia, who has a long history in the tech world and HN. He's definitely not a villain and has done some good things in the tech world over the years.
However, he's also very good at PR and spinning things in his favor. He has a long history of going full hustle-culture on trends as they come along, including everything from NFTs to LLMs and now DOGE. He's very good at wiggling out of difficult situations and rewriting history about himself, and this article is a good example of that.
This isn't ancient history, all of this /just/ happened. We were all there. How is it that so many people took one look at DOGE (_before Trump took office even_) and saw it for what it is, but this person with a "long history in the tech world" couldn't see it? I don't care how many "good things" they've done in tech, they were a willful and active participant in all of this.
> He's very good at wiggling out of difficult situations and rewriting history about himself, and this article is a good example of that.
Is it? It makes them look like an idiot: an idiot for not seeing what so many people saw before, or an idiot for thinking we would buy this load of BS. In fact, I'm not even sure what this is supposed to accomplish; they do not come off looking smart or as having any positive trait I can think of.
> he materially contributed to seriously harming people.
There are 2 points:
1. Umm, he wasn't an exec who had any power to decide whether to lay somebody off or not.
He built some prototypes and launched one or two improved UIs. Was improving a website harming people?
2. Laying off an unnecessary workforce isn't a net harm to society.
Are you saying we should never, ever fire or stop hiring anybody?
Have you ever stopped hiring someone in your life? A house cleaner? A babysitter? Stopped going to a restaurant? Did you harm them when you stopped paying for their services?
Yeah, I think you're right to doubt whether this resonates on HN. You're posing it to an audience with very little GED-level representation; HN more often has people who did well in school and are much better positioned for higher-salary jobs.
I'm not part of the target population, but my guess is that a large factor is people's tendency to go down the path of life most similar to the path they've already trodden. If you grew up in a 'cultural center', it's less of a paradigm shift to take the crappy job around the corner than to move somewhere slightly more remote to start a new career, even if in the long run it could actually lead to a more decent life.
This seems like a really political article with not really any tech-related content...
Also note the article starts with the qualifier "in the first few hours", meaning it's not like they're sitting there all day every day with no lights/wifi. This seems like an exaggerated, politically motivated piece that doesn't belong on HN.
When they were first proposed back in 2008 they made a big splash, but afaik they never got past the prototype phase. There was even a push to add them to the SVG spec.
Were they maybe just wanna-be volunteers that were bringing an extra engine down to help and didn't know the formal process to volunteer for something like that? Sounds like the news has no evidence of actual malevolence, so is it fair to give them the benefit of the doubt here?
> Sounds like the news has no evidence of actual malevolence, so is it fair to give them the benefit of the doubt here?
If they were wanna-be volunteers, why would they lie about where they're from?
> “The occupants claimed to be from the ‘Roaring River Fire Department’ in Oregon,” the release stated. “Upon further investigation, the deputies learned that the department name was not a legitimate agency, and the truck was purchased at an auction.”
On that note, the article states that it donates more to higher-risk projects, with risk measured by OpenSSF score. One question I had about the article: does that mean projects with more security vulns get a higher donation? If so, that might become a perverse incentive to leave security gaps in your code.
So after downloading from the official downloads page and stripping away all the mjs files and "bundler-friendly" files, a minimal sqlite wasm dependency will be about 1.3MB.
For an in-browser app, that seems a bit much but of course wasm runs in other places these days where it might make more sense.
It's pretty compressible at least, sqlite3.js+wasm are 1.3MB raw but minifying the JS and then compressing both files with Brotli gets them down to 410KB.
A lot of HTML payloads nowadays are 100-300 KB. That's only the HTML (!!).
Adding 400 for such a high quality piece of DB actually borders reasonability.
And makes me think: what the hell are frontend devs thinking!? Multiple MB's in JS for a news website. Hundreds of KB's for HTML. It's totally unreasonable.
> what the hell are frontend devs thinking!? Multiple MB's in JS for a news website. Hundreds of KB's for HTML. It's totally unreasonable
They're thinking, "adding [some fraction of existing total payload] for such a high quality [feature] actually borders reasonability". Wash. Rinse. Repeat.
> They're thinking, "adding [some fraction of existing total payload] for such a high quality [feature] actually borders reasonability". Wash. Rinse. Repeat.
Context makes all the difference here. If you're considering a big chunk of size for a relational database engine, you need to ask: are you making a complex application, or a normal web page? If it's the latter, then it's not reasonable at all.
And anything that makes the HTML itself that big is almost certainly bloat, not "high quality", and shouldn't be used in any context.
1.3MB seems perfectly reasonable in a modern web app, especially since it will be cached after the first visit to the site.
If you’re just storing user preferences, obviously don’t download SQLite for your web app just to do that… but if you’re doing something that benefits from a full database, don’t fret so much about 1MB that you go try to reinvent the wheel for no reason.
If the other comment is correct, then it won’t even be 1.3MB on the network anyways.
Given how hefty images are, a full database doesn't seem too bad for the purpose of an "app" that would benefit from it, especially when compression can bring the size down even lower.
We are past the stage where every piece of JS has to be loaded upfront and delay the first meaningful paint. Modern JS frameworks and modules are chunked and can be eagerly/lazily loaded. Unless you make the sqlite DB an integral part of your first meaningful page load, preloading those 1.3MB in the background/upon user request is easy.
It's a good consideration, together with the fact that browsers already have IndexedDB embedded. Still, one use case is in-browser apps like Figma / Photoshop-likes / ML apps, where the application code and data are very big anyway; there, 1.3MB may not add that much.
Also worth considering: parsing wasm is significantly faster than parsing JS (unfortunately I couldn't find the source for this claim; there is at least one great article on the topic).
When we built our frontend sync system we tried a few different options. We had a fairly simple case of just trying to store entities so we could pull incremental updates since you were last online. The one we ran in production for a while was IndexedDB but found the overhead wasn’t worth it.
I played around with wasm sqlite too. That was really nice, but I decided against it due to the fact that it was totally unsupported.
The thing to keep in mind is that the WebAssembly sandbox model means that, in theory, the program (SQLite in this case) can run wherever it makes sense to run it. That might mean running it locally, or on a central server, or nearby on the "edge".
Is there a way to statically compile an application with SQLite so that the resulting WASM is smaller? For example, I have an app that uses only a specific subset of SQLite. Could SQLite's WASM be built with this in mind, cutting down on code that is not used? Or is there a way to prune it given the used API surface?
In a regular compiler/linker scenario it would just be static compilation. Here we have a JS app and a WASM library.
> Could the SQLite's WASM be built with this in mind cutting down on code that is not used?
The pending 3.47 release has some build-side tweaks which enable a user to strip it down to "just the basics," but we've not yet been able to get it more than about 25-30% smaller than it otherwise is:
cd ext/wasm
make barebones=1 ; # requires GNU Make and the Emscripten SDK
Doing that requires building it yourself - there are no plans to publish deliverables built that way.
The build process also supports including one's own C code, which could hypothetically be used to embed an application and the wasm part of the library (as distinct from the JS part) into a single wasm file. Its primary intended usage is to add SQLite extensions which are not part of the standard amalgamation build.
> Or is there a way to prune it having the used API surface?
Not with the provided JS pieces. Those have to expose essentially the whole C library, so they will not be pruned from the wasm file.
However, you could provide your own JS bindings which only use a small subset of the API, and Emscripten is supposedly pretty good about stripping out C-side code which is neither explicitly exported nor referenced anywhere. You'd be on your own - that's not something we'll integrate into the canonical build process - but we could provide high-level support, via the project's forum, for folks taking that route.
Since SQL takes arbitrary strings as input, this would require explicit compiler flags to disable the knobs you don't want. Can't rely on excluding unused symbols really.
That's correct; people in this thread are comparing a single compressed dependency of sqlite+wasm at 400KB to the total size of web pages, which runs into MBs. I did some actual tests while trying to use sqlite, and it does add a noticeable delay on first page load on mobile, due to the big size + decompression + the additional scaffolding of wasm.
Pages that run into MBs are made of small files that are downloaded concurrently, so the delay is not noticeable. I wrote about this and my other experiments with in-browser DBs in my last article, but it did not get any traction here.
Thanks! That reminds me I did bring up the thing about long lists and he said that's even more reason to use checkboxes since the multi-select has such a tiny scroll window it makes it hard to scroll through to find what you need when there are a bunch of options.