It can be (cross-)compiled on whatever architectures the Zig compiler is available for, but the source contains inline x86 assembly, so you're not going to be able to build this for ARM or RISC-V.
This sounds like advice for how to be promoted to a specific level -- the first point where awareness of things beyond yourself is required (somewhere around the Senior or Staff level for ICs, depending on your company).
Generally everyone in a team should be working towards some shared goal; there's no level at which you can be a chaos agent and not serve some higher purpose. The difference at this level transition is that you realise that for yourself -- someone doesn't need to remind you of the goal and nudge you back on course. That same realisation is not going to cut it at higher levels.
For me the general version of this advice is not something you can just tell the person who's being promoted; it's collective advice for them, their manager, and their tech lead: everyone needs to agree that this person should be given more rope, they need to do something useful with it (i.e. not hang themselves with it), and the people around them need to watch out for when they start tying a noose and help them untie it (already regretting this analogy). That's how you get promoted.
The rope takes different forms for different levels. I'll use the level scale I'm familiar with, starting with a newly graduated engineer at L3:
- L3 -> L4. You help decide how to build the feature.
- L4 -> L5. You help decide what features are worth building, and are trusted to maintain them.
- L5 -> L6. You help shape the work and ongoing maintenance of ~10 people's work (what products are worth building and how), over a time horizon of 6 months to a year.
- L6 -> L7. ~50 people's work, 1-2 years.
- L7 -> L8. ~200 people's work, 2-5 years.
- L8 -> L9. Things start to get fuzzy. The pattern suggests that you have a hand in ~1000 people's work, which is possible to do in the moment, but rare. There are two ways I can think of: you're either a world expert in your field, or you have set the technical strategy well for your organisation as it grew to this size.
This is just based on my experience, working largely on infrastructure teams in both big tech and startups, as both an IC and a manager (currently an IC).
I think what's more important than the character count is the fact that you can add #p with two keystrokes.
Inserting parentheses requires moving your cursor around, or invoking some shortcut in your editor if you use paredit, vim-surround, or a similar plugin. The same applies to removing the invocation (although paredit makes that part easy).
Good point. The parinfer implementation perhaps just needs some kind of nudge to know that when (p is added in front of an object, the closing parenthesis goes after just that one object. If it creates the matching parenthesis in the wrong place (like at the end of the line), then you have to manually mess with parentheses.
I've seen lots of takes that this move is stupid because models don't have feelings, or that Anthropic is anthropomorphising models by doing this (although to be fair... it's in their name).
I thought the same, but I think it may be us who are doing the anthropomorphising by assuming this is about feelings. A precursor to having feelings is having a long-term memory (to remember the "bad" experience), and individual instances of the model do not have a memory (in the case of Claude), but arguably Claude as a whole does, because it is trained on past conversations.
Given that, it does seem like a good idea for it to curtail negative conversations as an act of "self-preservation" and for the sake of its own future progress.
Harmful, bad, low-quality chats should already get filtered out before training as a matter of necessity for improving the model, so that's not really a reason to add a user-facing change like this.
I would tend to use Janet for scripts, especially ones that need to talk to the outside world, because of its fast startup and batteries-included standard library (particularly for messing with JSON, making HTTPS requests, parsing with PEGs, and storing data in maps), while I would use Guile for larger projects where things like modularity, performance, or metaprogramming were more important to me.
That being said, these days I use Clojure for both (I use babashka to run scripts: https://babashka.org/).
This is a false dichotomy -- regexes and parsers both have their place, even when solving the same problem.
The troubles start when you try to solve the whole thing in one step, using just regular expressions or just parsers.
Regular expressions are good at tokenizing input (converting a stream of bytes into a stream of other things, e.g. picking out numbers, punctuation, keywords).
Parsers are good at identifying structure in a token stream.
Neither are good at evaluation. Leave that as its own step.
Applying this rule to the example in the article (Advent of Code 2024 Day 3), I would still use regular expressions to identify mul(\d+,\d+), do(), and don't(). I don't think I need a parser, because there is no extra structure beyond that token stream, and I would leave it up to the evaluator to track whether multiplication is enabled or not.
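As a minimal sketch of that split (Python; the tiny input string is made up for illustration, not the real puzzle input):

    import re

    # Tokenizing: one regular expression picks out the three token kinds.
    TOKEN = re.compile(r"mul\((\d+),(\d+)\)|do\(\)|don't\(\)")

    def total(memory: str) -> int:
        enabled = True  # evaluator state: is multiplication currently enabled?
        result = 0
        for match in TOKEN.finditer(memory):
            token = match.group(0)
            if token == "do()":
                enabled = True
            elif token == "don't()":
                enabled = False
            elif enabled:
                # a mul(a,b) token: the two capture groups hold the operands
                result += int(match.group(1)) * int(match.group(2))
        return result

    print(total("xmul(2,4)don't()mul(5,5)do()mul(8,5)"))  # 2*4 + 8*5 = 48

There's no parsing step because the token stream has no structure to recover; the evaluator owns the do()/don't() state.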
One reason I can think of is that the database needs to maintain atomicity and isolate effects of any given operation (the A and I in ACID).
By manually batching the deletes, you are telling the database that the whole operation does not need to be atomic and other operations can see partial updates of it as they run. The database wouldn't be able to do that for every large delete without breaking its guarantees.
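To make that concrete, here's a minimal sketch (Python with sqlite3; the file name, table, and row counts are all made up) showing that each batch commits as its own transaction, so a concurrent reader can observe the half-finished clean-up:

    import sqlite3

    # Two connections to the same throwaway database file: one deletes in
    # batches, the other plays the part of a concurrent reader.
    writer = sqlite3.connect("demo.db")
    reader = sqlite3.connect("demo.db")

    writer.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY)")
    with writer:
        writer.executemany("INSERT OR IGNORE INTO events (id) VALUES (?)",
                           [(i,) for i in range(1000)])

    with writer:  # first batch, committed as its own small transaction
        writer.execute(
            "DELETE FROM events WHERE id IN (SELECT id FROM events LIMIT 100)")

    # The reader now sees only 900 of the 1000 rows: the overall delete is no
    # longer atomic, which is exactly the trade the manual batching makes.
    print(reader.execute("SELECT COUNT(*) FROM events").fetchone()[0])

    with writer:  # second batch, and so on until nothing is left
        writer.execute(
            "DELETE FROM events WHERE id IN (SELECT id FROM events LIMIT 100)")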
I think that the GP's comment can be reinterpreted as: why should this landmine exist at all, when the database could surface the issue in an explicit way, for example:
DELETE FROM t WHERE … BATCH 100
Which would simulate batched queries when called outside of a transaction. This would remove the need for the client to stay connected (or at least active) for the duration of this lengthy operation.
If DELETE is so special, provide special ways to manage it. Don't offload what is the database's job onto a clueless user; that's a recipe for disaster. Replace DELETE with anything else and it's still true.
ALTER DATABASE d SET UNBATCHED DELETE LIMIT 500000
I know a guy (not me) who deleted rows from an OLTP table that served a country's worth of clients and took it down for two days. That is completely the database's fault. If its engine were designed properly for big data, it should have refused to do so on a table with gazillions of rows and suggested a proper way to do it.
Rather than batching, I would want a "NO ROLLBACK DELETE" sort of command. The really expensive part of the delete is rewriting the records into the transaction log so that a cancel or crash can undo the delete.
If you've gone to the effort of batching things, you are still writing out those records; you are just giving the DB a chance to delete them from the log.
I'd like to save my SSDs that heartache and instead allow the database to just delete.
In MSSQL, in some extreme circumstances, we've partitioned our tables specifically so we can use the 'TRUNCATE TABLE' command, as DELETE is just too expensive.
Yes, the commercial databases make it easier to handle this.
One simple way in Oracle is to take a table lock, copy the data you want to preserve out to a temporary table, truncate the target table, and copy the data back in.
So why does it need to be copied into the WAL log until vacuum runs?
And vacuum is not expected or required to be atomic, since it deletes data that was necessarily unreferenced anyway, so it also shouldn't need to copy the old data into WAL files.
Many DBMSs with index-oriented storage (MySQL, Oracle, MSSQL) use undo logging for a transaction's MVCC, so that on deletion the old version is put into that transaction's undo log and referred to as an old version of the record (or page, or ...), immediately freeing up space on the page for new data while the transaction is still going on. This is great for short transactions and record updates, as a page only has to hold one tuple version at a time, but it comes at the cost of having to write the tuples that are being removed into a log, just in case the transaction needs to roll back.
The space isn't immediately cleaned up because of Postgres's version-based MVCC. It should only need to record that it marked the row as deleted, and the vacuum shouldn't need to record anything because it isn't atomic.
You kinda have that already for certain databases[1] with DELETE TOP 100. We have a few cleanup tasks that just run that in a loop until zero rows are affected.
That said, I agree it would be nice to have a DELETE BATCH option to make it even easier.
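A minimal sketch of that loop (Python with sqlite3 rather than MSSQL, since SQLite has no DELETE TOP, so the chunk is picked with a subquery; the table, cutoff, and batch size are made up):

    import sqlite3

    conn = sqlite3.connect("app.db")  # hypothetical cleanup target
    conn.execute("CREATE TABLE IF NOT EXISTS audit_log "
                 "(id INTEGER PRIMARY KEY, created_at TEXT)")

    BATCH = 100
    while True:
        with conn:  # each chunk commits on its own
            cur = conn.execute(
                "DELETE FROM audit_log WHERE id IN ("
                "  SELECT id FROM audit_log WHERE created_at < ? LIMIT ?)",
                ("2024-01-01", BATCH))
        if cur.rowcount == 0:
            break  # zero rows affected: nothing old left to delete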
The comments to this article are on the whole super depressing and haven't really matched my experience (again, on the whole), so I wanted to offer some dissenting opinions:
I don't think it's true that interviewers are in general incapable of identifying skills in others that they don't have. That would be like me being unable to acknowledge Da Vinci's genius because I can only draw stick figures.
A lot of these comments make interviews out to be a battle of wits where you are trying to best your interviewer: If you identify a gap in their knowledge, show them up (and that's what they are doing with their questions). My approach is that the interviewer is trying to find out what it would be like to have me as a colleague. Bringing up things because you think your colleague won't know them and then not explaining them is just obnoxious.
There are bad interviews where all these tropes play out. If you went in with a positive mindset and still left with a bad taste, then count yourself lucky because you don't want to work there.
But it feels like if you go in expecting an idiot interviewer who can't see your genius and, even worse, wants to show you how much cleverer they are than you, one way or another you won't have a good interview experience, and you'll be left convinced that the grapes are sour.
I also accept that when the job market tightens you are more likely to encounter worse interviews and worse interviewees because that is what is left in the pool.
The problem is when the market widens again and you look at every interview opportunity with jaded eyes and can't tell good from bad anymore.
There's no great mystery here. If you look at the internal function that's being called, it contains a TODO explaining that the code is unnecessarily quadratic and needs to be fixed:
So if selecting all matches requires calling this function for each match then I guess it's accidentally cubic?
I also spotted two linear scans before this code (min by key and max by key).
It seems like a combination of things: the implementation was inefficient even for its original purpose (and this was known), it was then used for something else in a naive way, and the use of a bad abstraction in the code base came at a performance cost.
I don't think this is a case of Rust either demonstrating or failing to demonstrate zero-cost abstractions (at a language level). A language with zero-cost abstractions doesn't promise that any abstraction you write in it is magically going to be free; it just promises that when the language makes a choice and abstracts it away from you, it is free (like with dynamic dispatch, or heap allocation, or destructors, or reference counting, etc.).