If slop doesn't get better, it would mean that at least I get to keep my job. In the areas where the remaining 10% don't matter, maybe I won't. I'm struggling to come up with an example of such software outside of one-off scripts and some home automation though.
The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (and which I will enjoy less, most likely). So, if anything slop gives me hope.
I find working with LLMs much more fun and frictionless compared to the drudgery of boring glue code or tracking down non-generalizable, version-specific workarounds in GitHub issues etc. Coding LLMs let you focus on the domain of your actual problem instead of the low-level stumbling blocks that just create annoyance without real learning.
Just like the pro-AI articles, it reads to me like a sales pitch. And the ending only adds to it: the author invites companies to contract him for training.
I would only be happy if in the end the author turns out to be right.
But as things stand right now, I can see a significant boost to my own productivity, which leads me to believe that fewer people are going to be needed.
When coal-powered engines became more efficient, demand for coal went UP. It went up because vastly more things could now cost-effectively be coal-powered.
I can see a future where software development goes the same way. My wife works in science, and in casually observing her work I see all kinds of things that could be made more efficient with good software support. But not by enough to pay six figures per year for multiple devs to create it. So it doesn't get done, and her work and the work of tens of thousands like her around the world is less efficient as a result.
In a world where development is even half as expensive, many such tasks become approachable. If it becomes a third or quarter as expensive, even more applications are now profitable.
I think far more people will be doing something that creates the outcomes that are today created by SWEs manually coding. I doubt it will be quite as lucrative for the median person doing it, but I think it will still be well above the median wage and there will be a lot of it.
Many HN users may point to Jevons paradox; I would like to point out that it may very well work right up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety, that is, until THAT day when he decides he isn't.
It is certainly possible that AI is the one great disruptor that we can’t adapt to. History over millenia has me taking the other side of that bet, seeing the disruptions and adaptations from factory farming, internal combustion engines, moving assembly lines, electrification, the transistor, ICs, wired then wireless telecommunications, the internet, personal computing, and countless other major disruptions.
1. Fundamentals do change. Yuval Noah Harari made this point in Sapiens: even core beliefs shift over time. In fact, the idea that things do change for the better is relatively new; "the only constant is change" wasn't really true before the 19th century.
What does “the great disrupter we can’t adapt to” mean exactly? If humans annihilate themselves from climate change, the earth will adapt, the solar system will shrug it off and the universe won’t even realize it happened.
But, like, I am 100% sure humans will adapt to the AI revolution. Maybe we let 7 billion people die off, and the top 1% of the survivors enslave the rest of us to be masseuses and prostitutes and live like kings with robot servants, but I'm not super comfortable with that definition of "adaptation".
For most of human history and in most of the world, "the rest of us" didn't live all that well; is that adaptation? I think most people include a healthy, large, and growing middle class in their definition of success metrics.
Isn’t this “healthy, large middle class” a reality that is less than 100 years old in the best of cases? (After a smaller initial emergence perhaps 100 years prior to that.) In 250K years since modern humans emerged, that’s a comparative blink of an eye.
There might be slight local dips along the timeline, but I think most Westerners (and maybe most people, but my lived experience is Western) would not willingly trade places with their same-percentile positioned selves from 100, 200, 500, 1000, 2000, 10K, 50K, or 250K years ago. The fact that few would choose to switch has to be viewed with some positive coefficient in a reasonable success metric.
Yes, my point was: if AI and automation in general are the start of the end of all that (and I do think there are some signs that these technologies could be leading us towards a fundamentally less egalitarian society), then I think many would consider that a devastating impact we did not adapt to, the way we did the Industrial Revolution, which ultimately led towards more middle-class opportunities.
I agree with you on this feeling like a sales pitch, probably because ultimately it is. I've done a software training course led by this guy. It was fine, his style and his lessons are all pretty decent, and I found/find myself agreeing with his "takes". But it's nothing groundbreaking, and he's not really adding anything to the debate that I've not read before.
I don't know how active he is as a developer, I assumed that he was more of a teacher of established practices than being on the cutting edge of development. That's not an insult, but it stands out to me in this article.
Ironically, like an LLM, this article feels more like an amalgamation of plenty of other opinions on the growth of AI in the workplace than any original thought. There's not really anything "new" here, just a load of existing opinions put together.
(I am not suggesting that Jason got an AI to write this article, though that would be funny).
That's true, but it's also less of a problem than it used to be. The steep learning curve is flattened quite a bit by the available "starter pack" configs and the number of fresh articles. So you can get a functional editor and then gradually bend it to your needs.
Also, LLMs turned out to be quite good at generating working elisp and helping out in general.
I have been using emacs for around 7 years, but it never worked for me as my main editor; it just sucked too badly compared to the IDE-like features of other editors and actual IDEs. So I only used it for org-mode, making an attempt to use it for something else every couple of years.
I'm currently in the process of trying this again, and I have to say things feel very different this time. With native tree-sitter and LSP support, the IDE-like features are outsourced to where they should be done. It isn't perfect, but I've had issues of the same degree or worse with other editors. A proprietary IDE would still beat it in stability and features, but the experience is _crazy good_ for free software.
What I like the most is the hacker mentality it encourages. When I see something I don't like, I don't go like "I wish they did it differently", I ask "well how do I change that?".
The only things that feel truly outdated are the single-threaded nature and the UI blocking while a long-running operation (like an update) is happening. And maybe the lack of smooth scrolling (there is a package, but it makes the text jump).
To add onto this, I really don't think emacs has that big of an initial learning curve nowadays.
If you enable cua-mode and get the LSPs working, you get pretty much the same experience as any other big editor like VSCode or Zed, close to out of the box. The arrow keys, mouse, and cut/copy/paste do exactly what you'd expect. There are menus, toolbars, and scrollbars. Don't let the "emacs ricer" screenshots fool you; a lot of people disable those things for aesthetic reasons. Probably the kludgiest thing emacs still has is the default scrolling mode, which scrolls through a page and then bumps the entire page forward by 1, like older editors. You can change this with a few lines in your config.
Alternatively you can get good out of the box experiences with an emacs distribution (like Doom Emacs) or one of the many minimal configs out there (I'm partial to [1])
Lumping this in with something like vim/neovim is a bit silly, because the basic navigation commands and editing experience of emacs are mostly the same as in other editors. Sure, underneath it's all run by an Elisp VM and an event loop which maps keypresses to Elisp commands, but as a user you only need to dive into that when you feel comfortable.
My morale is extremely low. But I have different circumstances: I live under war, with my future life perspectives unknown. Software engineering, apart from being enjoyable, provided the sense of security. I felt that I could at least either relocate to some cheap country and work remotely, or attempt to relocate to an expensive country with good jobs.
With AI, the future seems just so much worse for me. I feel that the productivity boost will not benefit me in any way (apart from some distant trickle-down dream). I expect outsourcing, and remote work in general, to be impacted the most negatively. Maybe there will be some protective measures for domestic specialists, but those wouldn't apply to me anyway unless I relocate (and probably acquire citizenship).
>Is your company hiring more/ have they stopped hiring software engineers
Stopped hiring completely and reduced workforce, but the reasons stated were financial, not AI.
>Is the management team putting more pressure to get more things done
With less workforce there is naturally more work to do, but I can't say there is a change in pressure, and no one forces AI upon you.
Sorry for writing something a bit tangential, I'm mostly replying to the heading not the content.
I keep seeing the same kind of point being made about how much less fun, more depressing, and generally worse a <thing> has gotten these days. The most recent incarnation of that is how programming with AI feels worse than programming on your own.
I don't think the problem is an inability to find a way to derive fun the way you could previously. The problem is deriving fun while still getting paid for it.
To come back to web dev: you probably can make it fun again, given that you were able to have fun with it previously. But it will probably have to be done in your spare time, after work.
Not sure about that, I've had great fun vibe coding like another commenter said, as I can simply write what I want in English and see a result immediately. Of course, I'd never use this for production, but for prototyping, it's nice. This is the opposite of industry, as you state.
I'm not talking about short-term gains like you having fun, but about long-term effects on the industry of programming. Of course, technology always provides some short-term fun, even as it pushes the activity to a more industrialized level in the long run.
At the end of the day, the people who put in the effort get ahead. I don't worry about the short or long term at all, as long as one is competent. If fewer are competent due to vibe coding their entire career, all the better for me as a competent professional, as with lower supply comes higher demand.
There is a parallel to these Celtic imitations that is found primarily in modern Ukraine[1], attributed to the Cherniakhov culture[2].
The theory for them is that once trade with the Roman Empire ceased, the locals needed a bigger supply of coins and started minting their own.
There is a curious thing with this "branch", I'm not sure if it's the same in the Celtic one. The last time I talked to people researching this, I was told that:
a. The findings are mostly unique; it's hard to find two copies of the same coin. Sometimes the obverse of one coin can be found on another, but the reverses don't match.
b. These coins are not cast; they are minted through "hammering", which requires a stamp. However, not a single stamp has been found so far.
A much easier way to make currency out of an existing one would be to just press an existing coin into some clay, make a casting mold, and pour molten metal into it.
This of course is more of a curiosity/rumor level, I don't have any qualifications to back it up.
Maybe it's my learning limitations, but I find it hard to follow explanations like these. I had similar feelings about explanations of encapsulation: they would say I can hide information without going into much detail. Why, from whom? How is it hiding if I can _see it on my screen_?
Similarly here, I can't understand, for example, _who_ is the owner. Is it a stack frame? Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first, so there is no danger in hanging on to it until the callee returns? Is it for optimization, so that we can get rid of the object sooner? Could owner be something else than a stack frame?
Why can a mutable reference be handed out only once? If I'm only using a single thread, one function is guaranteed to finish before the other starts, so what is the harm in handing mutable references to both? Just slap my hands when I'm actually using multiple threads.
Of course, there are reasons for all of these things and they probably are not even that hard to understand. Somehow, every time I want to get into Rust I start chasing these things and give up a bit later.
> Why would a stack frame want to move ownership to its callee
Rust's system of ownership and borrowing effectively lets you hand out "permissions" for data access. The owner gets the maximum permissions, including the ability to hand out references, which grant lesser permissions.
In some cases these permissions are useful for performance, yes. The owner has the permission to eagerly destroy something to instantly free up memory. It also has the permission to "move out" data, which allows you to avoid making unnecessary copies.
But it's useful for other reasons too. For example, threads don't follow a stack discipline; a callee is not guaranteed to terminate before the caller returns, so passing ownership of data sent to another thread is important for correctness.
And naturally, the ability to pass ownership to higher stack frames (from callee to caller) is also necessary for correctness.
In practice, people write functions that need the least permissions necessary. It's overwhelmingly common for callees to take references rather than taking ownership, because what they're doing just doesn't require ownership.
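A minimal sketch of that "least permissions" habit (the function names here are made up for illustration):

fn print_len(s: &String) {     // shared borrow: read-only access is enough
    println!("{}", s.len());
}

fn shout(s: &mut String) {     // exclusive borrow: needs to mutate, not own
    s.push_str("!!!");
}

fn consume(s: String) {        // takes ownership: the caller can't use `s` afterwards
    drop(s);                   // free to destroy it early
}

fn main() {
    let mut s = String::from("hi");
    print_len(&s);
    shout(&mut s);
    consume(s);
    // println!("{}", s);      // would not compile: `s` was moved into `consume`
}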
I think your comment has received excellent replies. However, no one has tackled your actual question so far:
> _who_ is the owner. Is it a stack frame?
I don't think it's helpful to call a stack frame the owner in the sense of the borrow checker. If the owner were the stack frame, then why would moving a value from one variable to another within the same frame change anything? The fact that the following code doesn't compile seems to support that:
fn main() {
    let a: String = "Hello".to_owned();
    let b = a;
    println!("{}", a); // error[E0382]: borrow of moved value: `a`
}
User lucozade’s comment has pointed out that the memory where the object lives is actually the thing that is being owned. So that can’t be the owner either.
So if neither a) the stack frame nor b) the memory where the object lives can be called the owner in the Rust sense, then what is?
Could the owner be the variable to which the owned chunk of memory is bound at a given point in time? In my mental model, yes. That would be consistent with all borrow checker semantics as I have understood them so far.
I believe this answer is correct. Ownership exists at the language level, not the machine level. Thinking of a part of the stack or a piece of memory as owning something isn't correct. A language entity, like a variable, is what owns another object in Rust. When that object goes out of scope, its resources are released, including all the things it owns.
I think it's funny how I had this kind of sort of "clear" understanding of Rust ownership from experience, and asking "why" repeatedly puts a few holes in the illusion of my understanding being clear. It's mostly familiarity of concepts from working with C++ and RAII and solving some ownership issues. It's kind of like when people ask you for the definition of a word, and you know what it means, but you also can't quite explain it.
>Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks.
This clearly means ownership is a concept in the Rust language. Defined by a set of rules checked by the compiler.
Later:
>First, let’s take a look at the ownership rules. Keep these rules in mind as we work through the examples that illustrate them:
>
>*Each value in Rust has an owner*.
>There can only be one owner at a time.
>*When the owner goes out of scope*, the value will be dropped.
So the owner can go out of scope and that leads to the value being dropped. At the same time each value has an owner.
So from this we gather: an owner can go out of scope, so an owner would be something that lives within a scope. A variable declaration, perhaps? Further on in the text this seems to be confirmed: a variable can be an owner.
>Rust takes a different path: the memory is automatically returned once the variable that owns it goes out of scope.
Ok, so variables can own values. And borrowed variables (references) are owned by the variables they borrow from, this much seems clear. We can recurse all the way down. What about up? Who owns the variables? I'm guessing the program or the scope, which in turn is owned by the program.
So I think variables own values directly, references are owned by the variables they borrow from. All variables are owned by the program and live as long as they're in scope (again something that only exists at program level).
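A tiny sketch of that reading, as I understand it (the variable is the owner, and the value is dropped when its owner goes out of scope):

fn main() {
    {
        let s = String::from("owned by `s`"); // `s` is the owner
        println!("{}", s);
    } // `s` goes out of scope here, so the String is dropped and its heap memory freed

    let a = String::from("hello");
    let b = a;          // ownership moves from `a` to `b`; `a` can no longer be used
    println!("{}", b);
} // `b` goes out of scope; the value is dropped exactly once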
> Ownership exists at the language level, not the machine level.
Right. That's the key here. "Move semantics" can let you move something from the stack to the heap, or the heap to the stack, provided that a lot of fussy rules are enforced. It's quite common to do this. You might create a struct on the stack, then push it onto a vector, to be appended at the end. Works fine. The data had to be copied, and the language took care of that. It also took care of preventing you from doing that if the struct isn't safely move copyable.
C++ now has "move semantics", but for legacy reasons, enforcement is not strict enough to prevent moves which should not be allowed.
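A rough sketch of that "create on the stack, then push onto a vector" case (the Point type is made up for illustration):

struct Point { x: f64, y: f64 }

fn main() {
    let p = Point { x: 1.0, y: 2.0 };     // lives in main's stack frame
    let mut points: Vec<Point> = Vec::new();
    points.push(p);                       // ownership moves into the Vec's heap buffer
    // println!("{}", p.x);               // would not compile: `p` was moved
    println!("{} {}", points[0].x, points[0].y); // access it through its new owner instead
}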
> Why can a mutable reference be handed out only once?
Here's a single-threaded program which would exhibit dangling pointers if Rust allowed handing out multiple references (mutable or otherwise) to data that's being mutated:
let mut v = Vec::new();
v.push(42);
// Address of first element: 0x6533c883fb10
println!("{:p}", &v[0]);
// Put something after v on the heap
// so it can't be grown in-place
let v2 = v.clone();
v.push(43);
v.push(44);
v.push(45);
// Exceed capacity and trigger reallocation
v.push(46);
// New address of first element: 0x6533c883fb50
println!("{:p}", &v[0]);
The analogous program in pretty much any modern language under the sun has no problem with this, in spite of multiple references being casually allowed.
To have a safe reference to the cell of a vector, we need a "locative" object for that, which keeps track of v, and the offset 0 into v.
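A rough sketch of that idea in Rust itself, assuming it's acceptable to store the index and re-borrow on each access:

fn main() {
    let mut v = vec![42];
    let idx: usize = 0;          // a "locative" of sorts: just the offset, not a pointer
    for i in 43..=46 {
        v.push(i);               // may reallocate and move the elements
    }
    println!("{}", v[idx]);      // re-borrow through `v` at use time, always valid
}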
> The analogous program in pretty much any modern language under the sun has no problem with this, in spite of multiple references being casually allowed.
And then every time the underlying data moves, the program's runtime either needs to do a dynamic lookup of all pointers to that data and then iterate over all of them to point to the new location, or otherwise you need to introduce yet another layer of indirection (or even worse, you could use linked lists). Many languages exist in domains where they don't mind paying such a runtime cost, but Rust is trying to be as fast as possible while being as memory-safe as possible.
In other words, pick your poison:
1. Allow mutable data, but do not support direct interior references.
2. Allow interior references, but do not allow mutable data.
3. Allow mutable data, but only allow indirect/dynamically adjusted references.
4. Allow both mutable data and direct interior references, force the author to manually enforce memory-safety.
5. Allow both mutable data and direct interior references, use static analysis to ensure safety by only allowing references to be held when mutation cannot invalidate them.
It certainly doesn't guarantee it, this is just what's needed to induce a relocation in this particular instance. But this makes Rust's ownership tracking even more important, because it would be trivial for this to "accidentally work" in something like C++, only for it to explode as soon as any future change either perturbs the heap or pushes enough items to the vec that a relocation is suddenly triggered.
> Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first, so there is no danger in hanging to it until callee returns.
It definitely takes some getting used to, but there are absolutely times when you want to move ownership into a called function, and extending the value's lifetime beyond that call would be wrong.
An example would be if it represents something you can only do once, e.g. deleting a file. Once you've done it, you don't want to be able to do it again.
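A rough sketch of that "can only do it once" idea, with a hypothetical PendingDeletion type and file name:

struct PendingDeletion { path: std::path::PathBuf }

// Takes the value by move, so the same deletion can't be performed twice.
fn delete(d: PendingDeletion) -> std::io::Result<()> {
    std::fs::remove_file(&d.path)
}

fn main() -> std::io::Result<()> {
    let d = PendingDeletion { path: "scratch.txt".into() }; // hypothetical file
    delete(d)?;
    // delete(d)?;               // would not compile: `d` was already moved
    Ok(())
}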
> Could owner be something else than a stack frame?
Yes. There are lots of ways an object might be owned:
- a local variable on the stack
- a field of a struct or a tuple (which might itself be owned on the stack, or nested in yet another struct, or one of the other options below)
- a heap-allocating container, most commonly basic data structures like Vec or HashMap, but also including things like Box (std::unique_ptr in C++), Arc (std::shared_ptr), and channels
- a static variable -- note that in Rust these are always const-initialized and never destroyed
I'm sure there are others I'm not thinking of.
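A quick illustration of a few of the owners from that list (the Config type is made up for illustration):

struct Config { name: String }                      // a struct field owns the String

fn main() {
    let on_stack = String::from("local");           // owned by a local variable
    let cfg = Config { name: String::from("app") }; // owned by a field of `cfg`
    let in_vec = vec![String::from("element")];     // owned by the Vec's heap buffer
    let boxed = Box::new(String::from("heap"));     // owned by the Box
    println!("{} {} {} {}", on_stack, cfg.name, in_vec[0], boxed);
} // each owner is dropped here, releasing whatever it owns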
> Why would a stack frame want to move ownership to its callee, when by the nature of LIFO the callee stack will always be destroyed first
Here are some example situations where you'd "pass by value" in Rust:
- You might be dealing with "Copy" types like integers and bools, where (just like in C or C++ or Go) values are easier to work with in a lot of common cases.
- You might be inserting something into a container that will own it. Maybe the callee gets a reference to that longer-lived container in one of its other arguments, or maybe the callee is a method on a struct type that includes a container.
- You might pass ownership to another thread. For example, the main() loop in my program could listen on a socket, and for each of the connections it gets, it might spawn a worker thread to own the connection and handle it. (Using async and "tasks" is pretty much the same from an ownership perspective.)
- You might be dealing with a type that uses ownership to represent something besides just memory. For example, owning a MutexGuard gives you the ability to unlock the Mutex by dropping the guard. Passing a MutexGuard by value tells the callee "I have taken this lock, but now you're responsible for releasing it." Sometimes people also use non-Copy enums to represent fancy state machines that you have to pass around by value, to guarantee whatever property they care about about the state transitions.
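For instance, the "pass ownership to another thread" case from the list above might look roughly like this:

use std::thread;

fn main() {
    let data = vec![1, 2, 3];
    // `move` transfers ownership of `data` into the spawned thread,
    // so the data can safely outlive this stack frame.
    let handle = thread::spawn(move || {
        println!("worker owns {:?}", data);
    });
    // println!("{:?}", data);   // would not compile: `data` was moved
    handle.join().unwrap();
}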
> Why would a stack frame want to move ownership to its callee
Happens all the time in modern programming:
callee(foo_string + "abc")
The argument expression foo_string + "abc" constructs a new string. That string is not captured in any variable here; it is passed to the callee. Only the callee knows about it.
This situation can expose bugs in a runtime's GC system. If callee is something written in a low-level language that is responsible for indicating "nailed" objects to the garbage collector, and it forgets to nail the argument object, the GC can prematurely collect it, because nothing else in the image knows about that object: only the callee. The bug won't surface in situations like callee(foo_string), where the caller still has a reference to foo_string (at least if that variable is live, i.e. has a next use).
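In Rust the same shape of call is simply a move; a rough sketch:

fn callee(s: String) {
    println!("callee now owns: {}", s);
} // the temporary String is dropped here, when its owner goes out of scope

fn main() {
    let foo_string = String::from("foo");
    // `foo_string + "abc"` consumes `foo_string` and builds a new String,
    // and ownership of that temporary passes straight into `callee`.
    callee(foo_string + "abc");
}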
The owned memory may be on a stack frame or it may be heap memory. It could even be in the memory mapped binary.
> Why would a stack frame want to move ownership to its callee
Because it wants to hand full responsibility to some other part of the program. Let's say you have allocated some memory on the heap and handed a reference to a callee, and then the callee returned to you. Did it free the memory? Did it hand the reference to another thread? Did it hand the reference to a library where you have no access to the code? The answers to those questions determine whether you are safe to continue using the reference you have, including, but not limited to, whether you are safe to free the memory.
If you hand ownership to the callee, you simply don't care about any of that, because you can't use your reference to the object after the callee returns, and the compiler enforces that. Now the callee could, in theory, give you back ownership of the same memory, but if it does, you know that it didn't destroy that data etc., otherwise it couldn't give it back to you. And, again, the compiler is enforcing all of that.
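A tiny sketch of that "the callee may give ownership back" pattern, with a made-up maybe_keep function:

// Takes ownership; may or may not hand it back to the caller.
fn maybe_keep(s: String) -> Option<String> {
    if s.len() > 3 {
        None            // callee kept (and dropped) the String
    } else {
        Some(s)         // ownership returned to the caller
    }
}

fn main() {
    if let Some(s) = maybe_keep(String::from("hi")) {
        println!("got it back: {}", s);
    }
}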
> Why can a mutable reference be handed out only once?
Let's say you have 2 references to arrays of some type T and you want to copy from one array to the other. Will it do what you expect? It probably will if they are distinct, but what if they overlap? memcpy has this issue and "solves" it by making overlapping copies undefined. With a single-mutable-reference system, that scenario can't arise: if there were 2 overlapping references, you couldn't write through either of them. And if you can write through one, then the other has to be a reference (mutable or not) to some other object.
There are also optimisation opportunities if you know 2 objects are distinct. That's why C added the restrict keyword.
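A sketch of how that shows up in safe Rust: copy_from_slice can assume the source and destination are distinct, because with one exclusive reference the overlapping case can't even be expressed.

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];
    // The exclusive borrow of `dst` guarantees `src` can't alias it,
    // so the copy can never overlap.
    dst.copy_from_slice(&src);
    println!("{:?}", dst);

    // Within a single array, you must first split it into two
    // non-overlapping mutable halves before copying between them.
    let mut buf = [1u8, 2, 3, 4];
    let (a, b) = buf.split_at_mut(2);
    a.copy_from_slice(b);
    println!("{:?}", buf);      // [3, 4, 3, 4]
}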
> If I'm only using a single thread
If you're just knocking up small scripts or whatever, then a lot of this is overkill. But if you're writing libraries, large applications, multi-dev systems etc., then you may be single-threaded now, but who's confirming that for every piece of the system at all times? People are generally really rubbish at that sort of long-range thinking. That's where these more automated approaches shine.
> hide information...Why, from whom?
The main reason is that you want to expose a specific contract to the rest of the system. It may be, for example, that you have to maintain invariants, e.g. double-entry book-keeping, or that the sides of a square are the same length. Alternatively, you may want to specify a high-level algorithm, e.g. matrix inversion, but want it to work for lots of varieties of matrix implementation, e.g. sparse or square. In these cases, you want your consumers to be able to use your objects, with a standard interface, without knowing, or caring, about the details. In other words, you're hiding the implementation detail behind the interface.
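For the "sides of a square" example, that hiding might look roughly like this in Rust (the Square type and its methods are made up for illustration):

mod shapes {
    pub struct Square {
        side: f64, // private: callers can't set width and height independently
    }

    impl Square {
        pub fn new(side: f64) -> Self {
            Square { side }
        }

        pub fn area(&self) -> f64 {
            self.side * self.side
        }
    }
}

fn main() {
    let sq = shapes::Square::new(3.0);
    println!("{}", sq.area());
    // sq.side = 5.0;   // would not compile: the field is private, so the
    //                  // "all sides are equal" invariant can't be broken from outside
}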
Yes, it’s part of the process of data augmentation, which is commonly used to avoid classifying on irrelevant aspects of the image like overall brightness or relative orientation.