Nailed it. I think about prescriptivism / descriptivism in terms of these archetypes:
- "Rule followers" think an org will be better off if everyone agrees on a set of rules to follow. At the boundaries, they will think about establishing new rules to clarify and codify new things. Charitably, I'd add that they might also remove obsolete rules, but we all know this doesn't happen nearly enough in practice: governments, for example, are much more likely to add new rules than to remove old ones.
- "Rule breakers" think that most rules are suggestions. At the boundaries, they will see rules other people are needlessly bound by, and translate those into strategic openings for whatever game they're playing. For better and for worse, start-up ecosystems are full of people like this.
Rule followers want to be told what's allowed, while rule breakers try to figure out what _should_ be allowed from first principles. At the extreme, they tug the world towards authoritarianism or towards anarchy.
This is obviously a spectrum, so everyone has both of these archetypes in them, albeit in different proportions (e.g. most people pay taxes, but almost no one drives the speed limit).
The main problem here is that real people operate in fuzzy domains. Snapping them into place "with code" won't magically resolve the gray areas inherent to the most valuable real workflows.
Think about the prized "high-agency worker." What makes them desirable is the willingness and ability to make well-informed, unilateral decisions on matters that are likely not yet organizationally codified, or are codified in a way that is "wrong" for the task at hand.
Also, the reason Terraform works is that it is _operational_. As in, it's actual code that runs. If it were mere documentation, it would drift like nobody's business. To make "organizational code" operational, you would need enforcement (a compliance team?) manually keeping the documentation in sync with reality in all of the meat and thought spaces where real work happens.
The only place this can plausibly be automated is in digital spaces. In fact, I'm surprised the article doesn't go there: "organizational code" starts feeling far more plausible as a definition for AI agents than for real people, specifically because agents operate in digital spaces, where enforcement can be automated.
False; Wolfram has been circling the topic of "small yet mighty" rule-based systems for decades, and this is his writing style. If you don't like the topic or the style, you are welcome to move on from it with whatever grace you can muster.
Instead, the approach that will continue increasing in dominance is hiring referrals and finding jobs through personal networks.
In a world that increasingly resembles The Library of Babel,
- the main way to know what's true is to tune into news sources you trust (monolithic old school media, or personality driven new-school media, social media, etc.),
- the main way to learn what to watch/listen to/read is to take recommendations from people you trust, or through channels you trust,
- the main way to hire or get hired is, increasingly, by exploiting a network of people you trust.
All of this compensates for ambient oversaturation by using the best available (and tunable!) desaturation filter: your trust network.
Social affinity and reputation represent winning strategies that have served humans very well since the dawn of time. It shouldn't be surprising that they continue to be extremely effective even (or perhaps especially) in the age of AI.
Nepotism persists because it answers the question "what is the point of doing all this?" - namely, passing things on to family.
It also enables a degree of aligned interests between parties that could otherwise be hard to align (trust, like you mention), but that's not why someone gets a big-name acting slot, or gets put on the board of a friend's company.
Nepotism entangles organizational interests with personal interests, in both good and bad ways. It means that someone may hire a friend or family member because a) they know they're competent enough for the job, and b) they actually, personally know them, which significantly reduces the risk of the hire turning out badly, relative to a stranger with equal or better credentials. But it also means that someone may hire a friend or family member because they're trading favors, which is bad for the organization[0].
I suppose in practice the latter might be more common - I'd guess the whole thing has structural dynamics similar to "the market for lemons". I haven't spent much time thinking about or researching the problem in depth, so I can't say.
--
[0] - And may or may not be bad for the local community. I suppose the larger problem for organizations is simply that they're designed to be focused, and need to maintain alignment of incentives across the org chart. Nepotism is a threat because it attaches new edges to the org chart - edges that lead to much more complex and fuzzy graphs of family and community relationships, breaking the narrow focus that makes organizations work.
>that have served humans very well since the dawn of time.
Except none of this scales in the modern world beyond flat, small orgs in homogeneous, high-trust cultures - basically modern tribes.
If you're a large org with diverse people from everywhere and you empower everyone down the ladder to hire the people they trust, they'll just end up gaming the system or hiring their friends and family, and the org fails from nepotism, corruption, and cronyism.
We have plenty of examples of this happening everywhere in the world, which is why most places have official hiring policies against this behavior, or policies that obfuscate connections from the hiring pipeline to make sure people get in exclusively on merit.
It's also why socialism is only financially viable in small, homogeneous communities (like the Amish, for example), where everyone adheres to the social contract of contributing more to society than they take out, and is kept accountable by the in-group to be honest. It fails at the nation level, where everyone, including the government in charge of managing it, tries to defraud it or game the system in their favor, taking out more than they contribute, leading to constant budget deficits and ultimately collapse (see EU state pension systems).
But yes, fully eliminating nepotism and cronyism via rules and laws is nearly impossible due to humans' own-group bias, so networking will always be a huge asset.
Although I might know a solution; hear me out. I have fond memories of being part of an amazing private torrent tracker back in the day that was 100% invite-only. The way the community was kept honest and accountable to the spirit and the rules was that every person was responsible for the people they invited: if an invitee committed a bannable offense, the parent who invited them would also get banned. This meant people were very selective with their invites, biasing toward meritocracy rather than nepotism or selling invites online for cash, which was common back then. Feels like something that could scale IRL as well. You hire your friend who turns out to be a shit employee, and you're out the door along with him.
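The scheme above boils down to a tiny data structure. Here's a purely illustrative sketch; the class name and the one-level-up ban cascade are my assumptions based on the description, not any real tracker's implementation:

```python
# Illustrative sketch of the invite-tree accountability rule: every
# member records who invited them, and a bannable offense also takes
# out the offender's inviter (one level up, per the description above).

class InviteTree:
    def __init__(self):
        self.inviter = {}    # invitee -> the member who invited them
        self.banned = set()

    def invite(self, inviter, invitee):
        if inviter in self.banned:
            raise ValueError("banned members cannot invite")
        self.inviter[invitee] = inviter

    def ban(self, member):
        """Ban a member for an offense; their inviter is banned too."""
        self.banned.add(member)
        parent = self.inviter.get(member)
        if parent is not None:
            self.banned.add(parent)
```

The incentive comes from the cascade: every invite puts the inviter's own membership on the line, so the expected cost of vouching for someone untrustworthy outweighs whatever favor the invite buys.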
Unfortunately, these have been bought up by billionaires who use them as playthings to get richer.
>from people you trust,
In one particular area where they understand what is going on. I have lawyers I would trust with my life on legal matters, but should not be trusted around any digital device.
>by exploiting a network of people you trust.
agreed, but sucks for people that don't have that.
Do LLMs arrive at these replies organically? Is it baked into the corpus, emerging naturally? Or are these artifacts of the companies' internal prompting?
People like being told they are right, and when a response contains that formulation, on average, given the choice, people will pick it more often than a response that doesn't, and the LLM adapts to that preference signal.
Efficient markets route around bottlenecks. Technological revolutions accelerate the speed at which that re-routing happens.
In software, we, the developers, have increasingly been a bottleneck. The world needs WAY more software than we can economically provide, and at long last a technology has arrived that will help route around us for the benefit of humanity.
Here's an excellent Casey Handmer quote from a recent Dwarkesh episode:
> One way to think about the industrial revolutions is [...] what you're doing is you're finding some way of bypassing a constraint or bypassing a bottleneck. The bottleneck prior to what we call the Industrial Revolution was metabolism. How much oats can a human or a horse physically digest and then convert into useful mechanical output for their peasant overlord or whatever? Nowadays we would giggle to think that the amount of food we produce is meaningful in the context of the economic power of a particular country. Because 99% of the energy that we consume routes around our guts, through the gas tanks of our cars and through our aircraft and in our grids and stuff like that.
> Right now, the AI revolution is about routing around cognitive constraints, that in some ways writing, the printing press, computers, the Internet have already allowed us to do to some extent. A credit card is a good example of something that routes around a cognitive constraint of building a network of trust. It's a centralized trust.
> In software, we, the developers, have increasingly been a bottleneck. The world needs WAY more software than we can economically provide, and at long last a technology has arrived that will help route around us for the benefit of humanity.
Everything you wrote here is directly contradicted by casual observation of reality.
Developers aren't a bottleneck. If they were, we wouldn't be in a historic period of layoffs. And before you say that AI is causing the layoffs -- it's not. They started before AI was widely used for production, and they're also being done at companies that aren't heavily using AI anyway. They're a result of massive over-hiring during periods of low interest rates.
Beyond that, who is demanding software developers? The things that make our lives better (like digital forms at the doctor's office) aren't complex software.
The majority of the demand is from enshittification companies making our lives worse with ads and surveillance. Companies may be demanding developers, but individual humans certainly aren't.
Yes, the layoffs are a market correction initiated by non-AI factors, such as the end of the ZIRP era.
The world is chock-full of important, society-scale problems that have been out of reach because the economics have made them costly to work on and therefore risky to invest in. Lowering the cost of software development de-risks investment and increases the total pool of profitable (or potentially profitable) projects.
The companies that will work on those new problems are being conceived or born right now, and [collectively] they'll need lots of AI-native software devs.
> important, society-scale problems that have been out of reach because the economics have made them costly to work on and therefore risky to invest in
What are examples of these projects and how will AI put them back into reach of investment?