I read the title and thought: ha, somebody had the same idea. But no.
Instead of a git training video, I built a platform that creates command-line training videos from markdown, merging output from VHS, generated speech, separation slides, etc.
Instead of a CRM, I started building a Lotus Agenda clone that can be used to build a CRM.
The first one is the classic. Don't know about the second one.
Power dynamics have been extensively investigated by the "Johnstone school" of improv, because humans are interested in power dynamics (mostly preconsciously, i.e. we usually are not conscious of it but can become so) -- especially in situations where the power balance is shifting. This is the key if you want to improvise scenes that feel realistic and capture the audience's attention.
To really understand it, I would recommend taking some improv classes that are based on Johnstone's teachings. But the book will give you the idea.
I tried digging around Keith Johnstone, but I could only find theatrical improv which, unless it flies above my head, has very little to do with the workplace dynamics of real jobs. Unless the concept is to insist on the fact that adult life is just a play, and to treat your day as a space of randomness in which to disrupt the established roles?
The book is not only about improvisation theatre, but I don't want to describe the ideas, as that could spoil the experience of reading it for you. I have yet to find another source that describes power dynamics (in life as well) as succinctly as he can.
Anything can be used for good or for bad. Defining how the organization is structured and how it operates is usually not about how people really do their actual work -- unless there are safety or similar regulations that must be met. Many enterprises are in constant chaos, which stresses people out. Adding some structure helps to alleviate that stress. For example, if there is a good template for documenting something, you don't have to start from scratch. Of course, you could also go all in and automate all your "management" in order to avoid talking with your employees. I don't think that will end well.
usm.tools is based on the Unified Service Management (USM) method, which provides the necessary concepts to take the vision one step further. The core idea is similar, however: everything a company does is a service, and services can be defined as data. The surprising finding from USM is that in practice any service can be meaningfully defined through only five types of processes.
As services are data, you can have multiple views on that data. And as all data is in a standardized format, it becomes possible to make generic cross-references between USM and, for example, ISO27K as rules that refer to your data, and those rules can be evaluated. As a result, you can see your ISO27K compliance on a dashboard in real time.
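As a rough illustration of the "rules over service data" idea, here is a minimal sketch in Python; the field names and rule format are invented for the example, not usm.tools' actual schema:

# Minimal sketch: evaluating compliance rules against service definitions.
# The schema and rule format here are hypothetical, not usm.tools' format.
services = [
    {"name": "onboarding", "processes": {"recover": {"documented": True}}},
    {"name": "billing", "processes": {"recover": {"documented": False}}},
]

# A "rule" is just a predicate over a service definition plus a reference
# to the external requirement it maps to (e.g. an ISO27K control).
rules = [
    {
        "ref": "ISO27K A.17 (continuity)",
        "check": lambda s: s["processes"].get("recover", {}).get("documented", False),
    },
]

# Evaluate every rule against every service; this result set is what a
# real-time compliance dashboard would render.
for service in services:
    for rule in rules:
        status = "PASS" if rule["check"](service) else "FAIL"
        print(f'{service["name"]:12} {rule["ref"]:28} {status}')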
Would you be able to share more? I lead a tiny non-profit org doing data literacy mentoring and I've been meaning to move more of our process docs to Logseq. Although I probably don't need a tool of the level of sophistication of usm.tools, I could take inspiration from your core ideas for our homegrown system.
To understand the approach, you need to first understand the method it is based on.
I have written a simple introduction about it that you can download for free from simpleusm.com, no sign-up required.
A simple homegrown system for processes is not that difficult to build. You basically model the USM process model, create templates as instances, copy those as a basis for editing, and build a UI around the editing.
You could even just use JSON files and git, but while the data model is not complex, it is still not simple enough to edit by hand in an editor.
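For example, a minimal sketch of the template-instance idea over JSON files (all field and file names are invented for illustration):

# Sketch of the JSON-files-and-git approach; the field names are made up
# for illustration, not taken from USM or usm.tools.
import copy
import json
from pathlib import Path

def instantiate(template_path: Path, out_path: Path, name: str) -> dict:
    """Copy a process template as the starting point for a new definition."""
    template = json.loads(template_path.read_text())
    instance = copy.deepcopy(template)
    instance["name"] = name
    instance["based_on"] = template_path.name  # trace back to the template
    out_path.write_text(json.dumps(instance, indent=2))
    return instance

# Usage: copy a generic process template for a concrete service, then edit
# the resulting file (by hand or through a small UI) and commit it to git.
# instantiate(Path("templates/agree.json"), Path("services/billing-agree.json"), "billing: agree")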
Then the question is what is the benefit. I would say that just using USM to define your services is helpful.
With this approach you can build various stakeholder views of your services that are always up to date and do not require manual labor.
I have been working on payment systems and it seems that in almost all discussions about transactions, people talk about toy versions of bank transactions that have very little to do with what actually happens.
You don't even need to talk about credit cards to have multiple kinds of accounts (internal bank accounts for payment settlement etc.), multiple involved systems, batch processes, reconciliation etc. Having a single atomic database transaction is not realistic at all.
On the other hand, the toy transaction example might be useful for people to understand basic concepts of transactions.
I don't have a lot of payment experience, but AFAIK actual payment systems work in an append-only fashion, which makes concurrency management easier since you're just adding a new row with (timestamp, from, to, value, currency, status) or something similar. However, how can you efficiently check for overdrafts in this model? You'd have to periodically sum up transactions to find the sender's balance and compare it to a known threshold.
Is this how things are usually done in your business domain?
> how can you efficiently check for overdrafts in this model?
You already laid the groundwork for this to be done efficiently: "actual payment systems work in an append-only fashion"
If you can't alter the past, it's trivial to maintain your rolling sums to compare against. Each new transaction through the system only needs to mutate the source and destination balances of that individual transaction.
If you know everyone's balance as of 10 seconds ago, you don't need to consider any of the 5 million transactions that happened before 10 seconds ago.
(If your system allowed you to alter the past and edit arbitrary transactions in the past, you could never trust your rolling sums, and you'd be back to summing up everything for every operation.)
At the beginning of time, all your accounts will have their starting value.
When the first transaction (from,to,value) happens, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.
On the millionth transaction, you will do one overdraft check, and if it's good, you will do 1 addition and 1 subtraction, and two of the accounts will have a new value.
At no point will you need to do more than one check & one add & one sub per arriving transaction.
(The append-only property is what allows this: the next state is only ever a single, cheap step from the current state. But if someone insists upon mutating history, the current state is no longer valid because it no longer represents the history that led up to it, so it cannot be used to generate the next state -- you need to throw it all away and regenerate the current/next states, starting from 0 and replaying every transaction again.)
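A minimal sketch of that rolling-sum idea in Python (account names and starting values are made up; real payment systems are far more involved, as noted upthread):

# Rolling balances over an append-only ledger.
ledger = []                            # append-only history of accepted transactions
balances = {"alice": 100, "bob": 50}   # cached "balance as of the last event"

def transfer(src: str, dst: str, value: int) -> bool:
    """One overdraft check, one subtraction, one addition per transaction."""
    if balances[src] - value < 0:      # the single overdraft check
        return False                   # rejected; history is untouched
    ledger.append((src, dst, value))   # the past is never edited
    balances[src] -= value             # only two balances are touched,
    balances[dst] += value             # no matter how long the history is
    return True

print(transfer("alice", "bob", 70))    # True
print(transfer("alice", "bob", 70))    # False: would overdraw alice
print(balances)                        # {'alice': 30, 'bob': 120}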
Ok so basically you have a Transactions table as well as a separate Accounts table which stores balances, and every time Alice wishes to pay Bob, a (database) transaction appends an entry to the Transactions table and updates the balance in Accounts only if the sender's balance is OK? Something like an "INSERT INTO…SELECT"?
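For illustration, here is one way that shape can look, sketched with Python's sqlite3; the guarded UPDATE plus ledger INSERT inside one database transaction is a common textbook pattern, not necessarily what any real payment system does:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER NOT NULL);
    CREATE TABLE transactions (src TEXT, dst TEXT, value INTEGER);
    INSERT INTO accounts VALUES ('alice', 100), ('bob', 50);
""")

def transfer(src, dst, value):
    with conn:  # one database transaction: all statements commit or none do
        # Guarded update: only debits if it would not overdraw the sender.
        cur = conn.execute(
            "UPDATE accounts SET balance = balance - ? "
            "WHERE id = ? AND balance >= ?", (value, src, value))
        if cur.rowcount == 0:
            raise ValueError("insufficient funds")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (value, dst))
        conn.execute("INSERT INTO transactions VALUES (?, ?, ?)",
                     (src, dst, value))

transfer("alice", "bob", 70)
print(conn.execute("SELECT * FROM accounts").fetchall())  # [('alice', 30), ('bob', 120)]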
Your bank statement has the event (a deposit or withdrawal) with details, and to one side the bank will say: your balance after this event can be calculated to be $FOO.
The balance isn't a part of the event, it's a calculation based on the (cached) balance known from the previous event.
Further, your bank statements are (typically) for the calendar month, or whatever. They start with the balance brought forward from the previous statement (a snapshot).
> Is this how things are usually done in your business domain?
I don't know about "usually" and I cannot explain details. But many banks are migrating from batch-based mainframes to real-time systems. Maybe that answers your question about "efficiently".
And then they take that toy transaction model and think that they're on ACID when they're not.
Are you stepping out of SQL to write application logic? You probably broke ACID. Begin a transaction, read a value (n), do a calculation (n+1), write it back and commit: the DB cannot see that you did (+1). All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1).
Same problem when running at a reduced isolation level (which you probably are). If you do two reads in your 'transaction', the first read can be at state 1, and the second read can be at state 2.
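A small demonstration of that read-modify-write hazard, using SQLite purely for convenience (the table and values are made up; the same shape applies to any RDBMS when the "+1" happens in application code):

import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
setup = sqlite3.connect(path)
setup.execute("CREATE TABLE counter (id INTEGER PRIMARY KEY, n INTEGER)")
setup.execute("INSERT INTO counter VALUES (1, 5)")
setup.commit()

a = sqlite3.connect(path)
b = sqlite3.connect(path)

# Both sessions read n = 5 and compute n + 1 = 6 in application code.
n_a = a.execute("SELECT n FROM counter WHERE id = 1").fetchone()[0]
n_b = b.execute("SELECT n FROM counter WHERE id = 1").fetchone()[0]

# Each writes back a constant; the DB never sees the intent "+1".
a.execute("UPDATE counter SET n = ? WHERE id = 1", (n_a + 1,)); a.commit()
b.execute("UPDATE counter SET n = ? WHERE id = 1", (n_b + 1,)); b.commit()

# Two increments happened, but the result is 6, not 7: a lost update.
print(setup.execute("SELECT n FROM counter WHERE id = 1").fetchone()[0])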
I think more conversations about the single "fully consistent" DB approach should start with it not being fit for purpose -- even without considering that it can't address soft-modification (which you should recognise as a need immediately whenever someone brings up soft-delete) or two-generals (i.e. consistency with a partner -- you and VISA don't live in the same MySQL instance, do you? Or, to put it bluntly, partitions between your DB and VISA's DB "don't happen often" (they happen always!)).
RE: "All it knows is that you're trying to write a 6. If someone else wrote a 6 or a 7 in the meantime, then your transaction may have 'meant' (+0) or (-1)."
This is not how it works at all. This is called a dirty write, and it is prevented by default in ACID-compliant databases, no matter the isolation level. The second transaction's commit will be rejected by the transaction manager.
Even if you start a transaction from your application, that still does not change this.
Postgres, as an example, is ACID compliant if you want it to be. All those databases that support full serializability use RC by default, which is enough to prevent dirty writes, and that was my original point.
I have no problem with ACID the concept. It's a great ideal to strive towards. I'm sure your favourite RDBMS does a fine job of it. If you send it a single SQL string, it will probably behave well no matter how many other callers are sending it SQL strings (as long as the statements are grouped appropriately with BEGIN/COMMIT).
I'm just pointing out two ways in which you can make your system non-ACID.
1) Leave it on the default isolation level (READ_COMMITTED):
You have ten accounts, which sum to $100. You know your code cannot create or destroy money, only move it around. If no other thread is currently moving money, you will always see it sum to $100. However, if another thread moves money (e.g. from account 9 to account 1) while your summation is in progress, you will undercount the money. Perfectly legal in READ_COMMITTED. You made a clean read of account 1, kept going, and by the time you reach account 9, you READ_ what the other thread _COMMITTED. Nothing dirty about it, you under-reported money for no other reason than your transactions being less-than-Isolated. You can then take that SUM and cleanly write it elsewhere. Not dirty, just wrong.
2) Use an ORM like LINQ. (Assume FULL ISOLATION - even though you probably don't have it)
If you were to withdraw money from the largest account, split it into two parts, and deposit it into two random accounts, you could do it ACID-compliantly with this SQL snippet:
SELECT @bigBalance = Max(Balance) FROM MyAccounts
SELECT @part1 = @bigBalance / 2;
SELECT @part2 = @bigBalance - @part1;
..
-- Only showing one of the deposits for brevity
UPDATE MyAccounts
SET Balance = Balance + @part1
WHERE Id IN (
SELECT TOP 1 Id
FROM MyAccounts
ORDER BY NewId()
);
Under a single thread it will preserve money. Under multiple threads it will preserve money (as long as BEGIN and COMMIT are included ofc.). Perfectly ACID. But who wants to write SQL? Here's a snippet from the equivalent C#/EF/LINQ program:
// Split the balance in two
var onePart = maxAccount.Balance / 2;
var otherPart = maxAccount.Balance - onePart;
// Move one half
maxAccount.Balance -= onePart;
recipient1.Balance += onePart;
// Move the other half
maxAccount.Balance -= otherPart;
recipient2.Balance += otherPart;
Now the RDBMS couldn't manage this transactionally even if it wanted to. By the final lines, 'otherPart' is no longer "half of the balance of the biggest account", it's a number like 1144 or 1845. The RDBMS thinks it's just writing a constant and can't connect it back to its READ site:
info: 1/31/2026 17:30:57.906 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command)
Executed DbCommand (7ms) [Parameters=[@p1='a49f1b75-4510-4375-35f5-08de60e61cdd', @p0='1845'], CommandType='Text', CommandTimeout='30']
SET NOCOUNT ON;
UPDATE [MyAccounts] SET [Balance] = @p0
WHERE [Id] = @p1;
SELECT @@ROWCOUNT;
Regarding example 1): let's be clear about what we are doing.
If you are running in RC isolation and perform a SELECT SUM() FROM table, you are reading values committed by other threads BEFORE the SELECT statement began; you are not getting other threads' values committed during the SELECT, and you are not breaking ACID.
If you are suggesting that running a simple BEGIN; SELECT SUM() FROM table; COMMIT breaks ACID at the default RC level, you are wrong, and had best avoid commenting on isolation levels in RDBMSs online, so as not to confuse people further.
If you are, however, suggesting that we are breaking ACID if we do app-side stupidity such as:
value1 = BEGIN; SELECT value FROM table WHERE id = 1; COMMIT
value2 = ......
sum = value1 + value2 + .... + value10
Then yes, obviously it's not ACID, but nobody in their right mind should be doing that. Even juniors quickly learn that this is incorrect code.
If you are suggesting we do repeatable reads in RC, then yes, it's obviously not ACID, but your example does not mention repeated summations, only a single one.
The point is to show people who don't realise it that they have been dealing with eventual consistency all along -- that it's right there, in their lives, and they already understand it.
You're right, I go into too much detail (maybe I got carried away with the HN audience :-) and you are right that multiple accounts are something people generally already understand, and that they demonstrate further eventual-consistency principles.
I wasn't criticizing you, just making the point that when people talk about toy-example bank transactions, they usually just want to introduce a basic understanding. And I think that's OK, but I would prefer that they also mention that the real operations are complex.
I modified my comment above: by multiple types of accounts I meant that banks have various accounts for settlements with other banks etc., even in the common payment case.
Most Django projects just need a basic way to execute timed and background tasks. Celery requires separate containers or nodes, which complicates things unnecessarily. Luckily, Django 6.0 has a tasks framework -- which is backported to earlier Django versions as well -- that can use the database. https://docs.djangoproject.com/en/6.0/topics/tasks/
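A minimal sketch of what using it looks like, based on the linked docs (the task body is hypothetical; the built-in ImmediateBackend runs tasks inline, and database-backed execution needs a separate backend package):

# settings.py -- a sketch based on the Django 6.0 tasks docs
TASKS = {
    "default": {
        "BACKEND": "django.tasks.backends.immediate.ImmediateBackend",
    }
}

# tasks.py
from django.tasks import task

@task()
def send_reminder(user_id: int) -> None:
    # hypothetical task body, for illustration only
    print(f"reminding user {user_id}")

# anywhere in your app: enqueue instead of calling directly
result = send_reminder.enqueue(user_id=42)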
Django 6's tasks framework is nice, but so far it is only an API. It does not include an actual worker implementation. There is a django-tasks package which provides a basic implementation, but it is not prod-ready. I tried it and it is very unreliable. Hopefully the community will come out with backends that plug in Celery, Oban, RQ, etc.
Could you say a bit more about "it is very unreliable"? I'm considering using django-tasks with an rq backend [1] and would like to hear about your experiences. Did you find it dropping tasks, difficult to operate, etc.?
The biggest beef I have with microservice architectures is the lack of transactions across service boundaries. We say that such systems-of-systems are "eventually consistent", but they are actually never guaranteed to be in a consistent state -- i.e. they are always inconsistent. That pushes the responsibility for consistency onto the system that needs to use the data, making those systems either extremely complex to implement or, more typically, ones that ignore the problem and introduce distributed timing bugs that are difficult to find in testing. The benefits of microservices are offset by losing the ability to build on database functionality to make your systems robust.
It definitely seems like the cost of losing transactional integrity is massively underappreciated. I think there has been a progression of thinking over the last couple of decades that went from laying everything at the feet of the god of transactional integrity, to "planet scale" services proving that such designs cannot scale and hence promulgating distributed solutions, to everyone else cargo-culting the idea that transactions never mattered in the first place -- and now even a simple app that never needed to scale in the first place is being carved up into 5 microservices, and half the design complexity is dealing with regaining the lost transactional integrity that we had in the first place.
The biggest beef I currently have with microservice architectures is that they are more annoying to work with when working with LLMs. Ultimately that is probably the biggest limiting factor for microservices in 2026. The tooling for multi-repo setups is there (I've been using RepoPrompt for this to really good effect), but fundamentally LLMs in their default state, without a purpose-designed tool like this, suck at microservices compared to a monorepo.
You could also turn around and say that it's a good context boundary for the LLM, which is true, but then you're back at the same problem microservices have always had: they push the integration work onto another team so that developers can make it Not Their Problem. Which is, honestly, just a restatement of the exact thing you just said framed in a different way.
I think your statement can also be used against event-driven architecture -- having this massive event bus that controls all the levers of your distributed system always sounds great in theory, but in practice you end up with almost exactly the same problem you just described, because the tooling for offering those integration guarantees is just not nearly as robust as a centralized database.
I have found mostly the opposite, but partly the same. With the right tooling, LLMs are IMO much better in microservice architectures. If you regularly need to do multi-repo PRs or share code between repos as they work, to me that is a sign that you weren't really "doing microservices" before adding LLMs to your project, because there should be some kind of API surface that you can share with LLMs in other repos, and cross-service changes should generally not be done by the same agent.
Even if the same dev is driving the work, it's like having a junior engineer do a cross-service staggered release and letting them skip the well-defined existing API surfaces. The entire point of microservices is that you are making that hard / introducing friction there on purpose, so things can be released and developed separately. IMO it has an easy solution too: just direct one agent per repo/service, the way you would if you really did need to make that kind of change anyway and wanted to do it through junior developers.
> they push the integration work onto another team so that developers can make it Not Their Problem
I mean yes and no, this is oftentimes completely intended from the perspective of the people making the decision to do microservices. It's a way to constrain the way people develop and coordinate with each other precisely because you don't want all 50 of your developers running amok on the entire codebase (especially when they don't know how or why that code was structured some way originally, and they aren't very skilled or conscientious in integrating things maintainably or testing existing behavior).
> so that developers can make it Not Their Problem
IMO this is partially orthogonal to the problem. Microservices don't necessarily mean you can't modify another team's code. IMO that is a generally pretty counterproductive mindset for engineering teams, where the codebase is jealously guarded like that. It just means you might need to send another team a PR or coordinate with them first rather than making the change unilaterally. Or maybe you just want to release things separately; lately I find myself wanting that more and more, because past a certain size agents just turn repos into balls of mud or start reimplementing things.
This is never going to be the case; if you're finding it, there's something really weird/wrong going on. Even with OpenAPI defs, if you're asking an agent to reason across service boundaries, they have to do language translation on the fly in the generation, which is going to degrade attention 100%, plus LLMs are just worse at reasoning with OpenAPI specs than with language types. You also no longer have a unified stack; instead the agent has to stitch together the stack and logs from a remote service.
If your agent is reasoning across service boundaries you should be giving it whatever you'd normally use when you reason across service boundaries, whether that's an openapi spec or documentation or a client library or anything else. I don't see it as any different than a human reasoning across service boundaries. If it's too hard for your human to do that, or there isn't any actual structured/reusable way for human developers to do that, that's more a problem with how you're doing microservices/developing in general.
> they have to do language translation on the fly in the generation, which is going to degrade attention 100%,
I'm not completely sure what you're alluding to, but if you don't have an existing client for your target service, developers are going to have to do that anyway, because they're serializing data to call one microservice from another. The only exception would be if you started calling the other application's code directly from your own, in which case again you're doing microservices wrong or shouldn't be doing microservices at all (or a lead engineer / other developers deliberately wanted to prevent you from directly integrating those two applications outside of the API layer, and it's WAI).
None of these seem like "microservices are bad for agents" problems to me, just "what I'm doing was already not a good fit for microservices/I should just not do microservices anymore". Forcing integration against service boundaries that are independently built/managed is almost the entire point as far as I'm concerned
Think of it like this. If you're multilingual but I ask you a hard question with sections in different languages, it's still going to tax you to solve the problem over having the question be asked in one language.
If you codegen client wrappers from your specs, that can help, but if something doesn't work predictably, the indirection makes debugging harder (both from a "cognitive" standpoint and from the inability to directly debug a unified system).
I prefer FaaS + shared libraries over microservices when I have to part things out, because it gives you the independence and isolation of microservices, but you're still sharing code across teams and working with a unified stack.
It is not unheard of to encounter a situation in enterprises where microservice architecture has been "too successful".
There may be as many as 500 microservices, and their number is growing rapidly. The situation might no longer be under control; sometimes even the responsibility for maintaining them is unclear or "shared". It is easier to implement a new microservice than to track down who could implement something in an existing one.
I have encountered this problem several times, so I started a side project to bring such situations under control. It is still alpha, but the first part -- scoping the problem -- is already pretty useful, allowing you to select, visualize, tag etc. your microservices.
I have built many projects in hours that would reasonably have taken me a month, counting the time to research technology I did not know beforehand. 30 minutes is often enough to build a first version of a project. For example: an audiobook listener app, a winter-swimming iPhone/Apple Watch app combination, and a markdown editor for OS X in Swift.
I have also added complex features to existing projects in 30 minutes, but I don't remember any that would themselves have taken me months, though.
Has somebody written an analysis of why Qt really sucks? It would be great to have a spec for a GOOD cross-platform (desktop) UI framework. It might also be possible to create a reference implementation of that spec on top of Qt.