I take your point about multitasking operating systems not being limited to making asynchronous I/O friendlier, but I do think a deeper consideration of Multics is coincidentally appropriate here.
The telephone and electrical power networks were vast in scope (and still are), enabling interstate communication and power utilities, echoes of the transportation utility enabled by the railroads. Multics was architected partly with the commercial goal of scaling up with users: a computing utility. But in an era of especially expensive memory, a large, always-resident kernel was a lot of overhead. The hardware needed a lot of memory and had to contend with communication networks whose latency could not be specified at OS design time. Ergo, asynchronous I/O was key.
Put differently, Multics bet that computing hardware would continue to be expensive enough to be centralized, thereby requiring a CPU to contend with time-sharing across various communication channels. The CPU would be used for compute and scheduling.
Unix relaxed the hardware requirements significantly at the cost of programmer complexity. This coincided roughly with falling hardware costs, favoring compute (in broad strokes) over scheduling duties: the OS should get out of the way as much as possible.
After a series of failed grand hardware experiments in the 1980s, Intel rose to dominance with a relatively straightforward CPU design. Ideas from designs like the Connection Machine were distilled into out-of-order execution, in effect a hardware runtime system that extracts parallelism by reordering instructions while contending with the variable latency of the memory subsystem. That limited asynchronous execution stayed mostly hidden from the programmer until more recently, with HeartBleed.
Modern SoCs encompass many small cores, each running a process or maybe an RTOS, along with multiple CPU cores, many GPU cores, SIMD engines, signal-processing engines, NPU cores, storage engines, etc. A special compute engine for all seasons, ready to be configured and scheduled by the CPU's OS, but whose asynchronous nature (a scheduling construct!) is no longer hidden from the programmer.
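To make that exposed asynchrony concrete, here is a minimal C sketch of the pattern most accelerator runtimes share: the CPU builds work descriptors, submits them to an engine's queue, and later waits on completions. The npu_* API is invented for illustration (and stubbed out so the sketch compiles); it is not any real vendor SDK.

    /* Hypothetical accelerator-queue API, invented for illustration:
       the shape is common to GPU/NPU/DMA runtimes, but npu_* is not a
       real SDK. The CPU builds descriptors, submits them, and awaits
       completions -- scheduling, not computing. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint64_t src, dst, len; } npu_desc;  /* work descriptor */
    typedef struct { int id; } npu_fence;                 /* completion token */

    /* Stubs so the sketch compiles; a real runtime would ring a
       doorbell register and sleep on a completion interrupt. */
    static int next_id;
    static npu_fence npu_submit(const npu_desc *d) { (void)d; return (npu_fence){ next_id++ }; }
    static void npu_wait(npu_fence f) { (void)f; }

    int main(void)
    {
        npu_desc jobs[4];
        npu_fence fences[4];

        /* The CPU configures and schedules: queue four transfers back to back. */
        for (int i = 0; i < 4; i++) {
            jobs[i] = (npu_desc){ .src = 0x1000u * i, .dst = 0x8000u * i, .len = 4096 };
            fences[i] = npu_submit(&jobs[i]);  /* engine runs asynchronously */
        }

        /* ...the CPU is free to do unrelated work while the engine runs... */

        /* The asynchrony is now the programmer's problem: completions
           must be awaited explicitly. */
        for (int i = 0; i < 4; i++)
            npu_wait(fences[i]);

        puts("all engine work complete");
        return 0;
    }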
I think the article reflects how, even on a single computer, the duty of the CPU (and therefore the OS) has in some cases tilted towards scheduling over compute. And that's before even considering cloud providers, the spiritual realization of a centralized computing utility.
These are good points. I hadn't thought about the perspective that the central processor in a heterogeneous multicore system may spend a lot of its time orchestrating rather than computing—whether it's a GE 635 with its I/O controllers https://bitsavers.org/pdf/ge/GE-6xx/CPB-371A_GE-635_System_M..., an IBM 360 with its "channels" https://en.wikipedia.org/wiki/IBM_System/360_architecture#In..., or a SoC with DSP cores and DMA peripherals—but it's obviously true now that you say it. I've seen a number of SoCs like the S1 MP3 player and some DVD players where the "central processor" is something like a Z80 or 8051 core, many orders of magnitude less capable than the computational payload.
(One quibble: I think when you said "HeartBleed" you meant Meltdown and Spectre.)
I think there have always been significant workloads that mostly came down to routing data between peripherals, lightly processed if at all. Linux's first great success domains in the 90s were basically routing packets into PPP over banks of modems and running Apache to copy data between a disk and a network card. I don't think that's either novel or an especially actors-related thing.
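As a minimal sketch of that copy-data-between-peripherals workload: with Linux's sendfile(2), the CPU's role in serving a file over a socket is almost pure orchestration, telling the kernel what to move where while the data path stays out of userspace. (Error handling is trimmed, and client_fd is assumed to be an already-accepted TCP connection.)

    /* Serving a file over a socket, 90s-Apache style: sendfile(2)
       asks the kernel to move bytes from the file descriptor to the
       socket directly, so the CPU mostly schedules rather than computes. */
    #include <fcntl.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int serve_file(int client_fd, const char *path)
    {
        int file_fd = open(path, O_RDONLY);
        if (file_fd < 0)
            return -1;

        struct stat st;
        if (fstat(file_fd, &st) < 0) {
            close(file_fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            /* The kernel copies (or DMAs) the next chunk; the CPU only
               decides what moves where, and when. */
            ssize_t sent = sendfile(client_fd, file_fd, &offset,
                                    (size_t)(st.st_size - offset));
            if (sent <= 0)
                break;
        }

        close(file_fd);
        return offset == st.st_size ? 0 : -1;
    }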
To the extent that a computational workload doesn't have a major "scheduling" aspect, it might be a good candidate for taking it off the CPU and putting it into some kind of dedicated logic—either an ASIC or an FPGA. This was harder when the PDP-11 and 6502 were new, but now we're in the age of dark silicon, FPGAs, and trillion-transistor chips.