That goes back to the 1980s, with UCLA Locus, a distributed UNIX-like system. You could launch a process on another machine, even one with a different CPU architecture, and keep its I/O and pipes connected. They even shared file position between tasks across the network. Locus eventually became part of an IBM product.
A big part of the problem is "fork", which is a primitive designed to work on a PDP-11 with very limited memory. The way "fork" originally worked was to swap out the process, and instead of discarding the in-memory copy, duplicate the process table entry for it, making the swapped-out version and the in-memory version separate processes. This copied code, data, and the process header with the file info.
This is a strange way to launch a new process, but it was really easy to implement in early Unix.
Most other systems had some variant on "run" - launch and run the indicated image. That distributes much better.
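As a rough modern Unix analogue of that "run the indicated image" style (not something the systems above had), posix_spawn packages launch-this-image into a single call. A minimal sketch, with error handling kept to the bare minimum:

```c
/* Sketch of spawn-style ("run the indicated image") process creation
 * on a modern Unix, using posix_spawnp(). */
#include <spawn.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

extern char **environ;

int main(void) {
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };

    /* One call: launch the named image directly, with no intermediate
     * duplicate of the parent process. */
    int err = posix_spawnp(&pid, "ls", NULL, NULL, argv, environ);
    if (err != 0) {
        fprintf(stderr, "posix_spawnp failed: %d\n", err);
        return EXIT_FAILURE;
    }

    int status;
    waitpid(pid, &status, 0);
    return 0;
}
```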
I worked at Locus back in the 90s, when this technology was part of the AIX 1.2/1.3 family. The basic architecture allowed for heterogeneous clusters (i386 and i370: PCs and IBM mainframes all running on the same global filesystem). Pretty sure you couldn't migrate processes to a machine with a different architecture, though. It was awesome to be able to "kill -MIGRATE" a long-running make job onto somebody else's idle workstation, or use one of the built-in shell primitives to launch a new job on the fastest machine in the cluster: "fast make -j 10 gcc".
There's also an ergonomic advantage to this style of process creation API: rather than needing separate APIs for manipulating your child process vs. manipulating your own process, fork() lets the same calls serve both roles. You fork(), configure the resulting process from the inside using the ordinary self-manipulation calls, then exec().
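A minimal sketch of that idiom; the file and directory names are just placeholders:

```c
/* fork/configure/exec: the child configures *itself* with the ordinary
 * per-process calls (dup2, chdir, ...) before exec'ing the new image. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: use normal "manipulate my own process" APIs to set up
         * what the new image will inherit. */
        int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); _exit(127); }
        dup2(fd, STDOUT_FILENO);   /* redirect stdout to the file */
        close(fd);
        chdir("/tmp");             /* change working directory */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* only reached if exec fails */
        _exit(127);
    }

    /* Parent: wait for the configured child to finish. */
    int status;
    waitpid(pid, &status, 0);
    return 0;
}
```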
CreateProcess* on Windows is a relative monstrosity of complexity compared to fork/exec.
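For a sense of the contrast, a bare-bones CreateProcessA call looks roughly like this (Windows-only sketch; most of the ten parameters are left as NULL/FALSE defaults, and you still get two out-structures and handles to clean up):

```c
/* Rough sketch of launching a program on Windows with CreateProcessA. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    STARTUPINFOA si = {0};
    PROCESS_INFORMATION pi = {0};
    si.cb = sizeof(si);

    char cmdline[] = "notepad.exe";   /* writable buffer: CreateProcessW may modify it */

    if (!CreateProcessA(NULL, cmdline,
                        NULL, NULL,   /* process/thread security attributes */
                        FALSE,        /* inherit handles */
                        0,            /* creation flags */
                        NULL, NULL,   /* environment, current directory */
                        &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}
```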