The issue with such an OS is that there won't be any libraries and tools. So the OS vendor might need a custom shell, custom utilities, custom libraries for everything ... or they have to offer POSIX-like APIs.
One area where one can see this is IncludeOS - they had their own custom APIs for a long time to leverage their architecture, but are now focused on providing POSIX compatibility ... and that's only for single-purpose unikernel systems. If the OS is also supposed to be general-purpose, this is even more of a problem. (Especially if you also want a graphical desktop ...)
With that attitude you'll never be able to ditch legacy cruft, and there's quite a lot of it in POSIX. I agree with the other poster that a compatibility layer in the form of virtualization and/or emulation should be sufficient, as long as your new OS brings something to the table that's desirable. Shunt the legacy crap off into its own contained environment and build a nice new clean one for new stuff to use.
I still think there are plenty of reasons to support POSIX in many places. As someone who's been running fish for a while, I can appreciate the common ground it provides across the current UNIX ecosystem, but it's not like translating a bash script to fish is impossible.
There should always be a supported standard, but there should be nothing forcing you to it. This is the freedom we need to demand of our OSes.
In the same imaginary world, creating a new POSIX OS that isn't Linux is a giant waste of time just from the gap in drivers alone. If you're going to ditch all that hardware support then you may as well ditch POSIX and its crap while you're at it.
Bull. People developed software for multiple completely different computer architectures throughout the 80s and 90s. People do it today between different game console platforms in addition to operating systems. Even if you try to target something like SDL or a web browser, your abstraction won't save you from a platform's quirks once you reach a certain level of complexity, and then you'll have to work around it anyway.
Hell, even between supposedly POSIX systems there's a lot of #ifdef going on to make things work.
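And the same divergence exists outside C, too. A contrived Rust sketch (the paths here are made up) of the kind of per-platform branching that accumulates, with cfg attributes playing the role of #ifdef:

    // Contrived example: even "POSIX" targets diverge, so per-OS branches
    // creep in, much like #ifdef in C. The paths are made up.
    #[cfg(target_os = "linux")]
    fn scratch_dir() -> &'static str {
        "/dev/shm" // Linux commonly mounts a tmpfs here
    }

    #[cfg(not(target_os = "linux"))]
    fn scratch_dir() -> &'static str {
        "/tmp" // portable fallback
    }

    fn main() {
        println!("scratch space: {}", scratch_dir());
    }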
My issues with POSIX stem from the fact that writing completely correct code which handles signals, interruptible operating system calls, and threads is hard. There are plenty of little details that are easy to get wrong. And you won't know you've gotten something wrong until much later, when some confluence of events occurs.
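To make one of those details concrete, here's the classic interrupted-syscall retry dance sketched in Rust, where EINTR surfaces as ErrorKind::Interrupted. Forgetting this retry at even one call site is exactly the kind of bug that only shows up much later:

    use std::io::{self, ErrorKind, Read};

    // A read that retries when a signal interrupts the underlying syscall
    // (EINTR surfaces as ErrorKind::Interrupted in Rust). Forgetting this
    // retry in even one call site is the classic latent bug.
    fn read_retrying(r: &mut impl Read, buf: &mut [u8]) -> io::Result<usize> {
        loop {
            match r.read(buf) {
                Err(e) if e.kind() == ErrorKind::Interrupted => continue,
                result => return result,
            }
        }
    }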
I don't know if deprecating parts of POSIX is going to work any better than deprecating parts of C++. If all the bad stuff is still there waiting to be misused...
Or successfully pull off the library OS concept with POSIX as a first-class citizen. z/OS is most of the way there. NT has tried a couple of times: the POSIX subsystem first, and now the Windows Subsystem for Linux, which came out of MS Research's Drawbridge library OS work.
As a user, I personally wouldn't mind a "fresh start" when it comes to userspace. Just look at Haiku -- yes, it's technically not a new design, but it sure ain't Unix.
> yes, it's technically not a new design, but it sure ain't Unix.
Except we are? We are pretty POSIX compliant all the way into the kernel, we have "/dev", filemodes, etc. We don't have X11 or other UNIX staples, sure, but we are pretty UNIXy.
> By "userspace" I was more talking about "the programs and interfaces that a normal user interacts with". Haiku is pretty unique in that regard.
In terms of GUI apps... sorta? We use extended attributes and IPC messaging more than most Linux desktops do, that's true, and our UI/UX is often different.
But if you're talking CLI, then, also no. Bash is the default shell, coreutils are installed by default, sshd is activated by default, etc.
> The issue with such an OS is that there won't be any libraries and tools.
This might not be as big of a deal. Rust increases your productivity quite a bit and I'm really impressed with the pace of progress in the community. I can imagine that new, better & more integrated tools will be made.
My guess is someone will try to build an OS based on containers of WebAssembly apps. There are quite a few APIs that have been built over the years that are familiar to programmers. I do believe this will cut down on the pain of having to develop new system-level tools to manage such a beast.
"Nebulet is a microkernel that executes WebAssembly modules in ring 0 and a single address space to increase performance. This allows for low context-switch overhead, syscalls just being function calls, and exotic optimizations that simply would not be possible on conventional operating systems. The WebAssembly is verified, and due to a trick used to optimize out bounds-checking, unable to even represent the act of writing or reading outside its assigned linear memory."
WebAssembly has a few more advantages when deployed as part of the kernel: you can run things in ring 0. Even better, you can transparently remap things in memory. And better still, you can realize IPC as a simple function pointer. Safely.
Of course, you can't have everything as WebAssembly; some core drivers will need to run critical machine code. But those could be integrated tightly enough that the overhead is almost zero (i.e., by using WASM imports you can reduce this to the cost of a function call).
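For a feel of the "syscalls are just function calls" point, here's a hedged userspace sketch using the wasmtime and anyhow crates (this is not Nebulet's code, and the module/function names are invented): the host binds a function as a WASM import, and the module invokes it as an ordinary call, with no trap or privilege transition:

    // Sketch only: userspace wasmtime + anyhow, not Nebulet's code.
    use wasmtime::{Engine, Linker, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        // A tiny module that imports a host "syscall" and calls it directly.
        let module = Module::new(
            &engine,
            r#"(module
                 (import "os" "log" (func $log (param i32)))
                 (func (export "run") (call $log (i32.const 42))))"#,
        )?;

        let mut linker = Linker::new(&engine);
        // The "syscall" is just a host function bound by name: no trap and
        // no privilege transition, only a checked call across the boundary.
        linker.func_wrap("os", "log", |x: i32| println!("log({x})"))?;

        let mut store = Store::new(&engine, ());
        let instance = linker.instantiate(&mut store, &module)?;
        instance
            .get_typed_func::<(), ()>(&mut store, "run")?
            .call(&mut store, ())?;
        Ok(())
    }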
I agree with the first two points, but I'm not sure how Rust and GPUs are really related yet. I mean, I know you can bind into GL/etc. libs, but there's something more profound about Rust's type system and the parallelism/memory model of a GPU (or CPU/heterogeneous computation in general). AFAIK, there's no way to write GPU shader code that shares the static analysis with the Rust CPU code. It would be very interesting to be able to talk about move semantics across the full modern computing architecture.
If anyone knows work being done in this area I'd be curious to read more personally.
As for a better shell, I also completely agree, but I'm not sure it needs to break POSIX. Shameless little plug, I recently started a shell in Rust myself: https://github.com/nixpulvis/oursh
As a fish user myself, I would love to see a new shell that retains many of the UI features of fish (like the excellent autocompletion behavior while typing) but with an actual usable modern fast scripting language.
POSIX compatibility at the scripting layer is beneficial for being able to run existing shell scripts, but the sh scripting language sucks in many ways.
What I'd really like to have is a shell that supports both a POSIX compatibility mode for running existing scripts, alongside a more powerful and modern scripting language for use in writing scripts targeting the new shell directly. I'm not sure how to identify which mode an arbitrary script should run in though, or which mode should be used at the command line.
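One conceivable heuristic (purely a sketch, not something any existing shell actually does): sniff the first line for an explicit marker and default everything ambiguous, including the interactive command line, to POSIX mode:

    #[derive(Debug, PartialEq)]
    enum Mode {
        Posix,
        Modern,
    }

    // Hypothetical heuristic: an explicit marker on the first line selects
    // the dialect; anything ambiguous runs as POSIX sh for safety.
    fn detect_mode(script: &str) -> Mode {
        match script.lines().next().map(str::trim) {
            Some("#!modern") => Mode::Modern,
            _ => Mode::Posix,
        }
    }

    fn main() {
        assert_eq!(detect_mode("#!/bin/sh\necho hi"), Mode::Posix);
        assert_eq!(detect_mode("#!modern\necho hi"), Mode::Modern);
    }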
Oh great! I hope you take good care in designing the "modern" language, because once people start writing scripts for your shell, it becomes very hard to fix mistakes (this has been a problem for fish's scripting). I wish I had the time to be involved in designing a new shell scripting language, as it's something I'd really like to see done right, I just have no time to spend on that.
Incidentally, the link to the `modern` module is broken; it's just program::modern (which is of course not a valid link). Given that I don't see a `modern` module in the TOC, I'm assuming the module doesn't actually exist yet?
Oh man, a lot of the ideas I've thought about for improving handling of shell scripts have problems in the presence of background jobs, and in particular the ability to background a job that's currently foregrounded.
On a related note, here's something I've been thinking about:
I want to be able to insert shell script functions in the middle of a pipeline without blocking anything. Or more importantly, have two shell functions as two different components of the pipeline. I believe fish handles this by blocking everything on the first function, collecting its complete output, then continuing the pipeline, but that's awful. Solving this means allowing shell functions to run concurrently with each other. But given that global variables are a thing (and global function definitions), we need a way to keep the shell functions from interfering with each other. POSIX shells solve this by literally forking the shell, but that causes other problems, such as the really annoying Bash one where something like
someCommand | while read line; do …; done
can't persist any variable modifications made inside the while loop, because the loop runs in a subshell.
So my thought was: concurrently-executed shell functions can run on a copy of the global variables (and a copy of the global functions list), and when they finish, we can merge any changes they make back. I'm not sure how to deal with conflicts yet, but we could come up with some reasonable definition. Since this all happens in-process, we could literally attach a timestamp to each modification and say last-change-wins, though this does mean a background process could have some changes merged and some discarded, so I'm not sure if this is really the best approach. We could also use the timestamp the job was created, or finished. Or we could give priority to changes made by functions later in a pipeline, so in e.g. `func1 | func2` any changes made by func2 win over changes made by func1.
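To make the merge idea concrete, here's a rough Rust sketch of the copy-and-merge scheme with last-change-wins timestamps (all names are hypothetical, and this is just one of the conflict policies mentioned above):

    use std::collections::HashMap;
    use std::thread;
    use std::time::Instant;

    // Each variable remembers when it was last written, so merges can be
    // resolved last-change-wins (one possible policy among those above).
    type Env = HashMap<String, (String, Instant)>;

    fn set(env: &mut Env, key: &str, val: &str) {
        env.insert(key.to_string(), (val.to_string(), Instant::now()));
    }

    // Merge a job's private copy back into the global environment.
    fn merge(global: &mut Env, job_copy: Env) {
        for (key, (val, stamp)) in job_copy {
            let keep_existing = global
                .get(&key)
                .map_or(false, |(_, existing)| *existing >= stamp);
            if !keep_existing {
                global.insert(key, (val, stamp)); // job's newer change wins
            }
        }
    }

    fn main() {
        let mut global: Env = HashMap::new();
        set(&mut global, "x", "1");

        // Each "shell function" in the pipeline runs on its own copy.
        let mut copy1 = global.clone();
        let mut copy2 = global.clone();
        let f1 = thread::spawn(move || {
            set(&mut copy1, "x", "from-func1");
            copy1
        });
        let f2 = thread::spawn(move || {
            set(&mut copy2, "y", "from-func2");
            copy2
        });

        merge(&mut global, f1.join().unwrap());
        merge(&mut global, f2.join().unwrap());
        println!("{global:?}"); // x and y both visible, last writer wins on x
    }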
When I first started typing this out I thought that this scheme didn't work if the user started a script in the foreground and then backgrounded it, but now that I've written it out, it actually could work. If every script job runs with its own copy of the global environment, and merges the environment back when done, then running in the foreground and running in the background operate exactly the same, and this also neatly solves the question of what happens if a background job finishes while a foreground job is running. Previously I was thinking that we'd defer merging the background job's state mutations until the foreground job finishes, but if the foreground job uses the same setup of operating on a copy of the global state, then we can just merge whenever. The one specialization for foreground jobs we might want to make is explicitly defining that foreground jobs always win conflicts.
This is along the lines of things I was thinking myself. I'm currently aiming to get POSIX programs working 100%, which I don't believe would allow this. But the framework for managing foreground and background jobs should support both the POSIX and Modern syntax, and something like this. This is EXACTLY the kind of thing I want to add to the new "modern" language!
Also, the ability to "rerun" previous commands from a buffer without actually re-executing anything would be a cool, somewhat related feature.
If you want to chat about shells anytime shoot me an email or something: nathan@nixpulvis.com
I feel like a new fresh look at what UNIX is today could be valuable. I wouldn't want to give up a lot of the philosophy around it. Redox does in fact do this in a number of places, for example I think the expansion on "everything is a file" [1] is a pretty awesome idea.
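As an illustration of how that expansion plays out (resources addressed as scheme-prefixed paths, opened through the ordinary file API), here's a sketch; the exact scheme syntax and address here are assumptions from memory, not taken from the Redox docs:

    use std::fs::OpenOptions;
    use std::io::{Read, Write};

    // Illustrative only, written against Redox's URL-like scheme idea; the
    // scheme syntax and address are assumptions, not from a reference.
    fn main() -> std::io::Result<()> {
        // A TCP connection addressed like a file path, via a "tcp" scheme.
        let mut sock = OpenOptions::new()
            .read(true)
            .write(true)
            .open("tcp:93.184.216.34:80")?; // hypothetical address
        sock.write_all(b"GET / HTTP/1.0\r\n\r\n")?;
        let mut response = String::new();
        sock.read_to_string(&mut response)?;
        println!("{response}");
        Ok(())
    }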
I'd love to see a "there are no paths, the file system is a DB" design, like Vista was originally rumored to be. I'd also love to see more opinionated OS integration: per-application volume sliders, better file system search indexing, a real-time built-in file and directory watching API that can be blocking or non-blocking, standardized storage of program files instead of ambiguous install locations, standardized system and app settings via a neat API, an easy and sane git-based package manager, etc.
Chrome OS has a fixed app install location, as does Windows 10 in 'S' mode (since you can only install store apps).
I remember being intrigued by the database-as-filesystem idea when it was first touted - has any OS actually implemented this? I'd be interested to see how it works in practice.
That only makes things more confusing, because Redox is explicitly not a traditional Unix. It's more like Plan 9, except it goes even further than Plan 9 in a few places, and things like the Ion shell in Redox don't attempt to be POSIX compatible.
What is a lambda to you in this case? That lambda will need to be scheduled, and it will need to maintain its scope... All the same general issues could exist. There have been plenty of machines that simply run functions; in fact, before there were the things we call OSes today, the machines that ran the code would typically have no notion of time sharing, and would map even more closely to a pure lambda evaluator.
Without specifics about what differences you mean, lambda = function = process = thread = fiber = service = worker = ...
That lambda will need to be scheduled, and it will need to maintain its scope... All the same general issues could exist.
But users wouldn't have to be able to start processes. Instead, lambdas could be associated with persistent storage of state, and processes would be started by the OS to apply the lambdas in a simulation loop, so users wouldn't directly start processes.
Perhaps thinking of those as processes isn't quite right either.
Have you looked at Urbit? It uses this idea - programs are state machines that you give to the runtime, which pumps them with events (IO, RPC calls, etc) that return [new-state events-out]. All programs are purely functional and deterministic, so you can replay events or serialize the app to transfer somewhere else.
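In Rust terms the contract looks roughly like this; a loose sketch of the shape described above, not Urbit's actual interface:

    // A program is a pure transition function from (state, event) to
    // (new state, emitted events), pumped by the runtime.
    #[derive(Clone, Debug)]
    enum Event {
        Input(String),
        Output(String),
    }

    #[derive(Clone, Debug, Default)]
    struct AppState {
        seen: u64,
    }

    // Pure and deterministic: the same state and event always yield the
    // same result, so the runtime can log events and replay them later.
    fn step(state: AppState, event: &Event) -> (AppState, Vec<Event>) {
        match event {
            Event::Input(msg) => {
                let seen = state.seen + 1;
                (AppState { seen }, vec![Event::Output(format!("ack #{seen}: {msg}"))])
            }
            Event::Output(_) => (state, vec![]),
        }
    }

    fn main() {
        let log = vec![Event::Input("hello".into()), Event::Input("again".into())];
        // Replaying the event log reconstructs the state deterministically.
        let mut state = AppState::default();
        for ev in &log {
            let (next, out) = step(state, ev);
            state = next;
            println!("{out:?}");
        }
    }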
All programs are purely functional and deterministic, so you can replay events or serialize the app to transfer somewhere else.
Interesting. Actually, I have one of my processes serializing its state and exporting it to a different process, while all of the clients continue playing the MMOsteroids game they're logged into.
The only truly pure functions exist in the imaginary world of mathematics, if you even believe that.
But I think I kinda get your point. I would challenge you, though, to think about this issue with access control and permissions in mind. I think you'll find the need for some kind of process-like task. Maybe not...
The only truly pure functions exist in the imaginary world of mathematics, if you even believe that.
Note I already mentioned persistent stores of state.
I think you'll find the need for some kind of process-like task. Maybe not...
Maybe re-read. I've already said that there would be a process-like task. Lambdas will need to be associated with state. Users won't have to start processes. Instead, processes will be more like processors.
I'd look at Haskell-based stuff first. I was told by Haskellers, though, that things like the House operating system are imperative in style even if done in Haskell. So, a quick search for functional programs for OSes gave these possibilities:
That's kinda trippy. It's like I was a different person. Also, I was working on the predecessor system to the one I'm currently working on. Back then, the thing was written in Clojure. I later ported it to Go. The design philosophy has changed a heck of a lot as well. Back then, I was going to have everything on the same very large and fast virtual instance, with modest goals for the largest population/scale. Now my system is scalable by adding more "workers," which I've spread out onto small AWS instances.