Windows is strictly better here because 1) it will never pretend that it has more memory than it actually does, and yet 2) it still lets processes reserve as much address space as they need without actually using it (which is the sole justification for overcommit), by providing APIs to control reservation and commitment in a fine-grained way.
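For concreteness, here's a minimal C sketch of that reserve/commit split using VirtualAlloc; the 1 GiB reservation and 1 MiB commit step are arbitrary numbers for illustration:

    // Reserve address space first, commit only what you actually need.
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        SIZE_T reserve_size = (SIZE_T)1 << 30;   // 1 GiB of address space
        SIZE_T commit_step  = (SIZE_T)1 << 20;   // commit 1 MiB of it

        // MEM_RESERVE only: no physical memory or pagefile space is
        // charged against the commit limit yet.
        char *base = VirtualAlloc(NULL, reserve_size, MEM_RESERVE, PAGE_NOACCESS);
        if (!base) { fprintf(stderr, "reserve failed\n"); return 1; }

        // MEM_COMMIT is the point where the allocation counts against the
        // system commit limit and is the call that can fail.
        if (!VirtualAlloc(base, commit_step, MEM_COMMIT, PAGE_READWRITE)) {
            fprintf(stderr, "commit failed\n");
            return 1;
        }
        base[0] = 42;   // committed pages are safe to touch

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }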
The Windows approach is better from a "perfect system" point of view: in theory an application knows all the code it is loading and has a grasp on its own memory usage. You still have virtual address space because it is used for other things too (like memory-mapped files), but you "commit" your current upper limit. You can be sure that if needed you'll actually be able to malloc (well, HeapAlloc) that much memory. It might be slow due to swapping, but it won't fail.
The Unix approach is better from a "realistic" point of view: most processes have a lot of library code they don't control and don't have time to audit. Usage patterns vary. And most processes end up reserving more memory than they ever actually touch. Note what the article mentions: third-party graphics drivers run code in every process on Windows, and that code allocates whatever it wants, which counts against your commit limit. That isn't under your control at all, and worse, most of the time that memory is never touched.
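To make the Unix side concrete, here's a minimal C sketch of what overcommit looks like from inside a process on Linux (assuming a 64-bit process and the default overcommit heuristic; the 8 GiB figure is arbitrary):

    // A huge allocation typically succeeds even if that much RAM+swap
    // isn't actually available; pages only become real when touched.
    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        size_t huge = (size_t)8 << 30;           // 8 GiB of address space
        char *p = malloc(huge);
        if (!p) { perror("malloc"); return 1; }

        // Only the pages we actually write get backed by physical memory;
        // the rest is an untouched reservation that costs nothing. If the
        // system runs out later, you get the OOM killer, not a NULL return.
        memset(p, 1, (size_t)1 << 20);           // touch just 1 MiB of it
        printf("touched 1 MiB of a %zu-byte allocation\n", huge);

        free(p);
        return 0;
    }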
Having lived under both systems, I think I prefer the Unix view. In practice almost no Windows software does anything useful with commit limits, so it just creates extra complexity and failure modes for little benefit.
> And most processes end up reserving more memory than they ever actually touch.
I find this assertion dubious. Explicitly pre-allocating large arenas is common when micro-optimizing, which isn't something that happens for most apps out there - they just do the usual malloc/free (or equivalent) dance as needed. So for your average app, it boils down to the quality of implementation of its heap allocator. And there aren't that many of them to get right.
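For reference, the "pre-allocate a large arena" pattern being discussed looks roughly like this (a minimal sketch; the 256 MiB capacity and 16-byte alignment are arbitrary choices):

    // Grab one big block up front and bump-allocate out of it.
    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
        char  *base;
        size_t cap;
        size_t used;
    } Arena;

    static int arena_init(Arena *a, size_t cap) {
        a->base = malloc(cap);   // on Linux this is mostly address space until
        a->cap  = cap;           // touched; on Windows it is committed up front
        a->used = 0;
        return a->base != NULL;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;              // keep 16-byte alignment
        if (a->used + n > a->cap) return NULL;   // arena exhausted
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    int main(void) {
        Arena a;
        if (!arena_init(&a, (size_t)256 << 20)) return 1;   // 256 MiB arena
        int *xs = arena_alloc(&a, 1000 * sizeof *xs);       // carve out pieces
        if (xs) xs[0] = 1;
        free(a.base);
        return 0;
    }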