AFAIK memory-mapped files don't count towards the commit limit. Couldn't they roll a custom file-backed allocator (essentially manual swap) to get around the Windows limit?
It’s good to somewhat limit the amount of memory that gets paged to disk, since paged memory is incredibly slow. There should be backpressure on memory allocation. Getting around the OS on this matter would be silly, IMO. We’re talking about an interactive UI environment, where massive page files would be doubly bad: swallowing up surprising amounts of disk space in order to provide incredibly slow memory.
However, the type of solution you’re talking about would apply well to a caching server, and Varnish (on FreeBSD I believe) uses it quite successfully.
Reminds me how some folk (the uTorrent team, AFAIR) thought they were smarter than those guys in Redmond and rolled out their own caching algorithm. Except they weren't in fact smarter, and fucked up memory usage by allocating all the available memory for a disk cache... pushing everything else, including the OS, into swap.
Don't give them ideas about how to use even more memory. For someone whose first computer had 64KiB of RAM, it's incomprehensible why rendering a single web page should consume 100MiB.