Hacker News

Well, over-committing is an arguable choice. "Here's the memory you requested! I don't know if I have it available, but anyway. Here it is!"

It turns out it probably works well/better in many cases, because apps don't actually use all the memory they request (up front), and for other reasons, but it's not the obvious choice to make. If I didn't know better, I would intuitively expect my OS to fail malloc when it does not have enough memory available.

I would expect an OS capable of expanding its swap file to try doing that before failing my malloc call though.



Yeah, I was pretty incredulous when I first discovered over-commit in Linux - I asked for memory and you gave it to me without an error, and now that I am deep in the guts of processing and can't easily recover, you decide to tell me you don't really have it!

But once you know about over-commit there are workarounds in languages where you control memory allocation, like touching every page right after you allocate it, but before using it. And in a garbage collected language you don't have any control or insight into when OOM exceptions will occur in either approach. So the ability for the OS to not trust lazy and greedy software that asks for memory but doesn't use it seems like a reasonable trade-off.


I think one of the main reasons overcommit is a thing on Linux (certainly one of the things one runs into if one turns it off) is dealing with the weirdness of fork()/exec(). For a moment there in between the two calls you _technically_ have double the RSS allocated - so spawning processes with fork()/exec() from a process with very large RSS is dicey if you don't want to technically overcommit the memory. Since 99.9% of the time very little of that memory is touched/COW'd before the exec(), letting it overcommit is seen as a reasonable tradeoff, rather than having a 4GB process die when it tries to spawn a new child just because you don't have another spare 4GB of RAM sitting around "just in case".

(Modulo vfork() and spawn() of course which are different and arguably better solutions to this issue.)


I don't think the reasonable solution is to, by default, lie to programs that care about memory allocation failure for the sake of those that don't.


>I would expect an OS capable of expanding its swap file to try doing it before failing my malloc call though.

It takes A LOT of time to expand the swap file. So failing malloc immediately seems, to me, the right way to handle it.

Maybe adding an optional callback to malloc to be notified when further allocations are possible would be a better way to handle this.


But 99.99999% of the time, when a program calls malloc, it needs it to succeed. So if there is a callback or something to notify that a wait is required or whatever, then 99.99999% of programs are going to do it. That means the high cost of expanding the swap file will be incurred basically every single time...so why not make that the default?

In the rare case where a program wants to handle a failed allocation differently, then they should use a native system call that provides a more detailed interface than standard malloc. It doesn't matter if it's not portable since this is really a Windows-only thing.

Crashing is not good for anyone. A temporary freeze sucks, sure, but that's what you get for not having enough memory.

...plus, it's not like random freezing is a foreign concept to Windows users.


> It takes A LOT of time to expand the swap file. So failing malloc immediately seems, to me, the right way to handle it.

Maybe it'd be possible to check if expanding the swap file is possible, return and then actually expand the swap file when convenient.

(I like being an armchair kernel developer :-))


The Windows approach lets you fail at a predictable place: at the time of memory allocation. The overcommit approach causes OOM crashes at random places depending on how your program touches memory.


But then we end up with the OOM Killer, which is awful - randomly kill the biggest process because "reasons".... It would be better if the OS could say no.


In this case, the OOM Killer would be better, because a parent process is allowed to sacrifice one of its children, so the browser could kill the least-recently-used tab instead of itself.

https://unix.stackexchange.com/questions/282155/what-is-the-...


If allocation predictably fails, you don't need an OS-level OOM killer to kill least-recently-used - you could just do said killing manually on failed allocation yourself. And you'd be able to do so in a much more controlled manner too while at it, instead of hoping the OS does what you want. (and if an OS/stdlib wanted to, such behavior could be made the default operation on allocation failure)


No, because you only have control over your own process (and its children) and not the others?


Right, it wouldn't help when one process wants more memory but you want an unrelated one to get killed, but the question here was about a browser killing one of its own tabs instead of the main browser process dying. (Though, for what it's worth, in the case where processes themselves can't decide how to free memory, I, as the user, would much prefer to be given the option of what to kill anyway; Linux completely fails to do that, and given that overcommitting affects DEs too, it'd be pretty complicated to allow for such a thing.)


Stuff like earlyoom gives you some control (?)


Not dynamically chosen though, at least in the case of earlyoom; whether to prefer killing the browser, or a random long-running process that has built up a couple gigabytes of RAM usage (or even just a bunch of small processes I don't need), will entirely depend on the intent (or lack thereof) behind the process, and what's currently happening in the browser.


Indeed. No solution is perfect.

Without over-committing, you could be preventing something that would work anyway. With it, the OOM killer could (often does) pick the biggest process, which could very well be using its memory legitimately AND be the most important process on the machine too.



