
I get that, and sure, for certain workloads it is definitely not ideal. But I think you would be surprised at how fast it can actually be.

In projects where you can't get away from individual object allocations via some clever trick like arenas (for example because object lifetimes are not deterministic enough), chances are the GC will actually be much faster than malloc-ing and free-ing them all. And while it may not sit well with you, a GC running in the background amortizes the cost of each deallocation down to essentially zero time overhead (at the cost of memory overhead, of course).

But I’m not trying to dismiss your experience or anything, the JVM has its place and of course there are plenty of use cases where it is not the best fit at all.



I said it felt (to me) "bulky but solid, kinda like a BMW car", which I think is both a fair criticism and a statement of respect, so what tripped you up?

And that statement was not about allocation time. But my understanding is that deallocation (GC) cost is not amortizable to a constant for long-lived objects.

We can probably agree that having to write a separate class (with per-object overhead) to represent each little tuple of two integers is bulky: programming-wise, but also in terms of memory overhead (and possibly GC overhead).

Programming in C does not imply using malloc() to allocate each individual object; doing so is bad practice.



