Hacker News

I'm also a novice on low-level stuff, but if I had to guess...

I'd guess that C's "virtual machine" pertains to addressing and the presentation of memory as a "giant array of bytes". Stack addresses start high and "grow down"; heap addresses start low. These addresses need not correspond to actual machine addresses. For example, two running C processes can both have 0x3a4b7e pointing to different places in physical memory (which prevents them from clobbering each other).

Please, someone with more knowledge than me, fill me in on where I'm right and wrong.



C does not require the presentation of memory as a "giant array of bytes"---certainly when you have a pointer, it points to an array of bytes (or rather, a contiguous array of items of the pointed-to type), but that's about it. The stack does not have to start high and grow down (in fact, the Linux man page for clone() states that the stack on the HP PA architecture grows upward), and the heap doesn't necessarily start low and grow up (mmap(), for instance).

You are also confusing multitasking/multiprocessing with C. C has no such concept (well, C11 might, I haven't looked fully into it) and on some systems (like plenty of systems in the 80s) only one program runs at a time. The fact that two "programs" both have a pointer to 0x3A4B7E that reference two physically different locations in memory is an operating system abstraction, not a C abstraction.


C pointer aliasing defeats certain compiler optimizations that can be made in other languages, and is frequently brought up in C vs FORTRAN comparisons. I think that's probably what the GP had in mind.


C99 includes restricted pointers, but support is a bit spotty. Microsoft's compiler (which is of course really just a C++ compiler) includes it as a nonstandard keyword (__restrict), too.

http://en.wikipedia.org/wiki/Restrict


That makes a lot of sense. Thanks!


Well, that is how memory, the hardware we have, and all normal operating systems work, but if you want to discuss other stuff we can do that too :) Try serious debugging and you will find that all your preconceptions are confirmed, yet it's still hard to know WTF is going on.


The "memory is just a giant array of bytes" abstraction hasn't been true since DRAM was introduced (because DRAM is divided into pages), or since caches were introduced, and it certainly isn't true now that even commodity multi-processors are NUMA with message-passing between memory nodes.


Look, if we want to be super anal about shit, all memory is slowly discharging capacitors with basically random access times based on how busy the bus circuitry is with our shit this week. It turns out that memory is really complicated stuff if you look at it deeply, but the magic of modern computer architecture is that you get to (hopefully) keep your simple model for as long as you can. If you were to try to model actual memory latency, here's a shortcut: you can't. That's why everyone bullshits it.


Fair point. At this point, it's a very leaky abstraction because not all levels of "random access" (e.g. L1 cache vs. main memory) are created equal.


True, and this is my biggest problem with writing optimized code in C -- it takes a lot of guessing, inspecting the generated assembly, and understanding your particular platform to make sure you're ACTUALLY utilizing registers and cache like you intend.

If there were some way of expressing this intent through the language and have the compiler enforce it, that'd be fantastic :)

That said, there's really not a better solution to the problem than C, just pointing out that even C is often far less than ideal in this arena.


I'm actually spending much of my time these days having a go at writing HPC codes in Haskell.

So far, the generated assembly looks pretty darn awesome, and the performance is pretty competitive with the alternatives I have access to :)




