Hacker News

Can anyone hazard a guess as to how Linus was able to measure the cost of a page fault and an iret so precisely? What tools and techniques might he have used?


It's quite likely that he was using `perf`:

    $ perf stat make
    ...
    116,222 page-faults               #    0.046 M/sec
    ...
https://perf.wiki.kernel.org/index.php/Tutorial


I don't know what he used, but it's probably based on the model-specific registers (MSRs) for performance counters. See Chapter 18 of the Intel 64 and IA-32 Architectures Software Developer's Manual.



