
They are not being ignored. Better versions of those benchmarks exist in-tree: https://github.com/rust-lang/rust/tree/master/src/test/bench (the shootout-....rs). The only reasons they're not on the website are licensing problems (which have recently been resolved) and the fact that the benchmarks game uses the 0.11 release (while the in-tree benchmarks are kept up to date with the master branch and so don't necessarily compile with 0.11); this new release is a good chance to push them up.

Those small benchmarks are not good representations of the rest of the world; they are contrived and limited problems, with some fairly arbitrary rules about which languages/implementations are valid to include, e.g. PyPy is not allowed, Java gets JIT warm-up time, etc.



> The only reason they're not on the website is…

… that no one has contributed them.

> … with some fairly arbitrary rules about which languages/implementations are valid to include, e.g. PyPy is not allowed …

Hundreds of programming language implementations are not included! It would take more time than I choose to donate. Been there; done that.

http://benchmarksgame.alioth.debian.org/play.html#languagex

> … and Java gets JIT warm-up time etc.

Java does not get JIT warm-up time! Please stop making up misinformation!

http://benchmarksgame.alioth.debian.org/play.html#measure

http://benchmarksgame.alioth.debian.org/play.html#java


The site could be renamed "The Computer Language Implementation Benchmarks Game", since it's not testing language speed (best approximated by the fastest known implementation), just certain implementations, some of which are designed with priorities above speed in mind (e.g. CPython).

> Java does not get JIT warm-up time! Please stop making up misinformation!

Oh, sorry! I must've been misremembering something someone told me. (Although, how does `Cold` differ from `Usual` there? It's not clear from the text what the difference is.)


> best approximated by the fastest known implementation

Best not to become so confused: "Measurement is highly specific -- the time taken for this benchmark task, by this toy program, with this programming language implementation, with these options, on this computer, with these workloads."

> `Usual`

http://benchmarksgame.alioth.debian.org/play.html#measure

> `Cold`

http://benchmarksgame.alioth.debian.org/play.html#java

"Here are some additional (Intel® Q6600® quad-core) elapsed time measurements, taken after the Java programs started and before they exited:

In the first case (Cold), we simply started and measured the program 66 times; and then discarded the first measurement leaving 65 data points."
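To make that procedure concrete, here's a rough sketch of what a "Cold"-style harness along those lines might look like (in Rust, since that's the topic of the thread). The `./nbody` path and the `50000000` argument are just placeholders, not anything taken from the site:

    use std::process::Command;
    use std::time::{Duration, Instant};

    fn main() {
        // Hypothetical benchmark binary and argument -- placeholders only.
        let program = "./nbody";
        let runs = 66;
        let mut times: Vec<Duration> = Vec::with_capacity(runs);

        for _ in 0..runs {
            let start = Instant::now();
            // A fresh process every iteration, so a JIT starts cold each time.
            Command::new(program)
                .arg("50000000")
                .status()
                .expect("failed to run benchmark program");
            times.push(start.elapsed());
        }

        // Discard the first measurement, as the quoted text describes,
        // leaving 65 data points.
        let kept = &times[1..];
        let total: Duration = kept.iter().sum();
        println!("mean over {} runs: {:?}", kept.len(), total / kept.len() as u32);
    }

Because each iteration launches a fresh process, a JITted runtime pays any warm-up cost on every run, which is what the quoted "Cold" numbers are meant to capture.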


Yow.

I didn't know that.

(Seems like a better way to test would be to allow anything, with three sets of numbers: one for "time from source to as far as you can go without input" (i.e. compilation time, loading the source into RAM, that sort of thing), one for "time from input to end of first run", and one for "time for nth run", with n high enough that the timing settles, i.e. after any JITters are done, that sort of thing.)
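Something like this sketch, say (Rust again, purely illustrative; `workload()` is a made-up stand-in for a benchmark body, and Rust has no JIT, so for a JITted language you'd run the same loop inside the runtime in question):

    use std::time::Instant;

    // `workload()` is a made-up stand-in for the body of a benchmark program.
    fn workload() -> u64 {
        (1..=10_000_000u64).sum()
    }

    fn main() {
        let n = 20; // enough iterations for the timing to settle
        let mut times = Vec::with_capacity(n);
        for _ in 0..n {
            let start = Instant::now();
            let result = workload();
            times.push(start.elapsed());
            assert!(result > 0); // keep the result used so the work isn't optimized away
        }
        // "time from input to end of first run" vs "time for nth run"
        println!("first run: {:?}", times[0]);
        println!("nth run:   {:?}", times[n - 1]);
    }

The first of the three numbers (source to ready-to-run) would have to be measured outside the program, e.g. by timing the compiler invocation separately.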


You "didn't know that" because it isn't true.




