
They're both parallelizable, but the second one is harder to automatically parallelize.

There are guarantees about the way a map function works: it doesn't mutate its input, it only has access to one element at a time, it can't access the data structure being built, etc.

All of these traits can be true of the imperative version as well, but it's a lot harder to write a program that verifies them. Meanwhile, you know that's how map and filter work by definition, so it's trivial to know that those guarantees hold.
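A minimal Python sketch of the two styles being compared (the data and names are illustrative, not from the original comment):

```python
nums = [1, 2, 3, 4, 5]

# Declarative style: map/filter guarantee that each element is
# processed independently and the input list is never mutated,
# so the per-element work is safe to parallelize by construction.
doubled_evens = list(map(lambda x: x * 2, filter(lambda x: x % 2 == 0, nums)))

# Imperative style: the same result, but the loop mutates `result`,
# so a tool must first prove the iterations are independent before
# it can parallelize them.
result = []
for x in nums:
    if x % 2 == 0:
        result.append(x * 2)
```

Both produce `[4, 8]`; the difference is only in how much analysis a compiler needs to do to establish that the iterations don't interfere.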



As a matter of fact, loops that mutate state are not parallelizable in general: you can't analyze what they do in order to infer intent, and thus can't impose an ordering that lets you distribute the work efficiently while still guaranteeing correctness. Solve that in general and you'd solve the Halting problem ;-)

Can't be done.


Well, you can't solve it in the general case. But parallelizing compilers can and have been written that perform a conservative analysis -- they can prove some subset of all parallelizable loops as parallelizable, and for the ones they can't prove as such, they err toward 'remain serialized'.

For a simple example, code that performs regular array-based accesses, where index computations are simple affine functions (a*i + b) of a single iteration variable 'i', is a case that's received a lot of attention. See e.g. Polly for LLVM.
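A sketch of the distinction, in Python for readability (the data is made up; a real tool like Polly works on compiled loop nests, not Python):

```python
n = 8
a = list(range(2 * n))
b = [0] * n

# Provably parallel by affine analysis: iteration i writes only b[i]
# and reads only a[2*i + 1]. Both indices are affine functions of i,
# so the compiler can compute that distinct iterations touch disjoint
# locations and safely run them in any order.
for i in range(n):
    b[i] = a[2 * i + 1]

# Not provable by affine analysis: the write index comes from data
# (idx[i]), so two iterations might target the same element. A
# conservative compiler errs toward keeping this loop serialized.
idx = [3, 1, 3, 0, 2, 5, 4, 1]
c = [0] * n
for i in range(n):
    c[idx[i]] += a[i]
```

Note the second loop isn't necessarily unparallelizable; it's just that a static affine analysis can't prove it safe, which is exactly the conservative behavior described above.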



