Hacker News

Most people aren't really doing GPU coding; they're calling CUDA libraries. As with other Intel projects, this needs developer support to survive, and Intel has not been good at providing it. Consider the fate of Xeon Phi and Omni-Path.
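For concreteness, a minimal sketch of the distinction (a textbook SAXPY, not code from the thread): a library user would call a vendor routine such as cuBLAS's `cublasSaxpy` and never think about the hardware, whereas "GPU coding" means writing and launching kernels yourself, reasoning about the thread/block grid:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hand-written kernel: the "GPU coding" path. You decide how work maps
// onto threads; a library user would instead call cublasSaxpy and be done.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // The launch geometry (<<<blocks, threads>>>) is exactly the part
    // that library users never touch.
    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // 3*1 + 2 = 5.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compiling this requires `nvcc` and an NVIDIA GPU; the point is only that the kernel and launch configuration are what separate "using an accelerator" from "programming one."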


Huh, that's a solid point. How big a fraction would you estimate is actually capable of real GPU coding?


I don't think I have a broad enough perspective on the industry to assess that. In my corner of the world (scientific computing), it's probably three quarters straightforward use of accelerator libraries and one quarter code actually architected around the GPU itself.



