
Point 1 is a fair one, although tfdbg exists and it works.
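For context, the usual tfdbg entry point in TF 1.x is a one-line session wrapper; a minimal sketch (the session setup around it is just illustrative):

  import tensorflow as tf
  from tensorflow.python import debug as tf_debug

  sess = tf.Session()
  # Wrap the session so every sess.run() drops into the tfdbg CLI,
  # where tensors can be inspected and filters such as has_inf_or_nan applied.
  sess = tf_debug.LocalCLIDebugWrapperSession(sess)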

For point 2: I don't see the problem. Almost daily I use a wrapper (defined in tf.contrib, which I hope will move into core in version 2.x) around the optimizer that, with a two-line change, lets me distribute training across multiple GPUs on the same machine.
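The wrapper isn't named here, so this is a guess: one tf.contrib API that matches the description (it wraps the optimizer, needs a two-line change, and spreads training over the GPUs of one machine) is tf.contrib.estimator.TowerOptimizer paired with replicate_model_fn. A minimal sketch under that assumption:

  import tensorflow as tf

  def model_fn(features, labels, mode):
      logits = tf.layers.dense(features["x"], 10)
      loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

      # Line 1: wrap the optimizer so gradients are aggregated across GPU towers.
      optimizer = tf.contrib.estimator.TowerOptimizer(
          tf.train.AdamOptimizer(learning_rate=1e-3))

      train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
      return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

  # Line 2: replicate the model_fn across all GPUs available on this machine.
  estimator = tf.estimator.Estimator(
      model_fn=tf.contrib.estimator.replicate_model_fn(model_fn))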



Which wrapper are you talking about?




