andbberger on Dec 9, 2018 | on: JAX: Numpy with Gradients, GPUs and TPUs
I think this is more about exploiting XLA to speed up autograd than about deep learning per se. You would generally use TensorFlow for actual training; I've never encountered a situation where autograd had to be used during training.
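For anyone unfamiliar with what "autograd" actually computes here: it evaluates exact derivatives of ordinary numerical code by propagating derivative information through each operation. The sketch below is a toy forward-mode version using dual numbers in pure Python — it is only an illustration of the concept, not how JAX works internally (JAX traces your function and compiles reverse- and forward-mode derivatives through XLA).

```python
# Toy forward-mode automatic differentiation with dual numbers.
# Illustration only -- JAX compiles traced derivatives via XLA instead.
from dataclasses import dataclass


@dataclass
class Dual:
    val: float  # primal value
    der: float  # derivative w.r.t. the input variable

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other, 0.0)

    def __add__(self, other):
        other = self._coerce(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__


def grad(f):
    """Return a function computing df/dx at a scalar x."""
    return lambda x: f(Dual(x, 1.0)).der


# d/dx (3x^2 + 2x) = 6x + 2, which is 14 at x = 2
f = lambda x: 3 * x * x + 2 * x
print(grad(f)(2.0))  # 14.0
```

The point of JAX's `grad` is the same idea with an XLA-compiled implementation, so the derivative code runs at accelerator speed rather than as interpreted Python like the sketch above.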