But why? I have used TensorFlow since the 0.x releases and I'm still using it right now, migrating all my codebase to the 2.x version.
I can do everything in both PyTorch and TensorFlow, but when I have to define really efficient input pipelines (tf.data is a great thing), parallelize and distribute training, and export a trained model to production... with TensorFlow everything is easier.
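To make the tf.data point concrete, here is a minimal sketch of the kind of input pipeline being described; the parsing function and dataset contents are placeholders, not anything from the thread:

```python
import tensorflow as tf

def parse_example(x):
    # stand-in for real decoding / augmentation logic
    return x * 2

dataset = (
    tf.data.Dataset.from_tensor_slices(tf.range(10))
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)  # parallel preprocessing
    .shuffle(buffer_size=10)
    .batch(4)
    .prefetch(tf.data.AUTOTUNE)  # overlap preprocessing with training
)

for batch in dataset:
    print(batch.shape)  # batches of 4, 4, then the 2 leftovers
```

The chained map/shuffle/batch/prefetch style is what makes these pipelines both efficient and readable.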
Moreover, PyTorch 1.x will have static graphs too, exactly like TensorFlow.
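For context, the static-graph support mentioned here landed as TorchScript in PyTorch 1.0: you trace (or script) ordinary eager-mode code into a serializable graph. A minimal sketch, with a made-up function name:

```python
import torch

def double_plus_one(x):
    # plain eager-mode PyTorch code
    return x * 2 + 1

# trace runs the function once and records the ops into a static graph
traced = torch.jit.trace(double_plus_one, torch.ones(3))

print(traced(torch.tensor([1.0, 2.0, 3.0])))  # tensor([3., 5., 7.])
```

The traced module can then be saved and loaded outside Python, which is exactly the deployment story TensorFlow's static graphs already offered.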
Both frameworks are converging to something really similar. I don't see a reason to switch (right now).
At least 1) debugging in a numpy-like environment and 2) multi-GPU training on a single machine (last time I was using TF it was such a nightmare that people had to use Horovod, a wrapper library developed by... Uber).
Point 1 is a good point, although tfdbg exists and it works.
For point 2: I don't see the problem. Almost daily I use a wrapper around the optimizer (defined inside tf.contrib; in version 2.x it will move into core [I hope]) that lets me distribute training across multiple GPUs on the same machine in 2 lines.
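The wrapper being described did end up in core: in TF 2.x the equivalent is tf.distribute.MirroredStrategy, which replicates whatever is built inside its scope across the local GPUs (and falls back gracefully when only a CPU is available). A sketch of the "2 lines" idea:

```python
import tensorflow as tf

# One line to pick the distribution strategy...
strategy = tf.distribute.MirroredStrategy()

# ...and one scope under which model + optimizer are replicated per GPU.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# model.fit(...) then splits each batch across the replicas automatically.
print(strategy.num_replicas_in_sync)
```

On a single-GPU or CPU-only machine `num_replicas_in_sync` is 1; with N local GPUs it becomes N with no other code changes.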
As someone on the sidelines with no experience in either, but interest in where all this is going, can you share what exactly you do with TF or PyTorch?
Machine learning pipelines are brittle and hard to debug as the data representation is condensed to numbers in matrices instead of expressive data structures with semantically meaningful variable names.
To mitigate this complexity, PyTorch and TensorFlow are heavyweight machine learning frameworks that give your software a lot of structure, plus tooling to monitor the progress of training, debug your models, and handle some of the deployment.
Any neural net based component of software will likely be developed in one of these frameworks.
Computer vision tasks: I train models to do object detection, image classification, semantic segmentation... then export the model, load the trained model in C++, and run inference.
Or generative models: same workflow (define and train in Python, export and use in other languages).
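The export step in this workflow is typically the SavedModel format, which TensorFlow's C++ API (`tensorflow::LoadSavedModel`) can read for inference. A minimal sketch of the Python side, with a toy model standing in for the real ones:

```python
import tensorflow as tf

class TinyModel(tf.Module):
    """Toy stand-in for a trained vision/generative model."""
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

model = TinyModel()

# Write the graph + weights to disk; this directory is what the
# C++ (or any other language binding) loads for inference.
tf.saved_model.save(model, "/tmp/exported_model")

# Round-trip check from Python: the same directory reloads cleanly.
reloaded = tf.saved_model.load("/tmp/exported_model")
print(reloaded(tf.ones([2, 3])).shape)  # (2, 2)
```

The `input_signature` pins down the exported graph's input shape and dtype, which is what makes the model callable from languages that can't re-run the Python tracing.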
I use Tensorflow basically everyday.
I use PyTorch only when I read a model implemented in that framework and want to reimplement it in TensorFlow (so I can use all the tools I developed to simplify the path from training to shipping to production).
There are lots of applications, theoretically, but the primary use cases are image classification and natural language processing. It's not worth it for other problem types.
I'm constantly surprised by how much attention it gets given how narrow the scope is. I guess a lot of people need to classify images.