The landscape of Deep Learning changed in November 2015 with the release of Google’s TensorFlow, which is now the most popular open source machine learning library on GitHub by a wide margin.

Some researchers expressed dissatisfaction with the project because of its lack of distributed training capabilities, especially since such capabilities were directly alluded to in the title of the accompanying whitepaper ("Large-Scale Machine Learning on Heterogeneous Distributed Systems").

However, distributed TensorFlow has now arrived: a few weeks ago [*] Google announced an update to its deep learning library, and TensorFlow now supports distributed training.

The distributed version of TensorFlow is built on gRPC, a high-performance, open source RPC framework for inter-process communication (the same protocol used by TensorFlow Serving).
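To give a sense of how this works, a distributed TensorFlow job is described by a cluster specification that maps job names to the gRPC addresses of the participating processes. The sketch below shows only the specification itself as a plain Python dict (the hostnames and ports are illustrative, not real addresses); in TensorFlow this dict would be passed to `tf.train.ClusterSpec`, and each process would then start a `tf.train.Server` for its own job name and task index.

```python
# A minimal sketch of a cluster specification for distributed TensorFlow.
# Hostnames and ports are hypothetical placeholders.
cluster = {
    # "ps" tasks hold the shared model parameters (parameter servers).
    "ps": ["ps0.example.com:2222"],
    # "worker" tasks run the actual training computation.
    "worker": ["worker0.example.com:2222",
               "worker1.example.com:2222"],
}

# In a real job, each process would construct its gRPC server roughly like:
#   server = tf.train.Server(tf.train.ClusterSpec(cluster),
#                            job_name="worker", task_index=0)
# and the workers would then place variables on the "ps" tasks.

num_tasks = sum(len(addrs) for addrs in cluster.values())
print(num_tasks)
```

Each entry is one process, so this hypothetical cluster has one parameter server and two workers communicating over gRPC.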

Remember that the second most-starred machine learning project on GitHub is Scikit-learn, the de facto standard Python framework for general machine learning. For its users, TensorFlow can be used through Scikit Flow (skflow), a simplified interface for TensorFlow coming out of Google.

In practice, Scikit Flow is a high-level wrapper for the TensorFlow library that allows neural networks to be trained and fitted using the familiar Scikit-learn approach. The library covers a variety of needs, from linear models to Deep Learning applications.

After these announcements (adding distributed training, TensorFlow Serving, and Scikit Flow), in my humble opinion TensorFlow can become the de facto mainstream deep learning library.

[*] I have taken seriously my role of keeping up with the newest TensorFlow releases; however, personal issues kept me from finishing this post sooner.