The landscape of Deep Learning shifted in November 2015 with the release of Google's TensorFlow, now the most popular open source machine learning library on GitHub by a wide margin.
Some researchers voiced their dissatisfaction with the project because of its lack of distributed training capabilities, especially since such capabilities were directly alluded to in the title of the accompanying whitepaper.
However, distributed TensorFlow has now arrived: a few weeks ago [*] Google announced an update to its deep learning library, and TensorFlow now supports distributed training.
Remember that the second most-starred machine learning project on GitHub is Scikit-learn, the de facto standard general-purpose machine learning framework for Python. For these users, TensorFlow can be used through Scikit Flow (skflow), a simplified interface for TensorFlow coming out of Google.
In practice, Scikit Flow is a high-level wrapper for the TensorFlow library that allows neural networks to be trained and fitted using the familiar approach of Scikit-learn. The library covers a variety of needs, from linear models to Deep Learning applications.
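To make the "familiar approach of Scikit-learn" concrete, here is a minimal sketch of the estimator interface that skflow mirrors, using plain Scikit-learn on the classic Iris dataset. The skflow equivalent is shown only as comments, since its class names (such as `TensorFlowDNNClassifier`) are taken from the project's examples at the time of writing and may have changed:

```python
# Sketch of the scikit-learn construct / fit / predict pattern that
# Scikit Flow (skflow) mirrors for TensorFlow models.
from sklearn import datasets, metrics
from sklearn.linear_model import LogisticRegression

iris = datasets.load_iris()

# Plain scikit-learn: construct an estimator, fit it, then predict.
classifier = LogisticRegression(max_iter=1000)
classifier.fit(iris.data, iris.target)
score = metrics.accuracy_score(iris.target, classifier.predict(iris.data))
print("Accuracy: %f" % score)

# The skflow version swaps in a TensorFlow-backed estimator with the
# same fit/predict methods (class name assumed from skflow's examples):
#   import skflow
#   classifier = skflow.TensorFlowDNNClassifier(
#       hidden_units=[10, 20, 10], n_classes=3)
#   classifier.fit(iris.data, iris.target)
```

The point of the design is that an existing Scikit-learn pipeline can switch to a TensorFlow model by changing only the estimator's constructor, leaving the surrounding fit/predict/score code untouched.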
[*] I have taken my role of keeping up with the newest TensorFlow releases seriously; however, personal issues prevented me from finishing this post sooner.