A few days ago, Facebook announced Caffe2, a new open-source, cross-platform framework for deep learning. They say that Caffe2 is the successor to Caffe (really?), the deep learning framework developed by Berkeley AI Research and community contributors. Caffe2’s GitHub page describes it as “an experimental refactoring of Caffe that allows a more flexible way to organize computation.”

As my readers know, when TensorFlow appeared I decided to pay attention to it because it could change the DL/AI framework scene. Now we are in the same situation: Caffe2 could change the current scene that Francesc Sastre, one of my master’s students, built for his master’s thesis, “Frameworks popularity evolution in GitHub”.

No questions, right?

Facebook launched Caffe2, an open-source deep learning framework made with expression, speed, and modularity in mind. It addresses the bottlenecks observed in the use and deployment of Caffe over the years. With Caffe2, developers can scale their deep learning models across multiple GPUs on a single machine or across many machines with one or more GPUs each. The framework also adds deep learning smarts to mobile and low-power devices by enabling the programming of iPhones, Android systems, and Raspberry Pi boards. On the new Caffe2 website, Facebook reports that it uses the framework internally to train large machine learning models.

In a recent blog post, Nvidia says it has fine-tuned Caffe2 from the ground up to take full advantage of the NVIDIA GPU deep learning platform. Caffe2 uses the latest NVIDIA Deep Learning SDK libraries, such as cuDNN, cuBLAS, and NCCL, to deliver high-performance, multi-GPU accelerated training and inference. In another post, Nvidia claims near-linear scaling of deep learning training, with a 57x throughput acceleration using a total of 64 Nvidia Tesla P100 GPUs.
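To put Nvidia’s claim in perspective: a quick back-of-the-envelope check (using only the 57x and 64-GPU figures from their post; perfect linear scaling would give a speedup equal to the GPU count) shows how close that is to ideal:

```python
# Back-of-the-envelope check of Nvidia's reported scaling numbers.
gpus = 64        # total Tesla P100 GPUs in the experiment
speedup = 57.0   # throughput acceleration reported by Nvidia

# Ideal (linear) scaling would yield a speedup of 64x on 64 GPUs;
# efficiency measures how close the measured speedup comes to that.
efficiency = speedup / gpus
print(f"Scaling efficiency: {efficiency:.1%}")  # -> Scaling efficiency: 89.1%
```

Roughly 89% efficiency at 64 GPUs, which is why Nvidia describes it as near-linear scaling.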

Nvidia also reported that its DGX-1 supercomputer will offer Caffe2 within its software stack.

Also in a recent blog post, Intel describes the company’s efforts to boost Caffe2 performance on Intel CPUs, collaborating with Facebook to incorporate Intel Math Kernel Library (MKL) functions into Caffe2. Intel shares some performance numbers for inference on AlexNet using the Intel MKL library, with the Eigen BLAS library for comparison. The experiments were performed on a dual-socket Xeon processor E5-2699 v4 (Broadwell) @ 2.20GHz with 22 physical cores per socket (44 physical cores in total), 122GB DDR4 RAM at 2133 MHz, and Hyper-Threading disabled.

According to this blog post, Amazon and Facebook are also collaborating to optimize Caffe2 for cloud environments. Facebook also says it worked closely with Qualcomm on mobile environments.

In recent years, as deep learning’s popularity has grown, many frameworks have emerged to ease the task of creating and training models. No doubt Caffe2 will be an important new player competing with TensorFlow; all the ingredients required to achieve this are present in the project Facebook has presented. Will Caffe2 steal the supremacy that TensorFlow holds right now? I do not have a crystal ball. In my opinion it will not be easy, but Caffe2 is the only one that could be a competitor for Google, especially in the production arena in companies. We will see soon, because Facebook has to hurry: the window of time to pull it off will be very small.