If you were given a 10-minute video but only 30 seconds to watch it and understand what it is about, you would skim through it to find the most relevant segments. Artificial Neural Networks are very good at cognitive tasks such as image and video understanding, speech recognition or translation, but they cannot skim through their input data to accelerate their decisions.

The Barcelona Supercomputing Center (Víctor Campos, Jordi Torres), in collaboration with Universitat Politècnica de Catalunya (Xavier Giró-i-Nieto, Jordi Torres), Google (Brendan Jou) and Columbia University (Shih-Fu Chang), is presenting a paper at ICLR showing how these neural networks can be enhanced to ignore parts of their input while preserving accuracy on the target task. This allows the network to make faster predictions and reduce energy consumption, which is key for deployment on portable devices. A sketch of the core idea follows below. For more information, please visit the project site or read the paper Skip RNN: Skipping State Updates in Recurrent Neural Networks. The code to reproduce the experiments is publicly available, which we hope will help and encourage research in the field.
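To give a flavor of the mechanism, here is a minimal PyTorch sketch of a Skip RNN-style cell built around a GRU. It follows the idea described in the paper: at each timestep a binary gate decides whether to update the hidden state or simply copy the previous one, the update probability accumulates while steps are skipped, and a straight-through estimator lets gradients flow through the rounding. The class name `SkipGRUCell` and all layer sizes are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class SkipGRUCell(nn.Module):
    """Illustrative Skip RNN cell: a GRU whose state update can be skipped.

    At each step a binary gate u_t chooses between running the GRU update
    (u_t = 1) or copying the previous state (u_t = 0). The update probability
    accumulates while updates are skipped, so the cell cannot skip forever.
    """

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        # Proposes the increment to the update probability (hypothetical layer).
        self.delta = nn.Linear(hidden_size, 1)

    def forward(self, x, state, update_prob):
        # Binarize the accumulated update probability, passing gradients
        # straight through the non-differentiable rounding.
        u = update_prob.round()
        u = u + (update_prob - update_prob.detach())  # straight-through estimator

        new_state = self.cell(x, state)
        # Take the new state (u = 1) or keep the old one (u = 0).
        state = u * new_state + (1.0 - u) * state

        delta = torch.sigmoid(self.delta(state))
        # After an update, reset the probability to delta; otherwise accumulate it.
        update_prob = u * delta + (1.0 - u) * (
            update_prob + torch.minimum(delta, 1.0 - update_prob)
        )
        return state, update_prob, u
```

Unrolling the cell over a sequence makes the saving visible: whenever the gate outputs zero, the GRU result is discarded and the state is carried over unchanged, so in a deployment that checks the gate first, that step's computation can be skipped entirely.

```python
cell = SkipGRUCell(input_size=8, hidden_size=16)
x = torch.randn(30, 4, 8)          # (time, batch, features), toy data
state = torch.zeros(4, 16)
update_prob = torch.ones(4, 1)     # force an update at t = 0
skipped = 0
for x_t in x:
    state, update_prob, u = cell(x_t, state, update_prob)
    skipped += int((u.detach() == 0).sum())
print(f"skipped {skipped} of {30 * 4} state updates")
```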

The International Conference on Learning Representations (ICLR) is one of the top conferences in the field of machine learning. Steered and organized by some of the best researchers in the field, such as Yann LeCun (NYU/Facebook), Yoshua Bengio (Université de Montréal), Oriol Vinyals (Google DeepMind), Aaron Courville (Université de Montréal) and Hugo Larochelle (Google), the conference has published high-impact work in recent years. It has attracted the interest of top players in the industry, with a long list of sponsors including NVIDIA, Facebook, Intel, Salesforce, Google and Amazon.

April 27th, 2018