BSC-CNS at NIPS 2017, a top Machine Learning and Artificial Intelligence conference

BSC-CNS will be present at the 31st Annual Conference on Neural Information Processing Systems (NIPS 2017), a machine learning and computational neuroscience conference that includes invited talks, demonstrations, and oral and poster presentations. It covers topics ranging from deep learning and computer vision to cognitive science and reinforcement learning. NIPS is one of the top Machine Learning and Artificial Intelligence conferences in the world and has become the leading AI meeting for both academia and industry.

One paper will be presented at the Machine Learning for Health workshop: “Detection-aided liver lesion segmentation using deep learning”. In this paper we propose a method to segment the liver and its lesions from Computed Tomography (CT) scans using Convolutional Neural Networks (CNNs), which have achieved good results in a variety of computer vision tasks, including medical imaging. The network that segments the lesions uses a cascaded architecture: it first focuses on the region of the liver and then segments the lesions within it. Moreover, we train a detector to localize the lesions and mask the output of the segmentation network with the positive detections. The segmentation architecture is based on DRIU (Maninis et al., 2016), a Fully Convolutional Network (FCN) with side outputs that operate on feature maps of different resolutions, so that the final prediction benefits from the multi-scale information learned by different stages of the network. The main contribution of this work is the use of a detector to localize the lesions, which we show helps remove false positives triggered by the segmentation network. This is joint work with researchers at UPC and ETH.
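To make the role of the detector concrete, the following is a minimal sketch of how a cascaded segmentation could be masked by positive detections. It is not the authors' released code: `liver_net`, `lesion_net`, `detector` and the box format are placeholder names assumed for illustration.

```python
import numpy as np

def boxes_to_mask(boxes, shape):
    """Rasterize positive detection boxes (y0, x0, y1, x1) into a binary mask."""
    mask = np.zeros(shape, dtype=bool)
    for y0, x0, y1, x1 in boxes:
        mask[y0:y1, x0:x1] = True
    return mask

def segment_lesions(ct_slice, liver_net, lesion_net, detector, thr=0.5):
    """Cascaded pipeline sketch: liver ROI -> lesion probabilities -> detection masking.

    liver_net, lesion_net and detector stand in for the trained networks;
    their exact interfaces here are hypothetical.
    """
    liver_mask = liver_net(ct_slice) > thr            # stage 1: liver region
    lesion_probs = lesion_net(ct_slice) * liver_mask  # stage 2: lesions only inside the liver
    det_mask = boxes_to_mask(detector(ct_slice), lesion_probs.shape)
    return (lesion_probs * det_mask) > thr            # keep lesions confirmed by the detector
```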

The webpage of this paper can be found here, and the code of the project (in TensorFlow) here. The PDF of the paper is also available on arXiv.


The second paper will be presented at the NIPS Time Series Workshop: “Skip RNN: Skipping State Updates in Recurrent Neural Networks”. Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges such as slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates, thus shortening the effective size of the computational graph. The model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of baseline RNN models. This is joint work with researchers at Google Inc., Universitat Politècnica de Catalunya and Columbia University.
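As an illustration of the state-skipping mechanism, here is a simplified NumPy sketch of one inference step of a Skip RNN-style cell. It is not the released TensorFlow implementation; `cell`, `w_p` and `b_p` are assumed placeholder names for the wrapped RNN cell and the parameters of the layer that emits the update-probability increment.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def skip_rnn_step(x, h, u_tilde, cell, w_p, b_p):
    """One inference step of a Skip RNN-style cell (simplified sketch).

    cell(x, h) -> new hidden state of the wrapped RNN cell (placeholder interface).
    u_tilde    -> accumulated update probability carried between time steps.
    """
    u = float(np.round(u_tilde))                 # binary gate: 1 = update state, 0 = skip
    h_new = cell(x, h) if u == 1.0 else h        # a skipped step copies the state unchanged
    delta_u = sigmoid(np.dot(w_p, h_new) + b_p)  # increment for the update probability
    # After an update, accumulation restarts at delta_u; after a skip it keeps growing,
    # so the cell is guaranteed to update again after enough skipped steps.
    u_tilde_new = u * delta_u + (1.0 - u) * min(u_tilde + delta_u, 1.0)
    return h_new, u_tilde_new, u
```

Under this sketch, the budget constraint mentioned above could be encouraged by adding a training loss term on the average of the emitted gates `u`, penalizing frequent state updates.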

The webpage of this paper can be found here, and the code of the project (in TensorFlow) here. The PDF of the paper is also available on arXiv.


December 3rd, 2017