Facebook Open Sources Modules for Faster Deep Learning on Torch

Facebook has open sourced a number of modules for faster training of neural networks on Torch.

Not long after Nvidia released cuDNN, a CUDA-based library for deep neural networks, Facebook’s AI Research laboratory (FAIR) released a set of Torch modules, collectively called fbcunn, that it describes as “significantly faster than the default ones.” The modules mainly target convolutional nets, are optimized for GPUs, and are built on Nvidia’s cuFFT library. The package contains:

  • Spatial convolution modules using FFT to accelerate convolutions
  • Containers for parallelizing both the data and model training on multiple GPUs
  • Wrappers for FFT/IFFT 
  • A temporal convolution layer that is 1.5x to 10x faster than cuDNN
  • A lookup table for neural language models and word embeddings
  • A hierarchical SoftMax module for training with very large numbers of classes
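
The hierarchical SoftMax in the last bullet can be sketched in a few lines: instead of normalizing over all V classes at once, the model first predicts a cluster of classes and then a class within that cluster, cutting the per-example cost from O(V) to roughly O(sqrt(V)). Everything below (the two-level split, the names, the shapes) is an illustrative assumption in Python/NumPy, not fbcunn’s actual Lua API:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a score vector.
    e = np.exp(z - z.max())
    return e / e.sum()

V = 16          # total number of classes (vocabulary size)
C = 4           # number of clusters, roughly sqrt(V)
d = 8           # hidden-state dimension
rng = np.random.default_rng(1)
W_cluster = rng.standard_normal((C, d))        # scores for picking a cluster
W_class = rng.standard_normal((C, V // C, d))  # per-cluster class scores

def hsm_prob(h, target):
    """P(target | h) = P(cluster | h) * P(class within cluster | h)."""
    c, k = divmod(target, V // C)
    p_cluster = softmax(W_cluster @ h)[c]   # normalize over C clusters only
    p_class = softmax(W_class[c] @ h)[k]    # normalize over V/C classes only
    return p_cluster * p_class

h = rng.standard_normal(d)
probs = [hsm_prob(h, t) for t in range(V)]
# The factorization still yields a valid distribution over all V classes.
assert abs(sum(probs) - 1.0) < 1e-9
```

Each training step touches only one cluster’s score block, which is what makes very large class counts tractable.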

Facebook has built these modules based on ideas coming out of the paper Fast Training of Convolutional Networks through FFTs, co-authored by Yann LeCun, director of FAIR. According to the release notes, fbcunn provides up to 1.84x speed improvement on small kernel sizes (3x3) over cuDNN and up to 23.5x faster for large kernel sizes (5x5).
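
The speedups rest on the convolution theorem: a spatial convolution becomes a pointwise product in the frequency domain, so one FFT, a multiply, and an inverse FFT replace a large sliding-window sum. A minimal NumPy sketch of that equivalence (illustrative only; fbcunn itself is Lua/CUDA code built on cuFFT):

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2D convolution computed via FFT (illustrative)."""
    # Zero-pad the kernel to the image size, then multiply the spectra.
    k = np.zeros_like(image)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))

def direct_circular_conv2d(image, kernel):
    """The same circular convolution as an explicit sliding-window sum."""
    H, W = image.shape
    out = np.zeros_like(image)
    for i in range(H):
        for j in range(W):
            s = 0.0
            for a in range(kernel.shape[0]):
                for b in range(kernel.shape[1]):
                    s += kernel[a, b] * image[(i - a) % H, (j - b) % W]
            out[i, j] = s
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((3, 3))
# Both routes compute the same result.
assert np.allclose(fft_conv2d(img, ker), direct_circular_conv2d(img, ker))
```

The FFT route costs O(HW log HW) regardless of kernel size, while the direct sum grows with the kernel area, which is consistent with the larger reported gains at 5x5 than at 3x3.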

One of the first applications of Torch and fbcunn is faster image recognition; an example is classifying the objects found in 1.2M images from ImageNet.
