Google Tensor2Tensor to speed up deep learning research

Google has launched Tensor2Tensor (T2T), a library that helps researchers train deep learning models built on its TensorFlow framework.

“T2T facilitates the creation of state-of-the-art models for a variety of ML applications, such as translation, parsing, image captioning, and more,” said Google.

“This release also includes a library of datasets and models, including the best models from a few recent papers to help kick-start your own DL research.”
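For a sense of how those bundled components are exposed (this is not from Google's announcement): the library registers its models, hyper-parameter sets, and datasets ("problems") by name, and ships t2t-datagen and t2t-trainer command-line entry points for generating data and launching training. A minimal Python sketch, assuming the registry module and the registered name "transformer" (names vary across T2T releases):

    # Minimal sketch, assuming tensor2tensor's registry module; registered model
    # names such as "transformer" differ between releases, so list them first.
    from tensor2tensor.utils import registry

    print(registry.list_models())                      # bundled model names
    transformer_cls = registry.model("transformer")    # look up a model class by name
    print(transformer_cls.__name__)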

Models available include:

  • Attention Is All You Need
  • Depthwise Separable Convolutions for Neural Machine Translation
  • One Model to Learn Them All
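Each entry corresponds to a model bundled with the library (the Transformer, SliceNet, and MultiModel respectively). As a rough illustration of the "kick-start your own DL research" angle, the sketch below registers a new model the way the project's README does; the decorator and base class are assumptions about the release you install, and the body-method name has varied across versions (model_fn_body in early releases, body later).

    # Hedged sketch of adding your own model to T2T's registry; treat the
    # decorator, base class, and method name as version-dependent assumptions.
    import tensorflow as tf
    from tensor2tensor.utils import registry
    from tensor2tensor.utils import t2t_model

    @registry.register_model
    class MyToyModel(t2t_model.T2TModel):
      def body(self, features):
        # "inputs" arrives already embedded by the problem's input modality;
        # this trivial body applies a single hidden layer and returns it.
        return tf.layers.dense(features["inputs"], 64, activation=tf.nn.relu)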

Google provided the following results on a standard WMT English-German translation task, comparing Tensor2Tensor models against previous state-of-the-art systems.

The T2T Transformer and SliceNet models outperformed GNMT and GNMT+MoE.

Translation Model               Training time       BLEU (difference from baseline)
Transformer (T2T)               3 days on 8 GPUs    28.4 (+7.8)
SliceNet (T2T)                  6 days on 32 GPUs   26.1 (+5.5)
GNMT+MoE                        1 day on 64 GPUs    26.0 (+5.4)
ConvS2S                         18 days on 1 GPU    25.1 (+4.5)
GNMT                            1 day on 96 GPUs    24.6 (+4.0)
ByteNet                         8 days on 32 GPUs   23.8 (+3.2)
MOSES (phrase-based baseline)   N/A                 20.6 (+0.0)
