Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Because it avoids sequential recurrence, the Transformer is far more parallelizable during training and can relate any two positions in a sequence directly through attention, regardless of their distance.
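As a rough illustration of the attention mechanism the architecture is built on, the following is a minimal NumPy sketch of scaled dot-product attention, softmax(QK^T / sqrt(d_k))V; the array names, shapes, and example inputs are illustrative assumptions, not the paper's full formulation (which adds multiple heads, masking, and learned projections).

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K: (seq_len, d_k); V: (seq_len, d_v). Shapes are illustrative assumptions.
    d_k = Q.shape[-1]
    # Compatibility score between every query and every key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted sum of the value vectors.
    return weights @ V

# Illustrative usage with random inputs.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 16)

Because every query attends to every key in a single matrix product, all positions are processed in parallel rather than sequentially as in a recurrent network.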
Experiments on machine translation tasks show the Transformer to be superior in quality to previous state-of-the-art models while being more parallelizable and requiring significantly less time to train.