

Translation (relic)

Movement of a holy relic from one location to another

In Christianity, the translation of relics is the removal of holy objects from one locality to another (usually a higher-status location); usually only the movement of the remains of the saint's body would be treated so formally, with secondary relics such as items of clothing treated with less ceremony. Translations could be accompanied by many acts, including all-night vigils and processions, often involving entire communities.

The solemn translation (in Latin, translatio) of relics is not treated as the outward recognition of sanctity. Rather, miracles confirmed a saint's sanctity, as evinced by the fact that, when the Papacy attempted in the twelfth century to make sanctification an official process, many collections of miracles were written in the hope of providing proof of the saint in question's status. In the early Middle Ages, however, solemn translation marked the moment at which, the saint's miracles having been recognized, the relic was moved by a bishop or abbot to a prominent position within the church. Local veneration was then permitted. This process is known as local canonization.[1]

The date of a translation of a saint's relics was celebrated as a feast day in its own right. For example, January 27 celebrates the translation of the relics of St. John Chrysostom from the Armenian village of Comana (where he died in exile in 407) to Constantinople.[2] The most commonly celebrated feast days, however, are the dies natales (the day on which the saint died, not the modern notion of a birthday).

Relics sometimes travelled very far. The relics of Saint Thyrsus at Sozopolis, Pisidia, in Asia Minor, were brought to Constantinople and then to Spain. His cult became popular in the Iberian Peninsula, where he is known as San Tirso or Santo Tirso.[3] Some of his relics were brought to France: Thyrsus is thus the titular saint of the cathedral of Sisteron in the Basses-Alpes,[4] the Cathédrale Notre Dame et Saint Thyrse, and the patron saint of Sisteron.[5] Liborius of Le Mans became patron saint of Paderborn, in Germany, after his relics were transferred there in 836.[6]


In the early church the disturbance, let alone the division, of the remains of martyrs and other saints was not practised, nor even contemplated. It was assumed that they would remain permanently in their often-unidentified resting places in cemeteries and the catacombs of Rome (but always outside the walls of the city, continuing a pagan taboo). Then martyriums began to be built over the sites of saints' burials. It came to be considered beneficial to the soul to be buried close to saintly remains, and several large "funerary halls" were accordingly built over the sites of martyrs' graves, the primary example being Old Saint Peter's Basilica.

The earliest recorded removal of saintly remains was that of Saint Babylas at Antioch in 354. However, perhaps partly because Constantinople lacked the many saintly graves of Rome, translations soon became common in the Eastern Empire, even though the practice was still prohibited in the West. The Eastern capital was thus able to acquire the remains of Saints Timothy, Andrew and Luke. The division of bodies also began; the 5th-century theologian Theodoretus declared that "Grace remains entire with every part". An altar slab dated 357, found in North Africa but now in the Louvre, records the deposit beneath it of relics from several prominent saints.

Non-anatomical relics, above all that of the True Cross, were divided and widely distributed from the 4th century. In the West a decree of Theodosius I only allowed the moving of a whole sarcophagus with its contents, but the upheavals of the barbarian invasions relaxed the rules, as remains needed to be relocated to safer places.[7]

In the 4th century, Basil the Great requested of the ruler of Scythia Minor, Junius Soranus (Saran), that he should send him the relics of saints of that region. Saran sent the relics of Sabbas the Goth to him in Caesarea, Cappadocia, in 373 or 374 accompanied by a letter, the "Epistle of the Church of God in Gothia to the Church of God located in Cappadocia and to all the Local Churches of the Holy Universal Church".[citation needed] The sending of Sabbas' relics and the writing of the actual letter has been attributed to Bretannio. This letter is the oldest known writing to be composed on Romanian soil and was written in Greek.[citation needed]

The spread of relics all over Europe from the 8th century onward is explained by the fact that after 787, all new Christian churches had to possess a relic before they could be properly consecrated.[8] New churches in areas newly converted to Christianity needed relics, and this encouraged the translation of relics to far-off places. Relics became collectible items, and owning them became a symbol of prestige for cities, kingdoms, and monarchs.[8] Relics were also desirable because they generated income from pilgrims traveling to venerate them. According to one legend concerning Saint Paternian, the inhabitants of Fano competed with those of Cervia for possession of his relics; Cervia would be left with a finger, while Fano would possess the rest of the saint's relics.[9]

The translation of relics was a solemn and important event. In 1261, the relics of Lucian of Beauvais and his two companions were placed in a new reliquary by William of Grès (Guillaume de Grès), the bishop of Beauvais. The translation took place in the presence of St. Louis IX, the king of France, and Theobald II, the king of Navarre, as well as much of the French nobility. The memory of this translation was formerly celebrated in the abbey of Beauvais as the fête des Corps Saints.[10]

On February 14, 1277, while work was being done at the church of St. John the Baptist (Johanniterkirche) in Cologne, the body of Saint Cordula, one of the companions of Saint Ursula, was discovered.[11] Her relics were found to be fragrant, and on the forehead of the saint herself were written the words "Cordula, Queen and Virgin". When Albert the Great, who was residing in Cologne in his old age, heard the account of the finding of the relics,

he wept, praised God from the depth of his soul, and requested the bystanders to sing the Te Deum. Then vesting himself in his episcopal robes, he removed the relics from under the earth, and solemnly translated them into the church of the monks of St. John. After singing Mass, he deposited the holy body in a suitable place, which God has since made illustrious by many miracles.[12]

Some relics were translated from place to place, buffeted by the tides of wars and conflicts. The relics of Saint Leocadia were moved from Toledo to Oviedo during the reign of Abd ar-Rahman II, and from Oviedo they were brought to Saint-Ghislain (in present-day Belgium). Her relics were venerated there by Philip the Handsome and Joanna of Castile, who recovered for Toledo a tibia of the saint. Fernando Álvarez de Toledo, 3rd Duke of Alba attempted unsuccessfully to rescue the rest of her relics.[13] Finally, a Spanish Jesuit, after many travels, brought the rest of the saint's relics to Rome in 1586. From Rome they were brought to Valencia by sea, and then finally brought to Toledo from Cuenca. Philip II of Spain presided over a solemn ceremony commemorating the final translation of her relics to Toledo, in April 1587.[13]

Idesbald's relics were moved from their resting-place at the abbey of Ten Duinen after the Geuzen ("Sea Beggars") plundered the abbey in 1577; his relics were translated again to Bruges in 1796 to avoid having them destroyed by Revolutionary troops.[14]

The translation of the relics continued into modern times. On December 4, 1796, as a result of the French Revolution, the relics of Saint Lutgardis were carried to Ittre from Awirs. Her relics remain in Ittre.[15]

Notable translations

Among the most famous translations is that of St Benedict of Nursia, author of the "Regula S. Benedicti", from Cassino to Fleury, which Adrevald memorialized. In England, the lengthy travels of St Cuthbert's remains to escape the Vikings, and then his less respectful treatment after the English Reformation, have been much studied, as his coffin, gospel book and other items buried with him are now very rare representatives of Anglo-Saxon art.[citation needed]

Some well-known translations of relics include the removal of the body of Saint Nicholas from Myra in Asia Minor to Bari, Italy, in 1087. Tradesmen of Bari visited the relics of Saint Nicholas in 1087 after learning their resting-place from the monks who guarded them. According to one account, the monks showed the resting-place but then became immediately suspicious: "Why you men, do you make such a request? You haven't planned to carry off the remains of the holy saint from here? You don't intend to remove it to your own region? If that is your purpose, then let it be clearly known to you that you parley with unyielding men, even if it mean our death."[16] The tradesmen tried different tactics, including force, and managed to take hold of the relics. An anonymous chronicler writes about what happened when the inhabitants of Myra found out:

Meanwhile, the inhabitants of the city learned of all that had happened from the monks who had been set free. Therefore they proceeded in a body, a multitude of men and women, to the wharves, all of them filled and heavy with affliction. And they wept for themselves and their children, that they had been left bereft of so great a blessing ... Then they added tears upon tears and wailing and unassuageable lamentation to their groans, saying: "Give us our patron and our champion, who with all consideration protected us from our enemies visible and invisible. And if we are entirely unworthy, do not leave us without a share, of at least some small portion of him."

— Anonymous, Greek account of the transfer of the Body of Saint Nicholas, 13th century, [16]

Professor Nevzat Cevik, the Director of Archaeological Excavations in Demre (Myra), has recently recommended that the Turkish government request the repatriation of St Nicholas' relics, alleging that it had always been the saint's intention to be buried in Myra.[17] The Venetians, who also claimed to have some parts of St Nicholas, had another story: they brought the remains back to Venice, but on the way left an arm of St Nicholas at Bari (The Morosini Codex 49A).

In 828, Venetian merchants acquired the supposed relics of Saint Mark the Evangelist from Alexandria, Egypt. These are housed in St Mark's Basilica; in 1968, a small fragment of bone was donated to the Coptic Church in Alexandria.

In recent times

A famous and recent example is the return of the relics of John Chrysostom and Gregory of Nazianzus to the See of Constantinople (Greek Orthodox Church) by Pope John Paul II in November 2004.[18][19] Another modern example is the exhumation, display, and reburial of the relics of Padre Pio in 2008–2009.[citation needed]


  1. ^ Eric Waldram Kemp, Canonization and Authority in the Western Church, Oxford, 1948.
  2. ^ "The Translation of the Relics of St. John Chrysostom".
  3. ^ Christian Iconography, archived 2006-09-09 at the Wayback Machine.
  4. ^
  5. ^ "Cathédrale Notre-Dame et Saint-Thyrse".
  6. ^ "Saints of July 23".
  7. ^ Eduard Syndicus, Early Christian Art, p. 73, Burns & Oates, London, 1962; Catholic Encyclopedia (1913) on the Louvre slab and True Cross.
  8. ^ a b "Fully Certified Professional & Qualified Translators". DHC Translations. 2017-07-12. Retrieved 2020-10-21.
  9. ^ "Riti e Credenze: San Paterniano 13 novembre - Cervia Turismo". Archived from the original on September 27, 2007.
  10. ^ "St Lucien - 1er Evêque du Beauvaisis". Archived from the original on December 4, 2007.
  11. ^ Joachim Sighart, Albert the Great (R. Washbourne, 1876), 360.
  12. ^ Joachim Sighart, Albert the Great (R. Washbourne, 1876), 361–362.
  13. ^ a b "La diócesis de Toledo celebra el Año Jubilar de santa Leocadia". Archived from the original on September 27, 2007.
  14. ^ "Beato Idesbaldo delle Dune su".
  15. ^ "Santa Lutgarda su".
  16. ^ a b "Internet History Sourcebooks Project".
  17. ^ "Turks want Santa's bones returned". BBC News. 2009-12-28. Retrieved 2020-10-21.
  18. ^ "Return of the Relics of Sts. Gregory the Theologian and John Chrysostom to Constantinople" – via
  19. ^ "Ecumenical celebration relics of Saints Gregory Nazianzus and John Chrysostom [IT]".

Further reading

  • Patrick J. Geary, Furta Sacra, Princeton University Press, 1975.
  • Eric W. Kemp, Canonization and Authority in the Western Church, Oxford University Press, 1948.


Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals


The quality of human translation was long thought to be unattainable for computer translation systems. In this study, we present a deep-learning system, CUBBITT, which challenges this view. In a context-aware blind evaluation by human judges, CUBBITT significantly outperformed professional-agency English-to-Czech news translation in preserving text meaning (translation adequacy). While human translation is still rated as more fluent, CUBBITT is shown to be substantially more fluent than previous state-of-the-art systems. Moreover, most participants in a Translation Turing test struggled to distinguish CUBBITT translations from human translations. This work approaches the quality of human translation and even surpasses it in adequacy in certain circumstances. This suggests that deep learning may have the potential to replace humans in applications where conservation of meaning is the primary aim.


The idea of using computers for translation of natural languages is as old as computers themselves1. However, major success remained elusive, in spite of the unwavering efforts of machine translation (MT) research over the last 70 years. The main challenges faced by MT systems are correctly resolving the inherent ambiguity of language in the source text, and adequately expressing its intended meaning in the target language (translation adequacy) in a well-formed and fluent way (translation fluency). Among the key complications is rich morphology in the source and especially in the target language2. For these reasons, the level of human translation has been thought to be the upper bound of achievable performance3. There are also other challenges in recent MT research, such as gender bias4 or unsupervised MT5, which are mostly orthogonal to the present work.

Deep learning has transformed multiple fields in recent years, ranging from computer vision6 to artificial intelligence in games7. In line with these advances, the field of MT has shifted to deep-learning neural-based methods8,9,10,11, which replaced previous approaches such as rule-based systems12 or statistical phrase-based methods13,14. Relying on vast amounts of training data and unprecedented computing power, neural MT (NMT) models can now afford to access the complete information available anywhere in the source sentence and automatically learn which piece is useful at which stage of producing the output text. This removal of past independence assumptions is the key reason behind the dramatic improvement in translation quality. As a result, neural translation has even managed to considerably narrow the gap to human-translation quality on isolated sentences15,16.

In this work, we present a neural-based translation system, CUBBITT (Charles University Block-Backtranslation-Improved Transformer Translation), which significantly outperformed professional translators on isolated sentences in a prestigious competition, the WMT 2018 English–Czech News Translation Task17. We perform a new study with conditions that are more representative and far more challenging for MT, showing that CUBBITT conveys the meaning of news articles significantly better than human translators even when cross-sentence context is taken into account. In addition, we validate the methodological improvements using an automatic metric on English↔French and English↔Polish news articles. Finally, we provide insights into the principles underlying CUBBITT's key technological advancement and how it improves translation quality.


Deep-learning framework Transformer

Our CUBBITT system (Methods 1) follows the basic Transformer encoder-decoder architecture introduced by Vaswani et al.18. The encoder represents subwords19 in the source-language sentence by a list of vectors, automatically extracting features describing relevant aspects and relationships in the sentence, creating a deep representation of the original sentence. Subsequently, the decoder converts the deep representation to a new sentence in the target language (Fig. 1a, Supplementary Fig. 1).

a The input sentence is converted to a numerical representation and encoded into a deep representation by a six-layer encoder, which is subsequently decoded by a six-layer decoder into the translation in the target language. Layers of the encoder and decoder consist of self-attention and feed-forward layers and the decoder also contains an encoder-decoder attention layer, with an input of the deep representation created by the last layer of encoder. b Visualization of encoder self-attention between the first two layers (one attention head shown, focusing on “magazine” and “her”). The strong attention link between ‘magazine’ and ‘gun’ suggests why CUBBITT ultimately correctly translates “magazine” as “zásobník” (gun magazine), rather than “časopis” (e.g., news magazine). The attention link between ‘woman’ and ‘her’ illustrates how the system internally learns coreference. c Encoder-decoder attention on the second layer of the decoder. Two heads are shown in different colors, each focusing on a different translation aspect which is described in italic. We note that the attention weights were learned spontaneously by the network, not inputted a priori.


A critical feature of the encoder and decoder is self-attention, which allows identification and representation of relationships between sentence elements. While the encoder attention captures the relationship between the elements in the input sentence (Fig. 1b), the encoder-decoder attention learns the relationship between elements in the deep representation of the input sentence and elements in the translation (Fig. 1c). In particular, our system utilizes the so-called multi-head attention, where several independent attention functions are trained at once, allowing representation of multiple linguistic phenomena. These functions may facilitate, for example, the translation of ambiguous words or coreference resolution.
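The scaled dot-product attention at the heart of the Transformer can be sketched in a few lines of NumPy. This toy single-head example (random weights; the names `Wq`, `Wk`, `Wv` are ours for illustration, not from the paper) shows how each position produces a probability distribution over all source positions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (len_q, len_k) similarity scores
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

# Toy example: 4 source "subwords", model dimension 8, one attention head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # subword representations
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
assert out.shape == (4, 8)
assert np.allclose(weights.sum(axis=-1), 1.0)
```

Multi-head attention simply runs several such functions in parallel with independent weight matrices and concatenates their outputs, which is what allows different heads to specialize in different linguistic phenomena.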

Utilizing monolingual data via backtranslation

The success of NMT depends heavily on the quantity and quality of the training parallel sentences (i.e., pairs of sentences in the source and target language). Thanks to long-term efforts of researchers, large parallel corpora have been created for several language pairs, e.g., the Czech-English corpus CzEng20 or the multi-lingual corpus Opus21. Although millions of parallel sentences became freely available in this way, this is still not sufficient. However, the parallel data can be complemented by monolingual target-language data, which are usually available in much larger amounts than the parallel data. CUBBITT leverages the monolingual data using a technique termed backtranslation, where the monolingual target-language data are machine translated to the source language, and the resulting sentence pairs are used as additional (synthetic) parallel training data19. Since the target side in backtranslation consists of authentic sentences originally written in the target language, backtranslation can improve fluency (and sometimes even adequacy) of the final translations by naturally learning the language model of the target language.
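The backtranslation data flow can be sketched minimally as follows. Here `translate_cs_to_en` is a hypothetical stand-in for a trained reverse-direction (Czech→English) model; only the overall shape of the pipeline reflects the technique described above:

```python
def translate_cs_to_en(sentence: str) -> str:
    # Placeholder for a trained Czech->English MT model (hypothetical).
    toy_model = {"Ahoj světe .": "Hello world ."}
    return toy_model.get(sentence, "<unk>")

def backtranslate(monolingual_cs: list[str]) -> list[tuple[str, str]]:
    """Turn monolingual target-language data into synthetic (source, target)
    training pairs; note the target side stays authentic."""
    return [(translate_cs_to_en(cs), cs) for cs in monolingual_cs]

synthetic_pairs = backtranslate(["Ahoj světe ."])
# The synthetic English source is machine-made, the Czech target is authentic.
```

In practice the synthetic pairs are then concatenated with the authentic parallel corpus for training the forward-direction model.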

CUBBITT is trained with backtranslation data in a novel block regime (block-BT), where the training data are presented to the neural network in blocks of authentic parallel data alternated with blocks of synthetic data. We compared our block regime to backtranslation using the traditional mixed regime (mix-BT), where all synthetic and authentic sentences are mixed together in random order, and evaluated the learning curves using BLEU, an automatic measure that compares the similarity of an MT output to human reference translations (Methods 2–13). While training with mix-BT led to a gradually increasing learning curve, block-BT showed further improved performance in the authentic training phases, alternating with reduced performance in the synthetic ones (Fig. 2a, thin lines). In the authentic training phases, block-BT was better than mix-BT, suggesting that a model extracted in the authentic-data phase might perform better than a mix-BT-trained model.
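The difference between the two regimes is purely one of data ordering. A minimal sketch (toy data; block size and helper names are illustrative, not from the paper):

```python
import random

def mix_bt_schedule(authentic, synthetic, seed=0):
    """Mixed regime (mix-BT): authentic and synthetic examples are
    shuffled together into one random stream."""
    data = authentic + synthetic
    random.Random(seed).shuffle(data)
    return data

def block_bt_schedule(authentic, synthetic, block_size):
    """Block regime (block-BT): whole blocks of authentic data alternate
    with whole blocks of synthetic data."""
    a = [authentic[i:i + block_size] for i in range(0, len(authentic), block_size)]
    s = [synthetic[i:i + block_size] for i in range(0, len(synthetic), block_size)]
    schedule = []
    for auth_block, synth_block in zip(a, s):
        schedule += auth_block + synth_block
    return schedule

auth = [("auth", i) for i in range(4)]
synth = [("synth", i) for i in range(4)]
blocked = block_bt_schedule(auth, synth, block_size=2)
# blocked alternates contiguous runs of 2 authentic then 2 synthetic examples.
```

The alternation is what produces the oscillating learning curve described above: performance peaks at the end of each authentic block.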

a The effect of averaging eight last checkpoints with block-BT and mix-BT on the translation quality as measured by BLEU on the development set WMT13 newstest. The callouts (pointing to the initial and final peaks of the block-BT + avg8 curve) illustrate the 8 averaged checkpoints (synth-trained ones as brown circles, auth-trained ones as violet circles). b Diagram of iterated backtranslation: the system MT1 trained only on authentic parallel data is used to translate monolingual Czech data into English, which are used to train system MT2; this step can be iterated one or more times to obtain MT3, MT4, etc. The block-BT + avg8 model shown in a is the MT2 model in (B) and in Supplementary Fig. 2. c BLEU results on WMT17 test-set relative to the WMT17 winner UEdin2017. All five systems use checkpoint averaging.


CUBBITT combines block-BT with checkpoint averaging, in which the networks from the eight last checkpoints are merged by arithmetic averaging, an efficient way to gain stability and thereby improve model performance18. Importantly, we observed that checkpoint averaging works in synergy with block-BT: the BLEU improvement from the combination is clearly higher than the sum of the BLEU improvements from the two methods in separation (Fig. 2a). The best performance was obtained when averaging authentic-trained and synthetic-trained models in a ratio of 6:2; interestingly, the same ratio turned out to be optimal on several occasions during training. This also points to an advantage of block-BT combined with checkpoint averaging: the method automatically finds the optimal ratio of synthetic- and authentic-trained models, as it evaluates all the ratios during training (Fig. 2a).
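Checkpoint averaging itself is just an arithmetic mean over saved parameter sets. A minimal sketch, assuming checkpoints are stored as name-to-array dicts (a common convention; the exact storage format used by the authors is not specified here):

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Return a model whose every parameter is the arithmetic mean of that
    parameter across the given checkpoints."""
    names = checkpoints[0].keys()
    return {name: np.mean([c[name] for c in checkpoints], axis=0)
            for name in names}

# Toy example: average 8 checkpoints of a single 2x2 weight matrix whose
# values happen to be 0, 1, ..., 7.
ckpts = [{"w": np.full((2, 2), float(step))} for step in range(8)]
avg_model = average_checkpoints(ckpts)
assert np.allclose(avg_model["w"], 3.5)  # mean of 0..7
```

With block-BT, a window of the last eight checkpoints naturally spans both authentic-trained and synthetic-trained models, which is how the 6:2 mixing ratio described above arises without being set explicitly.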

The final CUBBITT system was trained using iterated block-BT (Fig. 2b, Supplementary Fig. 2). This was accompanied by other steps, such as data filtering, translationese tuning, and simple regex postprocessing (Methods 11). Evaluating the individual components of CUBBITT automatically on a previously unseen test-set from WMT17 showed a significant improvement in BLEU over UEdin2017, the state-of-the-art system from 2017 (Fig. 2c).
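The BLEU metric used for these automatic evaluations can be illustrated with a deliberately minimal sentence-level implementation. Real toolkits (e.g., sacrebleu) add smoothing, standardized tokenization, and corpus-level aggregation that are omitted in this sketch:

```python
from collections import Counter
from math import exp, log

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Minimal sentence-level BLEU: modified n-gram precisions (n=1..4),
    geometric mean, and brevity penalty. Unsmoothed, for illustration only."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c[g], r[g]) for g in c)   # clipped n-gram matches
        total = max(1, sum(c.values()))
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else exp(1 - len(ref) / max(1, len(cand)))
    return bp * exp(sum(log(p) for p in precisions) / max_n)

score = bleu("the cat is on the mat", "the cat is on the mat")
assert abs(score - 1.0) < 1e-6  # identical sentences score 1.0
```

A candidate sharing no n-grams with the reference scores near zero, while partial overlap yields intermediate values; this is the similarity that the learning curves in Fig. 2 track.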

Evaluation: CUBBITT versus a professional agency translation

In 2018, CUBBITT won the English→Czech and Czech→English news translation tasks in WMT1817, surpassing not only its machine competitors: it was also the only MT system that significantly outperformed the reference human translation by a professional agency in the WMT18 English→Czech news translation task (other language pairs were not evaluated in a way that allows comparison with the human reference) (Fig. 3a). Since this result is highly surprising, we decided to investigate it in greater detail, evaluating potential confounding factors and focusing on how it can be explained and interpreted. We first confirmed that the results are not due to the original language of the reference sentences being English in half of the evaluated sentences and Czech in the other half of the test dataset (Supplementary Fig. 4; Methods 13), which was proposed as a potential confounding factor by the WMT organizers17 and others22,23.

a Results from context-unaware evaluation in WMT18, showing distributions of source-based direct assessment (SrcDA) of five MT systems and human reference translation, sorted by average score. CUBBITT was submitted under the name CUNI-Transformer. Online G, A, and B are three anonymized online MT systems. b Translations by CUBBITT and human reference were scored by six non-professionals in the terms of adequacy, fluency and overall quality in a context-aware evaluation. The evaluation was blind, i.e., no information was provided on whether the translations are human or machine translated. The scores (0–10) are shown as violin plots with boxplots (median + interquartile range), while the boxes below represent the percentage of sentences scored better in reference (orange), CUBBITT (blue), or the same (gray); the star symbol marks the ratio of orange vs. blue, ignoring gray. Sign test was used to evaluate difference between human and machine translation. c As in a, but evaluation by six professional translators. ***P < 0.001; **P < 0.01; *P < 0.05.

Full size image

An important drawback of the WMT18 evaluation was the lack of cross-sentence context, as sentences were evaluated in random order and without document context. While the participating MT systems translated individual sentences independently, the human reference was created as a translation of entire documents (news articles). The absence of cross-sentence context in the evaluation was recently shown to cause an overestimation of the quality of MT translations compared to the human reference22,23. For example, evaluators will miss MT errors that would be evident only from the cross-sentence context, such as gender mismatch or incorrect translation of an ambiguous expression. On the other hand, independent evaluation of sentences translated with cross-sentence context might unfairly penalize reference translations for moving pieces of information across sentence boundaries, as this will appear as an omission of meaning in one sentence and an addition in another.

We therefore conducted a new evaluation, using the same English→Czech test dataset of source documents, CUBBITT translations, and human reference translations, but presenting the evaluators with not only the evaluated sentences but also the document context (Methods 14–18; Supplementary Fig. 5). In order to gain further insight into the results, we asked the evaluators to assess the translations in terms of adequacy (the degree to which the meaning of the source sentence is preserved in the translation), fluency (how fluent the sentence sounds in the target language), as well as the overall quality of the translations. Inspired by a recent discussion of the translation proficiency of evaluators22, we recruited two groups of evaluators: six professional translators (native in the target language) and seven non-professionals (with excellent command of the source language and native in the target language). An additional exploratory group of three translation theoreticians was also recruited. In total, 15 out of the 16 evaluators passed a quality control check, giving 7824 sentence-level scores on 53 documents in total. See Methods 13–18 for further technical details of the study.

Focusing first on evaluations by non-professionals, as in WMT18, but in our context-aware assessment, CUBBITT was evaluated as significantly better than the human reference in adequacy (P = 4.6e-8, sign test), with 52% of sentences scored better and only 26% scored worse (Fig. 3b). On the other hand, the evaluators found the human reference to be more fluent (P = 2.1e-6, sign test), scoring CUBBITT better in 26% of sentences and worse in 48% (Fig. 3b). In overall quality, CUBBITT nonsignificantly outperformed the human reference (P = 0.6, sign test; 41% better than reference, 38% worse; Fig. 3b).

In the evaluation by professional translators, CUBBITT remained significantly better in adequacy than the human reference (P = 7.1e-4, sign test; 49% better, 33% worse; Fig. 3c), although it scored worse in both fluency (P = 3.3e-19, sign test; 23% better, 64% worse) and overall quality (P = 3.0e-7, sign test; 32% better, 56% worse; Fig. 3c). Fitting a linear model of the weighting of adequacy and fluency in the overall quality suggests that professional translators value fluency more than non-professionals; this pattern was also observed in the exploratory group of translation theoreticians (Supplementary Fig. 6). Finally, when scores from all 15 evaluators were pooled together, the previous results were confirmed: CUBBITT outperformed the human reference in adequacy, whereas the reference was scored better in fluency and overall quality (Supplementary Fig. 7). Surprisingly, we observed a weak but significant effect of sentence length: CUBBITT's performance relative to the human reference is more favorable in longer sentences with regard to adequacy, fluency, and overall quality (Supplementary Fig. 8, including an example of a well-translated complex sentence).
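The sign test used in these comparisons reduces each sentence pair to better/worse/tie (ties dropped) and applies an exact two-sided binomial test. A minimal sketch with illustrative counts (not the paper's exact tallies):

```python
from math import comb

def sign_test_p(better: int, worse: int) -> float:
    """Two-sided exact sign test: ties are dropped beforehand, and the
    p-value comes from Binomial(n, 0.5) on the smaller count."""
    n = better + worse
    k = min(better, worse)
    # One tail: P(X <= k) for X ~ Binomial(n, 1/2); double it for two-sided.
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative: one system rated better on 211 sentences, worse on 105.
p = sign_test_p(better=211, worse=105)
# A heavily lopsided split like this yields a very small p-value.
```

This is why the percentages "better/worse/same" reported alongside each P-value are sufficient to reproduce the test: only the better and worse counts enter it.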

We next decided to perform additional evaluation that would allow us to better understand where and why our machine translations are better or worse than the human translations. We asked three professional translators and three non-professionals to add annotations of types of errors in the two translations (Methods 19). In addition, the evaluators were asked to indicate whether the translation was wrong because of cross-sentence context.

CUBBITT made significantly fewer errors in addition of meaning, omission of meaning, shift of meaning, other adequacy errors, grammar, and spelling (Fig. 4a; example in Fig. 5a–c; Supplementary Data 1). On the other hand, the reference performed better in the error classes "other fluency errors" and "ambiguous words" (Fig. 4a, Supplementary Fig. 9; examples in Fig. 5d, e; Supplementary Data 1). As expected, CUBBITT made significantly more errors due to cross-sentence context (11.7% compared to 5.2% in the reference; P = 1.2e-10, sign test; Fig. 4a), confirming the importance of context-aware evaluation of translation quality. Interestingly, when only sentences without context errors are taken into account, not only adequacy but also overall quality is significantly better in CUBBITT than in the reference in ratings by non-professionals (P = 0.001, sign test; 49% better, 29% worse; Supplementary Fig. 10), in line with the context-unaware evaluation in WMT18.

a Percentages of sentences with various types of errors are shown for translations by the human reference and CUBBITT. Errors in 405 sentences were evaluated by six evaluators (three professional translators and three non-professionals). The sign test was used to evaluate the difference between human and machine translation. b Translations by five machine translation systems were scored by five professional translators in terms of adequacy and fluency in a blind context-aware evaluation. The systems are sorted by mean performance, and the scores (0–10) for individual systems are shown as violin plots with boxplots (median + interquartile range). For each pair of neighboring systems, the box between them gives the percentage of sentences scored as better in one, the other, or the same in both (gray). The star symbol marks the ratio when ties are ignored. The sign test was used to evaluate the difference between pairs of MT systems. ***P < 0.001; **P < 0.01; *P < 0.05.


The Czech translations by the human reference and CUBBITT, as well as the values of the manual evaluation for the individual sentences, are shown in Supplementary Data 1.


We observed that the type of document, e.g., business vs. sports articles, can also affect the quality of machine translation relative to human translation (Methods 18). The number of evaluated documents (53) does not allow strong conclusions at the level of whole documents, but the document-level evaluations nevertheless suggest that CUBBITT performs best on news articles about business and politics (Supplementary Fig. 11A-B). Conversely, it performed worst on entertainment/art (in both adequacy and fluency) and on news articles about sport (in fluency). Similar results can be observed in sentence-level evaluations across document types (Supplementary Fig. 11C–D).

The fact that translation adequacy is the main strength of CUBBITT is surprising, as NMT had been shown to improve primarily fluency over previous approaches24. We were therefore interested in comparing the fluency of translations by CUBBITT and previous state-of-the-art MT systems (Methods 20). We evaluated CUBBITT in a side-by-side comparison with Google Translate15 (an established benchmark for MT) and UEdin25 (the winning system in WMT 2017 and a runner-up in WMT 2018). Moreover, we included a version of the basic Transformer with one iteration of mix-BT and another version of the basic Transformer with block-BT (but without iterated block-BT), providing a human rating of different approaches to backtranslation. The evaluators were asked to assess the adequacy and fluency of the five presented translations (again in a blind setting and taking cross-sentence context into account).

In the context-aware evaluation of the five MT systems, CUBBITT significantly outperformed Google Translate and UEdin in both adequacy (mean increase of 2.4 and 1.2 points, respectively) and fluency (mean increase of 2.1 and 1.2 points, respectively) (Fig. 4b). The evaluation also shows that this gain stems from several components of CUBBITT: the Transformer model with basic (mix-BT) backtranslation, the replacement of mix-BT with block-BT (adequacy: mean increase of 0.4, P = 3.9e-5; fluency: mean increase of 0.3, P = 1.4e-4, sign test), and to a lesser extent other features of the final CUBBITT system, such as iterated backtranslation and data filtering (adequacy: mean increase of 0.2, P = 0.054; fluency: mean increase of 0.1, P = 0.233, sign test).

Finally, we asked whether CUBBITT translations are distinguishable from human ones. We conducted a sentence-level Translation Turing test, in which participants judged, for each of 100 independent sentences, whether its translation was produced by a machine or a human (the source sentence and a single translation were shown; Methods 21). A group of 16 participants was given machine translations by Google Translate mixed in a 1:1 ratio with reference translations. In this group, only one participant (with an accuracy of 61%) failed to significantly distinguish machine from human translations, while the other 15 recognized the human translations (with accuracy reaching as high as 88%; Fig. 6). In a different group of 15 participants, who were shown machine translations by CUBBITT mixed (again in a 1:1 ratio) with reference translations, nine participants did not reach the significance threshold of the test (with the lowest accuracy being 56%; Fig. 6). Interestingly, CUBBITT was not significantly distinguished from human translations by three professional translators, three MT researchers, and three other participants. One potential contributor to the human-likeness of CUBBITT is its ability to restructure translated sentences where the English structure would sound unnatural in Czech (see an example in Fig. 5f, Supplementary Data 1).
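Per-participant significance in such a test is assessed with an exact test followed by a false-discovery-rate correction across participants. A sketch of the Benjamini–Hochberg step, applied to invented per-participant p-values:

```python
def benjamini_hochberg(p_values):
    """Benjamini-Hochberg FDR correction: returns Q values in input order."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of Q values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, p_values[i] * m / rank)
        q[i] = prev
    return q

# Hypothetical per-participant p-values from the Turing test.
p_vals = [0.0001, 0.003, 0.04, 0.20, 0.45, 0.78]
q_vals = benjamini_hochberg(p_vals)
significant = [q < 0.05 for q in q_vals]
print(significant)
```

Participants whose Q value falls below 0.05 are counted as having significantly distinguished human from machine translations.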

a Accuracy of individual participants in distinguishing machine from human translations is shown as a bar graph. The Fisher test was used to assess whether a participant significantly distinguished human and machine translations, and the Benjamini–Hochberg method was used to correct for multiple testing. Participants with a Q value below 0.05 were considered to have significantly distinguished between human and machine translations. b Percentage of participants who significantly distinguished human and machine translations for CUBBITT (top, blue) and for Google Translate (bottom, green).


Generality of block backtranslation

Block-BT with checkpoint averaging clearly improves English→Czech news translation quality. To demonstrate that the benefits of our approach are not limited to this language pair, we trained English→French, French→English, English→Polish, and Polish→English versions of CUBBITT (Methods 4, 5, 12) and evaluated them using BLEU as in Fig. 2a. The results are consistent with the behavior on the English→Czech language pair, showing a synergistic benefit of block-BT with checkpoint averaging (Fig. 2a, Supplementary Figs. 3, 14).

How block backtranslation improves translation

Subsequently, we sought to investigate the synergy between block-BT and checkpoint averaging, aiming to understand the mechanism by which it improves translation on the English→Czech language pair. We first tested the simple hypothesis that the only benefit of the block regime with checkpoint averaging is automatic detection of the optimal ratio of authentic and synthetic data, given that in block-BT the averaging window explores various ratios of networks trained on authentic and synthetic data. Throughout our experiments, the optimal ratio of authentic and synthetic blocks was ca. 3:1, so we hypothesized that mix-BT would benefit from authentic and synthetic data mixed in the same ratio. However, this hypothesis was not supported by additional explorations (Supplementary Fig. 15), suggesting that a more profound mechanism underlies the synergy.

We next hypothesized that training in the block regime, as opposed to the mix regime, might help the network focus on the two types of blocks (authentic and synthetic) one at a time. This would allow the networks to learn the properties and benefits of the two blocks more thoroughly, leading to a better exploration of the space of networks and ultimately to greater translation diversity during training. We measured the translation diversity of a single sentence as the number of unique translations produced by the MT system at hourly checkpoints during training. Comparing translation diversity between block-BT and mix-BT on the WMT13 newstest, we observed that block-BT had greater translation diversity in 78% of sentences, smaller in 18%, and equal in the remaining 4% (Methods 22–23), supporting the hypothesis of greater translation diversity in block-BT than in mix-BT.
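The diversity measure and the per-sentence comparison can be sketched as follows; the checkpoint outputs below are invented, and the function names are ours:

```python
def translation_diversity(outputs):
    """Diversity of one sentence: number of unique translations
    produced across training checkpoints."""
    return len(set(outputs))

def compare_diversity(block_outputs, mix_outputs):
    """Per-sentence comparison of diversity between two training regimes."""
    greater = smaller = equal = 0
    for block, mix in zip(block_outputs, mix_outputs):
        d_block, d_mix = translation_diversity(block), translation_diversity(mix)
        if d_block > d_mix:
            greater += 1
        elif d_block < d_mix:
            smaller += 1
        else:
            equal += 1
    return greater, smaller, equal

# Each inner list holds one sentence's translations at hourly checkpoints.
block = [["t1", "t2", "t3"], ["u1", "u1"], ["v1", "v2"]]
mix = [["t1", "t1", "t1"], ["u1", "u2"], ["v1", "v2"]]
print(compare_diversity(block, mix))  # (1, 1, 1)
```
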

The increased diversity could be leveraged by checkpoint averaging in multiple ways. In theory, this can be as simple as selecting the most frequent sentence translation among the eight averaged checkpoints. At the same time, checkpoint averaging can generate sentences that were not the preferred translation in any of the eight averaged checkpoints (termed novelAvg8 translations), potentially combining the checkpoints' best translation properties. This may involve producing a combination of phrase translations seen in the averaged checkpoints (Fig. 7a, Supplementary Fig. 17) or creating a sentence with phrases not seen in any of them (Fig. 7b). The fact that even phrase translations with low frequency among the eight averaged checkpoints can be chosen by checkpoint averaging stems from the way the networks' confidence in their translations is taken into account (Supplementary Fig. 18).

a A case where the translation resulting from checkpoint averaging is a crossover of translations present in AUTH and SYNTH blocks. All the mentioned translations are shown in Supplementary Fig. 17. b A case where the translation resulting from checkpoint averaging contains a phrase that is not the preferred translation in any of the averaged checkpoints.


Comparing the translations produced by models with and without averaging, we observed that averaging generated at least one translation never seen without averaging (termed novelAvg∞) in 60% of sentences in block-BT and in 31.6% of sentences in mix-BT (Methods 23). Moreover, averaging generated more novelAvg∞ translations in block-BT than in mix-BT in 55% of sentences, fewer in only 6%, and an equal number in 39%.

We next sought to explore the mechanism behind the greater translation diversity and the larger number of novelAvg translations in block-BT compared to mix-BT. We therefore computed how translation diversity and novelAvg8 translations develop during training and how they relate temporally to the blocks of authentic and synthetic data (Methods 24). To track these features over time, we computed diversity and novelAvg8 over the last eight checkpoints (the width of the averaging window) at each checkpoint during training. While mix-BT decreased gradually and smoothly in both metrics over time, block-BT showed a striking difference between the alternating blocks of authentic and synthetic data (Fig. 8a, Supplementary Fig. 16). The novelAvg8 translations in block-BT were most frequent at checkpoints where the eight averaged checkpoints spanned both authentic- and synthetic-trained blocks (Fig. 8a). Interestingly, the translation diversity of the octuples of checkpoints in block-BT (without averaging) was also highest at the borders of the blocks (Supplementary Fig. 16). This suggests that it is the alternation of the blocks that increases the diversity of translations and the generation of novel translations by averaging in block-BT.

a Percentage of WMT13 newstest sentences with a novelAvg8 translation (not seen in the previous eight checkpoints without averaging) over time, shown separately for block-BT (red) and mix-BT (blue). Checkpoints trained in AUTH blocks are denoted by a magenta background and the letter A, while SYNTH blocks are shown on a yellow background with the letter S. b Evaluation of translation quality by BLEU on the WMT13 newstest set for four different versions of block-BT (left) and mix-BT (right), exploring the importance of novelAvg8 sentences created by checkpoint averaging. The general approach is to take the best system using checkpoint averaging (Avg) and substitute the translations of novelAvg8 and not-novelAvg8 sentences with translations produced by the best system without checkpoint averaging (NoAvg), observing the effect on BLEU. Blue shows the BLEU achieved by the model with checkpoint averaging, while purple shows the BLEU achieved by the model without checkpoint averaging. Red shows the BLEU of a system that used checkpoint averaging but in which the translations that are not novelAvg8 were replaced by the translations produced by the system without checkpoint averaging. Conversely, yellow bars show the BLEU of a system that used checkpoint averaging but in which the novelAvg8 translations were replaced by the version without checkpoint averaging.


Finally, we tested whether the generation of novel translations by averaging contributes to the synergy between the block regime and checkpoint averaging as measured by BLEU (Methods 25). We took the best model in block-BT with checkpoint averaging (block-BT-Avg; BLEU 28.24) and in block-BT without averaging (block-BT-NoAvg; BLEU 27.54). We next identified 988 sentences where the averaging in block-BT-Avg generated a novelAvg8 translation, unseen in the eight previous checkpoints without averaging. To determine what role the novelAvg8 sentences play in the improved BLEU of block-BT-Avg over block-BT-NoAvg (Fig. 2a), we computed the BLEU of block-BT-Avg translations in which the translations of the 988 novelAvg8 sentences were replaced by the block-BT-NoAvg translations. This replacement decreased BLEU almost to the level of block-BT-NoAvg (BLEU 27.65, Fig. 8b). Conversely, replacing the 2012 not-novelAvg8 sentences resulted in only a small decrease (BLEU 28.13, Fig. 8b), supporting the importance of novel translations for the success of block-BT with checkpoint averaging. For comparison, we repeated the same analysis with mix-BT and observed that replacing novelAvg8 sentences in mix-BT had a negligible effect on the improvement of mix-BT-Avg over mix-BT-NoAvg (Fig. 8b).
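The replacement analysis can be sketched as follows. The toy data and function names are ours, and a trivial exact-match score stands in for BLEU; the logic of substituting one subset of sentence ids and re-scoring is the same.

```python
def replace_subset(avg_out, noavg_out, ids_to_replace):
    """Hybrid output: the averaged system's translations, with the selected
    sentence ids substituted by the non-averaged system's translations."""
    return [noavg if i in ids_to_replace else avg
            for i, (avg, noavg) in enumerate(zip(avg_out, noavg_out))]

def exact_match_score(hypotheses, references):
    """Stand-in metric (fraction of exact matches); the study used BLEU."""
    hits = sum(h == r for h, r in zip(hypotheses, references))
    return hits / len(references)

refs = ["a b", "c d", "e f", "g h"]
avg = ["a b", "c d", "e f", "x x"]    # averaged system, score 0.75
noavg = ["a b", "y y", "z z", "x x"]  # non-averaged system, score 0.25
novel_ids = {1, 2}                    # sentences where averaging produced a novel translation

hybrid = replace_subset(avg, noavg, novel_ids)
print(exact_match_score(avg, refs), exact_match_score(hybrid, refs))
```

In this invented example, substituting only the novel sentences drops the score to the non-averaged level, mirroring the pattern reported for block-BT.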

Altogether, our analysis shows that the generation of novel sentences is an important mechanism by which checkpoint averaging combined with block-BT leads to synergistically improved performance. Specifically, averaging at the interface between authentic and synthetic blocks yields the highest diversity and generation of novel translations, allowing the best features of the diverse translations in the two block types to be combined (examples in Fig. 7, Supplementary Fig. 17).


In this work, we have shown that the deep-learning framework CUBBITT outperforms a professional human-translation agency in the adequacy of English→Czech news translations. In particular, this is achieved by making fewer errors in adding, omitting, or shifting the meaning of translated sentences. At the same time, CUBBITT considerably narrowed the gap to humans in translation fluency, markedly outperforming previous state-of-the-art translation systems. The fact that the main advantage of CUBBITT is improved adequacy may seem surprising, as the main strength of NMT was thought to be increased fluency24. However, our results are in line with the study of Läubli et al.23, who observed that the deficit of NMT relative to humans is smaller in adequacy than in fluency. The improvement in translation quality is corroborated by a Translation Turing test, in which most participants failed to reliably discern CUBBITT translations from human ones.

Critically, our evaluation of translation quality was carried out in a fully context-aware setting. As discussed in this work and in other recent articles on this topic22,23, the previous standard approach of combining context-aware reference translation with context-free assessment gives an unfair advantage to machine translation. Consequently, this study is also an important contribution to MT evaluation practices and shows that the relevance of future evaluations in MT competitions such as WMT will increase when cross-sentence context is included. In addition, our design, in which fluency and adequacy are assessed separately and by both professional translators and non-professionals, brings interesting insight into evaluator priorities. Professional translators were more sensitive to errors in fluency than non-professionals and had a stronger preference for fluency when rating the overall quality of a translation. This difference in preference is an important factor in designing studies that measure only overall translation quality. While in domains such as artistic writing fluency is clearly of utmost importance, there are domains (e.g., factual news articles) where an improvement in the preservation of meaning may matter more to a reader than a certain loss of fluency. Our robust context-aware evaluation with above-human performance in adequacy demonstrates that human translation is not necessarily an upper bound of translation quality, a long-standing dogma in the field.

Among the key methodological advances of CUBBITT is the training regime termed block backtranslation, in which blocks of authentic data alternate with blocks of synthetic data. Compared to traditional mixed backtranslation, where all the data are shuffled together, the block regime offers markedly increased diversity of translations produced during training, suggesting a more explorative search for solutions to the translation problem. This increased diversity can then be greatly leveraged by checkpoint averaging, which is capable of finding consensus between networks trained on purely synthetic data and networks trained on authentic data, often combining the best of both worlds. We speculate that such a block training regime may also be beneficial for other ways of organizing data into blocks and may in theory be applicable beyond backtranslation, or even beyond the field of machine translation.

During the reviews of this manuscript, the WMT19 competition took place26. The test dataset was different, and the evaluation methodology was revised compared to WMT18, so the results are not directly comparable (e.g., the translation company was explicitly instructed not to add/remove information from the translated sentences, which was a major source of adequacy errors in this study (Fig. 4a)). Also, based in part on discussions with our team's members, the organizers of WMT19 implemented a context-aware evaluation. In this context-aware evaluation of the English→Czech news task, CUBBITT was the winning MT system and reached an overall quality score of 95.3% of the human translators' score (DA score 86.9 vs. 91.2), similar to our study (94.8%, mean overall quality 7.4 vs. 7.8, all annotators together). Because WMT19 did not separate overall quality into adequacy and fluency, the potential super-human adequacy cannot be validated on their dataset.

Our study was performed on English→Czech news articles, and we have also validated the methodological improvements of CUBBITT using an automatic metric on English↔French and English↔Polish news articles. The generality of CUBBITT's success with regard to other language pairs and domains remains to be evaluated. However, the recent WMT19 results on English→German show that in other languages, too, the human reference is not necessarily the upper bound of translation quality26.

The performance of machine translation is getting so close to the human reference that the quality of the reference translation matters. Highly qualified human translators with an unlimited amount of time and resources will likely produce better translations than any MT system. However, many clients cannot afford the costs of such translators and instead use the services of professional translation agencies, where the translators work under a certain time pressure. Our results show that the quality of professional-agency translations is not unreachable by MT, at least in certain aspects, domains, and languages. Nevertheless, we suggest that in future MT competitions and evaluations it may be important to sample multiple human references (from multiple agencies, and ideally also at multiple price points).

We stress that CUBBITT is the result of years of open scientific collaboration and a culmination of the transformation of the field. It started with MT competitions that provided open data and ideas and continued with the open deep-learning community, which provided open-source code. The Transformer architecture significantly lowered the hardware requirements for training MT models (from months on multi-GPU clusters to days on a single machine18). The more effective utilization of monolingual data via iterated block backtranslation with checkpoint averaging presented in this study allows generating a large amount of high-quality synthetic parallel data to complement existing parallel datasets at little cost. Together, these techniques allow CUBBITT to be trained by the broad community and considerably extend the reach of MT.


1 CUBBITT model

Our CUBBITT translation system follows the Transformer architecture (Fig. 1, Supplementary Fig. 1) introduced in Vaswani et al.18. The Transformer has an encoder-decoder structure in which the encoder maps an input sequence of tokens (words or subword units) to a sequence of continuous deep representations z. Given z, the decoder generates an output sequence of tokens one element at a time. The decoder is autoregressive, i.e., it consumes the previously generated symbols as additional input when generating the next token.

The encoder is composed of a stack of identical layers, with each layer having two sublayers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection around each of the two sublayers, followed by layer normalization. The decoder is also composed of a stack of identical layers. In addition to the two sublayers from the encoder, the decoder inserts a third sublayer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sublayers, followed by layer normalization.

The self-attention layer in the encoder and decoder performs multi-head dot-product attention, each head mapping matrices of queries (Q), keys (K), and values (V) to an output vector, which is a weighted sum of the values V:

$$\mathrm{Attention}\left( Q, K, V \right) = \mathrm{softmax}\left( \frac{QK^T}{\sqrt{d_k}} \right)V,$$


where Q∈\({\Bbb R}^{n \times d_k}\), K∈\({\Bbb R}^{n \times d_k}\), V∈\({\Bbb R}^{n \times d_v}\), n is the sentence length, dv is the dimension of the values, and dk is the dimension of the queries and keys. Attention weights are computed as the compatibility of the corresponding key and query and represent the relationship between deep representations of subwords in the input sentence (for encoder self-attention), the output sentence (for decoder self-attention), or between the input and output sentences (for encoder-decoder attention). In encoder and decoder self-attention, all queries, keys, and values come from the output of the previous layer, whereas in the encoder-decoder attention, keys and values come from the encoder's topmost layer and queries come from the decoder's previous layer. In the decoder, we modify the self-attention to prevent it from attending to following positions (i.e., rightward from the current position) by adding a mask, because the following positions are not known at inference time.
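A minimal NumPy sketch of this attention function, including the decoder-style causal mask (a single head, without the learned projections of a full multi-head layer):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, optionally masked
    so that position i cannot attend to positions to its right."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n, n) query-key compatibility scores
    if causal:
        n = scores.shape[0]
        # Set entries above the diagonal to -inf so softmax assigns them weight 0.
        scores = np.where(np.triu(np.ones((n, n), dtype=bool), k=1), -np.inf, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 3 positions, d_k = d_v = 4.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V, causal=True)
print(out.shape)  # (3, 4); the first row can only attend to position 0
```

With the causal mask, the output at position 0 equals V's first row exactly, since position 0 may attend only to itself.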

2 English–Czech training data

Our training data are constrained to the data allowed in the WMT 2018 News translation shared task17. The parallel (authentic) data are CzEng 1.7, Europarl v7, News Commentary v11, and CommonCrawl. The monolingual data for backtranslation are English (EN) and Czech (CS) NewsCrawl articles. Data sizes (after filtering, see below) are reported in Supplementary Table 1.

While all our monolingual data are news articles, less than 1% of our parallel data are news (summing News Commentary v12 and the news portion of CzEng 1.7). The biggest sources of our parallel data are movie subtitles (63% of sentences), EU legislation (16% of sentences), and fiction (9% of sentences)27. Unfortunately, no finer-grained metadata specifying the exact training-data domains (such as politics, business, and sport) are available.

We filtered out ca. 3% of the sentences in the monolingual data by restricting the length to 500 characters and, in the case of Czech NewsCrawl, also by keeping only sentences containing at least one accented character (using a regular expression m/[ěščřžýáíéúůďťň]/i). This simple heuristic is surprisingly effective for Czech; it filters out not only sentences in languages other than Czech, but also various non-linguistic content, such as lists of football or stock-market results.
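A minimal Python sketch of this filtering heuristic (the example sentences are invented, and the function name is ours):

```python
import re

# Keep only sentences of at most 500 characters that contain at least one
# Czech accented letter; case-insensitive, matching the m/.../i regex above.
CZECH_ACCENTS = re.compile(r"[ěščřžýáíéúůďťň]", re.IGNORECASE)

def keep_sentence(sentence: str) -> bool:
    return len(sentence) <= 500 and bool(CZECH_ACCENTS.search(sentence))

print(keep_sentence("Vláda dnes schválila nový rozpočet."))  # True
print(keep_sentence("FC Barcelona 3 : 1 Real Madrid"))       # False: no accented character
```
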

We divided the Czech NewsCrawl (synthetic data) into two parts: years 2007–2016 (58,231k sentences) and year 2017 (7152k sentences). When training block-BT, we simply concatenated four blocks of training data: authentic, synthetic 2007–2016, authentic, and synthetic 2017. The sentences within these four blocks were randomly shuffled; we only do not shuffle across the block boundaries. When training mix-BT, we used exactly the same training sentences but shuffled them fully. This means we upsampled the authentic training data two times. The actual ratio of authentic and synthetic data (measured by the number of subword tokens) in the mix-BT training data was approximately 1.2:1.
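The two data layouts can be sketched as follows, on toy sentence pairs; the function name and block sizes are ours, but the structure mirrors the description above (shuffling within blocks for block-BT, one global shuffle for mix-BT, authentic data appearing twice in both):

```python
import random

def build_training_stream(auth, synth_2007_16, synth_2017, regime, seed=42):
    """block: AUTH | SYNTH(2007-2016) | AUTH | SYNTH(2017), shuffled only
    within each block; mix: the same sentences, fully shuffled."""
    rng = random.Random(seed)
    blocks = [list(auth), list(synth_2007_16), list(auth), list(synth_2017)]
    if regime == "block":
        for block in blocks:
            rng.shuffle(block)  # shuffle within, never across, blocks
        return [pair for block in blocks for pair in block]
    stream = [pair for block in blocks for pair in block]
    rng.shuffle(stream)         # mix-BT: one global shuffle
    return stream

auth = [("en a", "cs a"), ("en b", "cs b")]
synth_old = [("en s1", "cs s1")]
synth_new = [("en s2", "cs s2")]
block_stream = build_training_stream(auth, synth_old, synth_new, "block")
mix_stream = build_training_stream(auth, synth_old, synth_new, "mix")
print(len(block_stream), len(mix_stream))  # 6 6: same sentences, different order
```
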

3 English–Czech development and test data

The WMT shared task on news translation provides a new test set (with ~3000 sentences) each year, collected from recent news articles (WMT = Workshop on statistical Machine Translation; in 2016, WMT was renamed the Conference on Machine Translation, keeping the legacy abbreviation WMT; for more information see the WMT 2018 website). The reference translations are created by professional translation agencies. All of the translations are done directly, not via an intermediate language. Test sets from previous years are allowed to be used as development data in WMT shared tasks.

We used WMT13 (short for WMT newstest2013) as the primary development set in our experiments (e.g., Fig. 2a). We used WMT17 as a test set for measuring BLEU scores in Fig. 2c. We used WMT18 (more precisely, its subset WMT18-orig-en, see below) as our final manual-evaluation test set. Data sizes are reported in Supplementary Table 2.

In WMT test sets since 2014, half of the sentences for a language pair X-EN originate from English news servers and the other half from X-language news servers. All WMT test sets include the server name for each document in metadata, so we were able to split our dev and test sets into two parts: originally Czech (orig-cs, for Czech-domain articles, i.e., documents with a docid containing “.cz”) and originally English (orig-en, for non-Czech-domain articles). The WMT13-orig-en part of our WMT13 development set contains not only originally English articles, but also articles written originally in French, Spanish, German, and Russian; the Czech reference translations were nevertheless translated from English. In WMT18-orig-en, all the articles were originally written in English.

According to Bojar et al.17, the Czech references in WMT18 were translated from English “by the professional level of service of, preserving 1–1 segment translation and aiming for literal translation where possible. Each language combination included two different translators: the first translator took care of the translation, the second translator was asked to evaluate a representative part of the work to give a score to the first translator. All translators translate towards their mother tongue only and need to provide a proof of their education or professional experience, or to take a test; they are continuously evaluated to understand how they perform on the long term. The domain knowledge of the translators is ensured by matching translators and the documents using T-Rank.”

Toral et al.22 furthermore warned about post-edited MT being used as human references. However, it was confirmed to us that MT was completely deactivated during the process of creating the WMT18 reference translations (personal communication).

4 English–French data

The English–French parallel training data were downloaded from WMT 2014. The monolingual data were downloaded from WMT 2018 (making sure there is no overlap with the development and test data). We filtered the data for being English/French using the langid toolkit. Data sizes after filtering are reported in Supplementary Table 3. When training English–French block-BT, we concatenated the French NewsCrawl 2008–2014 (synthetic data) and the authentic data, with no upsampling. When training French–English block-BT, we split the English NewsCrawl into three parts: 2011–2013, 2014–2015, and 2016–2017, and interleaved them with three copies of the authentic training data, i.e., upsampling the authentic data three times. We always trained mix-BT on a fully shuffled version of the data used for the respective block-BT training.

Development and test data are reported in Supplementary Table 4.

5 English–Polish data

The English–Polish training and development data were downloaded from WMT 2020. We filtered the data for being English/Polish using the FastText toolkit. Data sizes after filtering are reported in Supplementary Table 5. When training English–Polish block-BT, we upsampled the authentic data two times and concatenated it with the Polish NewsCrawl 2008–2019 (synthetic data) upsampled six times. When training Polish–English block-BT, we upsampled the authentic data two times and concatenated it with the English NewsCrawl 2018 (synthetic data, with no upsampling). We always trained mix-BT on a fully shuffled version of the data used for the respective block-BT training.

Development and test data are reported in Supplementary Table 6.

6 CUBBITT training: BLEU score

BLEU28 is a popular automatic measure for MT evaluation, and we use it for hyperparameter tuning. Similarly to most other automatic MT measures, BLEU estimates the similarity between the system translation and the reference translation. It is based on the n-gram (unigram up to 4-gram) precision of the system translation relative to the reference translation, with a brevity penalty for translations that are too short. We report BLEU scaled to 0–100, as is usual in most papers (although BLEU was originally defined on a 0–1 scale by Papineni et al.28); the higher the BLEU value, the better the translation. We use the SacreBLEU implementation29 with the signature BLEU+case.mixed+lang.en-cs+numrefs.1+smooth.exp+tok.13a.
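A minimal from-scratch sketch of corpus-level BLEU, for illustration only: it implements the clipped n-gram precisions and brevity penalty described above, but omits the tokenization and smoothing details handled by the SacreBLEU implementation.

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypotheses, references, max_n=4):
    """Unsmoothed corpus-level BLEU (0-100): clipped n-gram precision for
    n = 1..4, geometric mean, and a brevity penalty for short output."""
    hyp_len = ref_len = 0
    matches = [0] * max_n
    totals = [0] * max_n
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            # Counter intersection implements clipping against the reference.
            matches[n - 1] += sum((ngrams(h, n) & ngrams(r, n)).values())
            totals[n - 1] += max(0, len(h) - n + 1)
    if 0 in totals or 0 in matches:
        return 0.0  # an unsmoothed geometric mean collapses to zero
    precision = sum(log(m / t) for m, t in zip(matches, totals)) / max_n
    brevity = min(0.0, 1.0 - ref_len / hyp_len)  # log of the brevity penalty
    return 100.0 * exp(precision + brevity)

print(bleu(["the cat sat on the mat"], ["the cat sat on the mat"]))  # 100.0
```
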

7 CUBBITT training: hyperparameters

We use the Transformer “big” model from the Tensor2Tensor framework v1.6.018. We followed the training setup and tips of Popel and Bojar30 and Popel et al.31, training our models with the Adafactor optimizer32 instead of the default Adam optimizer. We use the following hyperparameters: learning_rate_schedule = rsqrt_decay, batch_size = 2900, learning_rate_warmup_steps = 8000, max_length = 150, layer_prepostprocess_dropout = 0, optimizer = Adafactor. For decoding, we use alpha = 1.0, beam_size = 4.

8 CUBBITT training: checkpoint averaging

A popular way of improving the translation quality in NMT is ensembling, where several independent models are trained and during inference (decoding, translation) each target token (word) is chosen according to an averaged probability distribution (using argmax in the case of greedy decoding) and used for further decisions in the autoregressive decoder of each model.

However, ensembling is expensive in both training and inference time. The training time can be decreased by using checkpoint ensembles33, where the N last checkpoints of a single training run are used instead of N independently trained models. Checkpoint ensembles are usually worse than independent ensembles33, but allow more models in the ensemble thanks to the shorter training time. The inference time can be decreased by using checkpoint averaging, where the weights (learned parameters of the network) in the N last checkpoints are averaged element-wise, creating a single averaged model.

Checkpoint averaging was first used in NMT by Junczys-Dowmunt et al.34, who report that averaging four checkpoints is “not much worse than the actual ensemble” of the same four checkpoints and is better than ensembles of two checkpoints. Averaging ten checkpoints “even slightly outperforms the real four-model ensemble”.

Checkpoint averaging has been popular in recent NMT systems because it incurs almost no additional cost (averaging takes only several minutes), the results of averaged models have lower variance in BLEU, and they are usually at least slightly better than those of models without averaging30.

In our experiments, we store checkpoints each hour and average the last 8 checkpoints.

9 CUBBITT training: Iterated backtranslation

For our initial experiments with backtranslation, we reused an existing CS → EN system UEdin (Nematus software trained by a team from the University of Edinburgh and submitted to WMT 201635). This system itself was trained using backtranslation. We decided to iterate the backtranslation process further by using our EN → CS Transformer to translate English monolingual data and use that for training a higher quality CS → EN Transformer, which was in turn used for translating Czech monolingual data and training our final EN → CS Transformer system called CUBBITT. Supplementary Fig. 2 illustrates this process and provides details about the training data and backtranslation variants (mix-BT in MT1 and block-BT in MT2–4) used.

Each training run (MT3–5 in Supplementary Fig. 2) took ca. eight days on a single machine with eight GTX 1080 Ti GPUs. Translating the monolingual data with UEdin2016 (MT0) took ca. two weeks; with our Transformer models (MT1–3), it took ca. five days.

10 CUBBITT training: translationese tuning

It has been observed that text translated from language X into language Y has different properties (such as lexical choice or syntactic structure) compared to text originally written in language Y36. The term translationese is used in translation studies (translatology) for this phenomenon (and sometimes also for the translated language itself).

We noticed that when training on synthetic data, the model performs much better on the WMT13-orig-cs dev set than on the WMT13-orig-en dev set. When trained on authentic data, it is the other way round. Intuitively, this makes sense: The target side of our synthetic data are original Czech sentences from Czech newspapers, similarly to the WMT13-orig-cs dataset. In our authentic parallel data, over 90% of sentences were originally written in English about non-Czech topics and translated into Czech (by human translators), similarly to the WMT13-orig-en dataset. There are two closely related phenomena: a question of domain (topics) in the training data and a question of so-called translationese effect, i.e., which side of the parallel training data (and test data) is the original and which is the translation.

Based on these observations, we prepared an orig-cs-tuned model and an orig-en-tuned model. Both models were trained in the same way; they differ only in the number of training steps. For the orig-cs-tuned model, we selected a checkpoint with the best performance on WMT13-orig-cs (Czech-origin portion of WMT newstest2013), which was at 774k steps. Similarly, for the orig-en-tuned model, we selected the checkpoint with the best performance on WMT13-orig-en, which was at 788k steps. Note that both the models were trained jointly in one experiment, just selecting checkpoints at two different moments. The WMT18-orig-en test-set was translated using the orig-en-tuned model and the WMT18-orig-cs part was translated using the orig-cs-tuned model.

11 CUBBITT training: regex postediting

We applied two simple post-processing steps to the translations, using regular expressions. First, we converted quotation symbols in the translations to the correct Czech lower and upper quotation marks („ and “) using two regexes: s/(^|[({[])("|,,|”|“)/$1„/g and s/("|”)($|[,.?!:;)}\]])/“$2/g. Second, we deleted phrases repeated more than twice (immediately following each other), keeping just the first occurrence; we considered phrases of one up to four words. This post-processing affected less than 1% of sentences in the dev set.
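
The two steps can be reproduced in Python roughly as follows. This is a sketch: the quote patterns are adapted from the Perl-style regexes above, and the repetition rule is simplified to a single backreference pattern over whitespace-separated tokens.

```python
import re

def postedit(text):
    """Regex post-editing sketch: Czech quotes, then de-duplication."""
    # Opening quotes (start of string or after an opening bracket)
    # become the Czech lower quote.
    text = re.sub(r'(^|[({\[])(?:"|,,|“|”)', r'\1„', text)
    # Closing quotes (end of string or before punctuation / closing bracket)
    # become the Czech upper quote.
    text = re.sub(r'(?:"|”)($|[,.?!:;)}\]])', r'“\1', text)
    # Phrases of 1-4 words repeated more than twice in a row:
    # keep only the first occurrence.
    text = re.sub(r'((?:\S+\s+){0,3}\S+)(?:\s+\1){2,}', r'\1', text)
    return text
```

For example, the repetition rule collapses a phrase only when it occurs at least three times consecutively, matching the "more than twice" criterion.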

12 CUBBITT training: English–French and English–Polish

We trained English→French, French→English, English→Polish, and Polish→English versions of CUBBITT, following the abovementioned English–Czech setup, but using the training data described in Supplementary Tables 3 and 5 and the training diagram in Supplementary Fig. 3. All systems (including M1 and M2) were trained with the Tensor2Tensor Transformer (no Nematus was involved). Iterated backtranslation was tried only for French→English. No translationese tuning was used (because we report just the BLEU training curve, and no experiments where the final checkpoint selection is needed). No regex post-editing was used.

13 Reanalysis of context-unaware evaluation in WMT18

We first reanalyzed results from the context-unaware evaluation of the WMT 2018 English–Czech News Translation Task, provided to us by the WMT organizers. The data shown in Fig. 3a were processed in the same way as by the WMT organizers: scores with BAD and REF types were first removed; a grouped score was then computed as the average score for every triple of language pair (“Pair”), MT system (“SystemID”), and sentence (“SegmentID”); and the systems were sorted by their average score. In Fig. 3a, we show the distribution of the grouped scores for each of the MT systems, using a paired two-tailed sign test to assess the significance of differences between consecutive systems.
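
The grouping procedure can be sketched as follows. This is a plain-Python illustration with hypothetical record tuples; the field order and the "OK" type label are our assumptions for the example (only the BAD/REF filtering and the triple-wise averaging mirror the text above).

```python
from collections import defaultdict
from statistics import mean

def grouped_scores(records):
    """records: tuples (pair, system, segment, score_type, score).
    Drops BAD/REF scores, averages per (pair, system, segment) triple,
    then returns systems sorted by their mean grouped score (descending)."""
    groups = defaultdict(list)
    for pair, system, segment, score_type, score in records:
        if score_type in ("BAD", "REF"):
            continue
        groups[(pair, system, segment)].append(score)
    per_segment = {key: mean(vals) for key, vals in groups.items()}
    by_system = defaultdict(list)
    for (pair, system, segment), s in per_segment.items():
        by_system[system].append(s)
    return sorted(((mean(v), sys_id) for sys_id, v in by_system.items()),
                  reverse=True)
```

Each system's final score is thus a mean over segment-level means, so segments scored by several annotators are not overweighted.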

We next assessed whether the results could be confounded by the original language of the source. Specifically, one half of the test-set sentences in WMT18 were originally English sentences translated to Czech by a professional agency, while the other half were English translations of originally Czech sentences. However, both types of sentences were used together for evaluation of both translation directions in the competition. Since the direction of translation could affect the evaluation, we first re-evaluated the MT systems in WMT18 by splitting the test-set according to the original language in which the source sentences were written.

Although the absolute direct assessment scores were lower for all systems and the reference translation on originally English source sentences than on originally Czech sentences, CUBBITT significantly outperformed the human reference and the other MT systems in both test sets (Supplementary Fig. 4). We checked that this also held when comparing z-score-normalized scores and using the unpaired one-tailed Mann–Whitney U test, as done by the WMT organizers.

Any further evaluation in our study was performed only on documents with the source side as the original text, i.e., with originally English sentences in the English→Czech evaluations.

14 Context-aware evaluation: methodology

Three groups of paid evaluators were recruited: six professional translators, three translation theoreticians, and seven other evaluators (non-professionals). All 16 evaluators were native Czech speakers with excellent knowledge of the English language. The professional translators were required to have at least 8 years of professional translation experience and were contacted via The Union of Interpreters and Translators. The translation theoreticians were from The Institute of Translation Studies at Charles University’s Faculty of Arts. Guidelines presented to the evaluators are given in Supplementary Methods 1.1.

For each source sentence, evaluators compared two translations: Translation T1 (the left column of the annotation interface) vs Translation T2 (the right column). Within one document (news article), Translation T1 was always the reference and Translation T2 always CUBBITT, or vice versa (i.e., each column within one document contained purely the reference translation or purely CUBBITT). However, evaluators did not know which system was which, nor that one of them was a human translation and the other a translation by an MT system. The order of reference and CUBBITT was randomized in each document. Each evaluator encountered the reference as Translation T1 in approximately one half of the documents.

Evaluators scored 10 consecutive sentences (or the entire document if shorter than 10 sentences) from a random section of the document (the same section was used in both T1 and T2 and by all evaluators scoring this document), but they had access to the source side of the entire document (Supplementary Fig. 5).

Every document was scored by at least two evaluators (2.55 ± 0.64 evaluators on average). The documents were assigned to evaluators in such a way that every evaluator scored nine different nonspam documents and most pairs of evaluators had at least one document in common. This maximized the diversity of annotator pairs in the computation of interannotator agreement. In total, 135 (53 unique) documents and 1304 (512 unique) sentences were evaluated by the 15 evaluators who passed quality control (see below).

15 Context-aware evaluation: quality control

The quality control check of evaluators was performed using a spam document, similarly to Läubli et al.23 and Kittur et al.37. In the MT translations of the spam document, the middle words (i.e., all except the first and last word of the sentence) were randomly shuffled in each of the middle six sentences of the document (i.e., the first and last two sentences were kept intact). We ascertained that the resulting spam translations made no sense.

The criterion for evaluators to pass the quality control was to score at least 90% of reference sentences better than all spam sentences (in each category: adequacy, fluency, overall). One non-professional evaluator did not pass the quality control, giving three spam sentences a higher score than 10% of the reference sentences. We excluded the evaluator from the analysis of the results (but the key results reported in this study would hold even when including the evaluator).
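
The pass criterion can be expressed compactly. A sketch under the assumption that each evaluator's reference-sentence and spam-sentence scores for one category are available as lists (the function name and 0.9 default are ours, matching the 90% criterion above):

```python
def passes_quality_control(ref_scores, spam_scores, threshold=0.9):
    """An evaluator passes if at least `threshold` of the reference
    sentences were scored strictly higher than every spam sentence."""
    worst_spam = max(spam_scores)
    frac_above = sum(s > worst_spam for s in ref_scores) / len(ref_scores)
    return frac_above >= threshold
```

In the study, this check would be applied per category (adequacy, fluency, overall), and an evaluator must pass all three.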

16 Context-aware evaluation: interannotator agreement

We used two methods to compute interannotator agreement (IAA) on the paired scores (the CUBBITT—reference difference) in adequacy, fluency, and overall quality for the 15 evaluators. First, for every evaluator, we computed the Pearson and Spearman correlation of his/her scores on individual sentences with a consensus of the scores from all other evaluators. This consensus was computed for every sentence as the mean of the evaluations by the other evaluators who scored this sentence. The correlation was significant after Benjamini–Hochberg correction for multiple testing for all evaluators in adequacy, fluency, and overall quality. The median and interquartile range of the Spearman r of the 15 evaluators were 0.42 (0.33–0.49) for adequacy, 0.49 (0.35–0.55) for fluency, and 0.49 (0.43–0.54) for overall quality. The median and interquartile range of the Pearson r were 0.42 (0.32–0.49) for adequacy, 0.47 (0.39–0.55) for fluency, and 0.46 (0.40–0.50) for overall quality.

Second, we computed Kappa in the same way as in WMT 2012–201638, separately for adequacy, fluency, and overall quality (Supplementary Table 7).

17 Context-aware evaluation: statistical analysis

First, we computed the average score for every sentence from all evaluators who scored the sentence within the group (non-professionals, professionals, translation theoreticians for Fig. 3 and Supplementary Fig. 7B) or within the entire cohort (for Supplementary Fig. 7A). The difference between human reference and CUBBITT translations was assessed using paired two-tailed sign test (Matlab function sign test) and P values below 0.05 were considered statistically significant.
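
The paired two-tailed sign test used here reduces to a binomial test on the signs of the per-sentence score differences; ties (zero differences) are discarded, as in Matlab's signtest. A stdlib sketch of the exact version:

```python
from math import comb

def sign_test(diffs):
    """Paired two-tailed sign test.

    diffs: per-item score differences (e.g., reference minus CUBBITT).
    Returns the exact two-sided p-value under H0: P(positive) = 0.5."""
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    n = pos + neg  # zeros (ties) are dropped
    if n == 0:
        return 1.0
    k = min(pos, neg)
    # double the cumulative binomial probability of the smaller tail
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)
```

For example, 9 positive vs 1 negative difference gives p = 2 · (C(10,0) + C(10,1)) / 2¹⁰ ≈ 0.0215, i.e., significant at the 0.05 level used in the study.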

In the analysis of the relative contributions of adequacy and fluency to the overall score (Supplementary Fig. 6), we fitted a linear model through the scores of all sentences, separately for human reference translations and CUBBITT translations for every evaluator, using the Matlab function fitlm(tableScores,‘overall~adequacy+fluency’,‘RobustOpts’,‘on’, ‘Intercept’, false).

18 Context-aware evaluation: analysis of document types

For analysis of document types (Supplementary Fig. 11), we grouped the 53 documents (news articles) into seven classes: business (including economics), crime, entertainment (including art, film, one article about architecture), politics, scitech (science and technology), sport, and world. Then we compared the relative difference of human reference minus CUBBITT translation scores on the document-level scores and sentence-level scores and used sign test to assess the difference between the two translations.

19 Evaluation of error types in context-aware evaluation

Three non-professional and three professional-translator evaluators performed a follow-up evaluation of error types after they had finished the basic context-aware evaluation. Nine columns were added to the annotation sheets next to their evaluations of quality (adequacy, fluency, and overall quality) of each of the two translations. The evaluators were asked to classify all translation errors into one of eight error types and to identify sentences with an error due to cross-sentence context (see guidelines). In total, 54 (42 unique) documents and 523 (405 unique) sentences were evaluated by the six evaluators. Guidelines presented to the evaluators are given in Supplementary Methods 1.2.

Similarly to Section 5.4, we compute IAA Kappa scores for each error type, based on the CUBBITT—Reference difference (Supplementary Table 8).

When carrying out statistical analysis, we first grouped the scores of sentences with multiple evaluations by computing the average number of errors per sentence and error type from the scores of all evaluators who scored the sentence. Next, we compared the percentage of sentences with at least one error (Fig. 4a) and the number of errors per sentence (Supplementary Fig. 9), using a sign test to compare the difference between human reference and CUBBITT translations.

20 Evaluation of five MT systems

Five professional-translator evaluators performed this follow-up evaluation after they had finished the previous evaluations. For each source sentence, the evaluators compared five translations by five MT systems: Google Translate from 2018, UEdin from 2018, a Transformer trained with one iteration of mix-BT (as MT2 in Supplementary Fig. 2, but with mix-BT instead of block-BT), a Transformer trained with one iteration of block-BT (MT2 in Supplementary Fig. 2), and the final CUBBITT system. Within one document, the order of the five systems was fixed, but it was randomized between documents. Evaluators were not given any details about the five translations (such as whether they were human or MT translations, or by which MT systems). Every evaluator was assigned only documents that he/she had not yet evaluated in the basic quality + error types evaluations. Guidelines presented to the evaluators are given in Supplementary Methods 1.3.

Evaluators scored 10 consecutive sentences (or the entire document if this was shorter than 10 sentences) from a random section of the document (the same for all five translations), but had access to the source side of the entire document. Every evaluator scored nine different documents. In total, 45 (33 unique) documents and 431 (336 unique) sentences were evaluated by the five evaluators.

When measuring interannotator agreement, in addition to reporting IAA Kappa scores for the evaluation of all five systems (as usual in WMT) in Supplementary Table 9, we also provide IAA Kappa scores for each pair of systems in Supplementary Fig. 12. This confirms the expectation that a higher interannotator agreement is achieved in comparisons of pairs of systems with a large difference in quality.

When carrying out statistical analysis, we first grouped the scores of sentences with multiple evaluations by computing the fluency and adequacy score per sentence and translation from the scores of all evaluators who scored this sentence. Next, we sorted the MT systems by the mean score, using sign test to compare the difference between the consecutive systems (for Fig. 4b). Evaluation of the entire test-set (all originally English sentences) using BLEU for comparison is shown in Supplementary Fig. 13.

21 Translation Turing test

Participants of the Translation Turing test were unpaid volunteers. The participants were randomly assigned to four non-overlapping groups: A1, A2, B1, B2. Groups A1 and A2 were presented translations by both the human reference and CUBBITT. Groups B1 and B2 were presented translations by both the human reference and Google Translate (obtained on 13 August 2018). The source sentences in the four groups were identical. Guidelines presented to the evaluators are given in Supplementary Methods 1.4.

The evaluated sentences were taken from the originally English part of the WMT18 evaluation test-set (i.e., WMT18-orig-en) and shuffled into a random order. For each source sentence, it was randomly decided whether the reference translation would be presented to group A1 or A2; the other group was presented the same sentence with the translation by CUBBITT. Similarly, for each source sentence, it was randomly decided whether the reference translation would be presented to group B1 or B2; the other group was presented the same sentence with the translation by Google Translate. Every participant was therefore presented human and machine translations in approximately a 1:1 ratio (but this information was intentionally concealed from them).

Each participant encountered each source sentence at most once (i.e., with only one translation), but each source sentence was evaluated for all the three systems. (Reference was evaluated twice, once in the A groups, once in the B groups.) Each participant was presented with 100 sentences. Only participants with more than 90 sentences evaluated were included in our study.

The Translation Turing test was performed as the first evaluation in this study (but after the WMT18 competition) and participants who overlapped with the evaluators of the context-aware evaluations were not shown results from the Turing test before they finished all the evaluations.

In total, 15 participants evaluated a mix of human and CUBBITT translations (five professional translators, six MT researchers, and four other), 16 participants evaluated a mix of human and Google Translate translations (eight professional translators, five MT researchers, and three other). A total of 3081 sentences were evaluated by all participants of the test.

When measuring interannotator agreement, we computed the IAA Kappas (Supplementary Table 10) using our own script, treating the task as a simple binary classification. While in the previous types of evaluations, we computed the IAA Kappa scores using the script from WMT 201638, this was not possible in the Translation Turing test, which does not involve any ranking.

When carrying out statistical analysis, we computed the accuracy for each participant as the percentage of sentences with correctly identified MT or human translations (i.e., the number of true positives + true negatives divided by the number of scored sentences), and the significance was assessed using a Fisher test on the contingency table. The resulting P-values were corrected for multiple testing with the Benjamini–Hochberg method using the Matlab function fdr_bh(pValues,0.05,‘dep’,‘yes’)39, and participants with a resulting Q-value below 0.05 were considered to have significantly distinguished between human and machine translations.
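
The Benjamini–Hochberg step-up procedure can be sketched as follows. Note this is plain BH; the ‘dep’ option of the cited fdr_bh function adds an extra correction factor for arbitrary dependence between tests, which this sketch omits, and the function name is ours.

```python
def benjamini_hochberg(pvalues):
    """BH step-up FDR correction; returns q-values in the input order.

    q_i = min over j >= rank(i) of p_(j) * n / j, computed by a single
    backward pass over the p-values sorted in ascending order."""
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])
    q = [0.0] * n
    running_min = 1.0
    for rank in range(n, 0, -1):  # from the largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * n / rank)
        q[i] = running_min
    return q
```

A participant would then be called significant if his/her q-value falls below 0.05.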

22 Block-BT and checkpoint averaging synergy

In this analysis, the four systems from Fig. 2a were compared: block-BT vs mix-BT, both with (Avg) vs without (noAvg) checkpoint averaging. All four systems were trained with a single iteration of backtranslation only, i.e., corresponding to the MT2 system in Supplementary Fig. 2. The WMT13 newstest (3000 sentences) was used to evaluate two properties of the systems over time: translation diversity and generation of novel translations by checkpoint averaging. These properties were analyzed over the time of the training (up to 1 million steps), during which checkpoints were saved every hour (up to 214 checkpoints).

23 Overall diversity and novel translation quantification

We first computed the overall diversity as the number of all the different translations produced by the 139 checkpoints between 350,000 and 1,000,000 steps. In particular, for every sentence in WMT13 newstest, the number of unique translations was computed in the hourly checkpoints, separately for block-BT-noAvg and mix-BT-noAvg. Comparing the two systems in every sentence, block-BT-noAvg produced more unique translations in 2334 (78%) sentences; mix-BT-noAvg produced more unique translations in 532 (18%) sentences; and the numbers of unique translations were equal in 134 (4%) sentences.

Next, in the same checkpoints and for every sentence, we compared translations produced by models with and without averaging and computed the number of checkpoints with a novelAvg∞ translation. These are defined as translations that were never produced by the same system without checkpoint averaging (by never we mean in none of the checkpoints between 350,000 and 1,000,000). In total, there were 1801 (60%) sentences with at least one checkpoint with novelAvg∞ translation in block-BT and 949 (32%) in mix-BT. When comparing the number of novelAvg∞ translations in block-BT vs mix-BT in individual sentences, there were 1644 (55%) sentences with more checkpoints with novelAvg∞ translations in block-BT, 184 (6%) in mix-BT, and 1172 (39%) with equal values.
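
Both counts above can be computed with simple set operations. A sketch, assuming each checkpoint's output is available as a dict mapping a sentence id to its translation string (the function name and data layout are ours):

```python
def diversity_and_novelty(noavg_ckpts, avg_ckpts):
    """noavg_ckpts, avg_ckpts: lists (one item per checkpoint) of dicts
    mapping sentence id -> translation string.

    Returns two per-sentence dicts: the number of unique noAvg translations
    (overall diversity) and the number of checkpoints whose Avg translation
    never appears in any noAvg checkpoint (novelAvg-infinity)."""
    sentences = noavg_ckpts[0].keys()
    unique = {s: len({c[s] for c in noavg_ckpts}) for s in sentences}
    seen_noavg = {s: {c[s] for c in noavg_ckpts} for s in sentences}
    novel = {s: sum(c[s] not in seen_noavg[s] for c in avg_ckpts)
             for s in sentences}
    return unique, novel
```

In the study, the checkpoint lists would span the 139 checkpoints between 350,000 and 1,000,000 steps.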

24 Diversity and novel translations over time

First, we evaluated the development of translation diversity over time using a moving window of eight checkpoints in the two systems without checkpoint averaging. In particular, for every checkpoint and every sentence, we computed the number of unique translations in the last eight checkpoints. The average across sentences is shown in Supplementary Fig. 16, separately for block-BT-noAvg and mix-BT-noAvg.

Second, we evaluated development of novel translations by checkpoint averaging over time. In particular, for every checkpoint and every sentence, we evaluated whether the Avg model created a novelAvg8 translation, i.e., whether the translation differed from all the translations of the last eight noAvg checkpoints. The percentage of sentences with a novelAvg8 translation in the given checkpoint is shown in Fig. 8a, separately for block-BT and mix-BT.
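
The novelAvg8 computation amounts to a moving-window membership test. A sketch with the same dict-per-checkpoint layout as above (function name and layout are our illustrative choices):

```python
def novel_fraction(noavg_ckpts, avg_ckpts, window=8):
    """For each checkpoint (once `window` checkpoints exist), the fraction
    of sentences whose Avg translation differs from all translations of the
    last `window` noAvg checkpoints (novelAvg-window)."""
    fractions = []
    for t in range(window - 1, len(avg_ckpts)):
        recent = noavg_ckpts[t - window + 1:t + 1]
        novel = sum(
            all(avg_ckpts[t][s] != c[s] for c in recent)
            for s in avg_ckpts[t]
        )
        fractions.append(novel / len(avg_ckpts[t]))
    return fractions
```

With window = 8 this reproduces the quantity plotted in Fig. 8a; with an unbounded window it degenerates to the novelAvg∞ definition of the previous section.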

25 Effect of novel translations on evaluation by BLEU

We first identified the best model (checkpoint) for each of the systems according to BLEU: checkpoint 775178 in block-BT-Avg (BLEU 28.24), checkpoint 775178 in block-BT-NoAvg (BLEU 27.54), checkpoint 606797 in mix-BT-Avg (BLEU 27.18), and checkpoint 606797 in mix-BT-NoAvg (BLEU 26.92). We note that the Avg and NoAvg systems need not share the same highest-BLEU checkpoint; nevertheless, this was the case for both block-BT and mix-BT here. We next identified which translations in block-BT-Avg and in mix-BT-Avg were novelAvg8 (i.e., not seen in the last eight NoAvg checkpoints). There were 988 novelAvg8 sentences in block-BT-Avg and 369 in mix-BT-Avg. Finally, we computed the BLEU of the Avg translations in which either the novelAvg8 translations were replaced with the NoAvg versions (yellow bars in Fig. 8b), or vice versa (orange bars in Fig. 8b), separately for block-BT and mix-BT.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

Data used for comparison of human and machine translations may be downloaded at

Code availability

The CUBBITT source code is available at Codes for analysis of human and machine translations were uploaded together with the analyzed data at


References

1. Hirschberg, J. & Manning, C. D. Advances in natural language processing. Science 349, 261–266 (2015).

2. Bojar, O. Machine translation. In Oxford Handbooks in Linguistics 323–347 (Oxford University Press, 2015).

3. Hajič, J. et al. Natural Language Generation in the Context of Machine Translation (Center for Language and Speech Processing, Johns Hopkins University, 2004).

4. Vanmassenhove, E., Hardmeier, C. & Way, A. Getting gender right in neural machine translation. In Proc. 2018 Conference on Empirical Methods in Natural Language Processing 3003–3008 (Association for Computational Linguistics, 2018).

5. Artetxe, M., Labaka, G., Agirre, E. & Cho, K. Unsupervised neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018—Conference Track Proceedings (2018).

6. LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521, 436–444 (2015).

7. Moravčík, M. et al. DeepStack: expert-level artificial intelligence in heads-up no-limit poker. Science 356, 508–513 (2017).

8. Sutskever, I., Vinyals, O. & Le, Q. V. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014) 3104–3112 (Curran Associates, Inc., 2014).

9. Bahdanau, D., Cho, K. & Bengio, Y. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (2015).

10. Luong, M.-T. & Manning, C. D. Stanford neural machine translation systems for spoken language domains. In Proc. International Workshop on Spoken Language Translation (IWSLT) (2015).

11. Junczys-Dowmunt, M., Dwojak, T. & Hoang, H. Is neural machine translation ready for deployment? A case study on 30 translation directions. In Proc. Ninth International Workshop on Spoken Language Translation (IWSLT) (2016).

12. Hutchins, W. J. & Somers, H. L. An Introduction to Machine Translation (Academic Press, 1992).

13. Brown, P. F., Della Pietra, S. A., Della Pietra, V. J. & Mercer, R. L. The mathematics of statistical machine translation. Comput. Linguist. 19, 263–311 (1993).

14. Koehn, P., Och, F. J. & Marcu, D. Statistical phrase-based translation. In Proc. 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology—NAACL ’03 1, 48–54 (2003).

15. Wu, Y. et al. Google’s neural machine translation system: bridging the gap between human and machine translation. Preprint at (2016).

16. Hassan, H. et al. Achieving human parity on automatic Chinese to English news translation. Preprint at (2018).

17. Bojar, O. et al. Findings of the 2018 conference on machine translation (WMT18). In Proc. Third Conference on Machine Translation (WMT) 2, 272–307 (2018).

18. Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems (Curran Associates, Inc., 2017).

19. Sennrich, R., Haddow, B. & Birch, A. Neural machine translation of rare words with subword units. In Proc. 54th Annual Meeting of the Association for Computational Linguistics (2016).

20. Bojar, O. et al. CzEng 1.6: enlarged Czech–English parallel corpus with processing tools dockered. In Text, Speech, and Dialogue: 19th International Conference, TSD 2016 231–238 (2016).

21. Tiedemann, J. OPUS—parallel corpora for everyone. In Proc. 19th Annual Conference of the European Association for Machine Translation (EAMT) 384 (2016).

22. Toral, A., Castilho, S., Hu, K. & Way, A. Attaining the unattainable? Reassessing claims of human parity in neural machine translation. In Proc. Third Conference on Machine Translation (WMT) 113–123 (2018).

23. Läubli, S., Sennrich, R. & Volk, M. Has machine translation achieved human parity? A case for document-level evaluation. In Proc. 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP) 4791–4796 (2018).

24. Castilho, S. et al. Is neural machine translation the new state of the art? Prague Bull. Math. Linguist. 108, 109–120 (2017).

25. Haddow, B. et al. The University of Edinburgh’s submissions to the WMT18 news translation task. In Proc. Third Conference on Machine Translation 2, 403–413 (2018).

26. Barrault, L. et al. Findings of the 2019 conference on machine translation (WMT19). In Proc. Fourth Conference on Machine Translation (WMT) 1–61 (2019).

27. Bojar, O. et al. The joy of parallelism with CzEng 1.0. In Proc. Eighth International Language Resources and Evaluation Conference (LREC’12) 3921–3928 (2012).

28. Papineni, K., Roukos, S., Ward, T. & Zhu, W.-J. BLEU: a method for automatic evaluation of machine translation. In Proc. 40th Annual Meeting of the Association for Computational Linguistics—ACL ’02 311–318 (Association for Computational Linguistics, 2002).

29. Post, M. A call for clarity in reporting BLEU scores. In Proc. Third Conference on Machine Translation (WMT) 186–191 (2018).

30. Popel, M. & Bojar, O. Training tips for the Transformer model. Prague Bull. Math. Linguist. (2018).

31. Popel, M. CUNI Transformer neural MT system for WMT18. In Proc. Third Conference on Machine Translation (WMT) 482–487 (Association for Computational Linguistics, 2018).

32. Shazeer, N. & Stern, M. Adafactor: adaptive learning rates with sublinear memory cost. In Proc. 35th International Conference on Machine Learning, ICML 2018 4603–4611 (2018).

33. Sennrich, R. et al. The University of Edinburgh’s neural MT systems for WMT17. In Proc. Second Conference on Machine Translation (WMT) 2, 389–399 (2017).

34. Junczys-Dowmunt, M., Dwojak, T. & Sennrich, R. The AMU-UEDIN submission to the WMT16 news translation task: attention-based NMT models as feature functions in phrase-based SMT. In Proc. First Conference on Machine Translation (WMT) 319–325 (2016).

35. Sennrich, R., Haddow, B. & Birch, A. Edinburgh neural machine translation systems for WMT 16. In Proc. First Conference on Machine Translation (WMT) 371–376 (2016).

36. Gellerstam, M. Translationese in Swedish novels translated from English. In Translation Studies in Scandinavia: Proc. Scandinavian Symposium on Translation Theory (SSOTT) 88–95 (1986).

37. Kittur, A., Chi, E. H. & Suh, B. Crowdsourcing user studies with Mechanical Turk. In Proc. 26th Annual CHI Conference on Human Factors in Computing Systems (CHI ’08) 453–456 (ACM Press, 2008).

38. Bojar, O. et al. Findings of the 2016 conference on machine translation. In Proc. First Conference on Machine Translation: Volume 2, Shared Task Papers 131–198 (2016).

39. Groppe, D. fdr_bh. MATLAB Central File Exchange (2020).



Acknowledgements

We thank the volunteers who participated in the Translation Turing test, Jack Toner for consultations on written English, and the WMT 2018 organizers for providing us with the data for the re-evaluation of translation quality. This work has been partially supported by the grants 645452 (QT21) of the European Commission, GX19-26934X (NEUREM3) and GX20-16819X (LUSyD) of the Grant Agency of the Czech Republic. The work has been using language resources developed and distributed by the LINDAT/CLARIAH-CZ project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2018101).

Author information

Author notes
  1. These authors contributed equally: Martin Popel, Marketa Tomkova, Jakub Tomek.


  1. Faculty of Mathematics and Physics, Charles University, Prague, 121 16, Czech Republic

    Martin Popel, Ondřej Bojar & Zdeněk Žabokrtský

  2. Ludwig Cancer Research Oxford, University of Oxford, Oxford, OX1 2JD, UK

    Marketa Tomkova

  3. Department of Computer Science, University of Oxford, Oxford, OX1 3QD, UK

    Jakub Tomek

  4. Google Brain, Mountain View, CA 94043, USA

    Łukasz Kaiser & Jakob Uszkoreit


Contributions

M.P. initiated the project. L.K. and J.U. designed and implemented the Transformer model. M.P. designed and implemented training of the translation system. J.T., M.T., and M.P. with contributions from O.B. and Z.Ž. designed the evaluation. M.T., J.T., and M.P. conducted the evaluation. M.T. and J.T. analyzed the results. M.T., J.T., and M.P. wrote the initial draft; all other authors critically reviewed and edited the manuscript.

Corresponding author

Correspondence to Martin Popel.

Ethics declarations

Competing interests

J.U. and L.K. are employed by and hold equity in Google, which funded the development of Transformer. The remaining authors (M.P., M.T., J.T., O.B., Z.Ž.) declare no competing interests.

Additional information

Peer review information Nature Communications thanks Alexandra Birch and Marcin Junczys-Dowmunt for their contribution to the peer review of this work. Peer reviewer reports are available.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit


Messenger RNA regulation: to translate or to degrade

Ann-Bin Shyu,1,4,a Miles F Wilkinson,2 and Ambro van Hoof3,4

Ann-Bin Shyu

1Department of Biochemistry and Molecular Biology, The University of Texas Medical School, Houston, TX, USA

4These authors contributed equally to this work


Miles F Wilkinson

2Department of Biochemistry and Molecular Biology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA


Ambro van Hoof

3Department of Microbiology and Molecular Genetics, The University of Texas Medical School, Houston, TX, USA

4These authors contributed equally to this work


aDepartment of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin Street, Houston, TX 77030, USA. Tel: +1 713 500 6068; Fax: +1 713 500 0652; E-mail: [email protected]

Received 2007 Nov 10; Accepted 2007 Dec 6.

Copyright © 2008, European Molecular Biology Organization



Quality control of gene expression operates post-transcriptionally at various levels in eukaryotes. Once transcribed, mRNAs associate with a host of proteins throughout their lifetime. These mRNA–protein complexes (mRNPs) undergo a series of remodeling events that are influenced by and/or influence the translation and mRNA decay machinery. In this review, we discuss how a decision to translate or to degrade a cytoplasmic mRNA is reached. Nonsense-mediated mRNA decay (NMD) and microRNA (miRNA)-mediated mRNA silencing are provided as examples. NMD is a surveillance mechanism that detects and eliminates aberrant mRNAs whose expression would result in truncated proteins that are often deleterious to the organism. miRNA-mediated mRNA silencing is a mechanism that ensures a given protein is expressed at a proper level to permit normal cellular function. While NMD and miRNA-mediated mRNA silencing use different decision-making processes to determine the fate of their targets, both are greatly influenced by mRNP dynamics. In addition, both are linked to RNA processing bodies. Possible modes involving the 3′ untranslated region and its associated factors, which appear to play key roles in both processes, are discussed.

Keywords: microRNA, mRNA decay, NMD, P-bodies, translation


Messenger RNA (mRNA) mediates the transfer of genetic information from the cell nucleus to ribosomes in the cytoplasm, where it serves as a template for protein synthesis. Once mRNAs enter the cytoplasm, they are translated, stored for later translation, or degraded. mRNAs that are initially translated may later be temporarily translationally repressed. All mRNAs are ultimately degraded at a defined rate. How are these decisions made? Throughout their lifetime, mRNAs associate with a host of protein factors, some of which are stably bound while others are subject to dynamic exchange (Moore, 2005). Individual mRNA–protein complex (mRNP) components may serve as adaptors that allow mRNAs to interface with the machinery mediating their subcellular localization, translation, and decay. Thus, mRNP remodeling is likely to play a critical role in the decision as to whether to translate or to degrade an mRNA.

In this review, we use two regulatory mechanisms that control mRNA translation and decay as examples to illustrate how a decision may be reached to translate or to degrade a cytoplasmic mRNA. One is nonsense-mediated mRNA decay (NMD), an RNA surveillance mechanism that rapidly degrades mRNAs harboring premature termination codons (PTCs). The other is microRNA (miRNA)-mediated silencing of gene expression, which involves the base pairing of miRNAs with the 3′ untranslated regions (UTRs) of their target mRNAs. Remodeling events are likely to be crucial for both miRNA-mediated silencing and NMD (Schell et al, 2002; Dreyfuss et al, 2003; Maquat, 2004; Amrani et al, 2006; Chang et al, 2007; Jackson and Standart, 2007; Nilsen, 2007; Pillai et al, 2007). We discuss two distinct models for how NMD distinguishes between normal and aberrant PTC-bearing mRNAs, and suggest ways that they can be reconciled via a ‘unified' model. We describe what is known about how miRNAs target mRNAs for rapid decay and translation repression, and highlight recent studies that have begun to pinpoint how miRNAs inhibit translation initiation. In our discussion of the underlying mechanisms for NMD and miRNA-mediated silencing, we consider the role of RNA-processing bodies (P-bodies), the recently identified cytoplasmic foci that harbor translationally silenced mRNPs and may be the burial grounds for at least some mRNAs (Parker and Sheth, 2007; Eulalio et al, 2007a). We also discuss the role of deadenylation in NMD and miRNA-mediated events, as loss of the poly(A) tail leads to loss of poly(A)-binding protein (PABP), which in turn is known to have profound consequences on both translation and mRNA decay (Jacobson, 1996; Mangus et al, 2003).

NMD: a conserved eukaryotic quality control mechanism

NMD is a conserved pathway found in Saccharomyces cerevisiae (yeast; Losson and Lacroute, 1979), Drosophila melanogaster (Brogna, 1999), Caenorhabditis elegans (Hodgkin et al, 1989), mammals (Maquat et al, 1981), and plants (van Hoof and Green, 1996). Most normal eukaryotic cellular mRNAs are not subject to NMD because they contain a stop codon only at the end of the coding region. In contrast, mutant mRNAs that have an in-frame stop codon upstream of the normal stop codon are recognized by the NMD machinery, leading to mRNA destabilization. Many human inherited diseases are caused by mutations that trigger NMD (Frischmeyer and Dietz, 1999). Some disease alleles contain a mutation that directly changes a sense codon to a stop codon, and others introduce an in-frame stop codon in more indirect ways, such as insertions, deletions, and mutations that disrupt RNA splicing, all of which can result in a shift of the reading frame. It has been estimated that 30% of human disease alleles cause NMD, and in many of these cases, NMD contributes to the disease phenotype (Frischmeyer and Dietz, 1999; Holbrook et al, 2004).

The core factors universally required for NMD (i.e., Upf1p, Upf2p, and Upf3p) were originally identified in a genetic screen in yeast (Culbertson et al, 1980). Homologs of these proteins were subsequently identified and shown to function in NMD in humans (Sun et al, 1998), D. melanogaster (Gatfield et al, 2003), C. elegans (Page et al, 1999; Aronoff et al, 2001), and Arabidopsis thaliana (Hori and Watanabe, 2005; Arciga-Reyes et al, 2006). Additional genes are also required for NMD in higher eukaryotes (see below). Despite a large body of work on these three Upf proteins, their mechanism of action in NMD is only beginning to be understood. The only Upf protein with a clearly defined biochemical function is Upf1, an ATP-binding protein with RNA helicase activity. Upf1 can catalyze the unwinding of double-stranded RNA (dsRNA), but its natural substrates have not been identified (Czaplinski et al, 1995; Bhattacharya et al, 2000). It is possible that Upf1 catalyzes other reactions, such as acting as a motor protein that moves along an RNA or remodeling the mRNP for translation termination and/or subsequent mRNA degradation (see below).

Many models for NMD have been proposed, but they fall essentially into two broad categories. We will refer to the first group of models collectively as the ‘downstream marker model'. This model posits a central role for ‘marker' proteins that are deposited on the mRNA downstream of the PTC and upstream of a normal termination codon (Figure 1). In a normal mRNA, the translating ribosome and/or associated factors displace these marker proteins so that they cannot trigger NMD (Figure 1A). However, in a PTC-containing mRNA, the marker proteins would still be bound when the translational apparatus recognizes the PTC. Interaction of these marker proteins with translation termination factors recruited to the PTC leads to rapid mRNA degradation (Figure 1A). The second group of models will be referred to herein as the ‘aberrant termination model' (Figure 1B). In this model, normal termination induces an mRNP rearrangement that leads to mRNA stability, whereas aberrant termination induced by a PTC either fails to cause this mRNP remodeling or triggers aberrant mRNP remodeling. In the following sections, we discuss these two groups of models, as well as some important features that we believe could unify them.


Figure 1

Models for nonsense-mediated decay. (A) The downstream marker model posits the presence of a marker protein bound to the mRNA downstream of the premature stop codon. The presence of this marker triggers degradation of PTC-containing mRNAs (right panel). In a normal mRNA, the translating ribosomes remove the downstream marker from the coding region of the mRNA, thus preventing normal mRNAs from being targeted to the NMD pathway (left panel). (B) The aberrant termination model posits that termination at a normal stop codon (octagonal stop sign) is different from translation termination at a PTC (aberrant square stop sign). The difference in termination may be due to the proximity of PABP to the normal stop codon (double-headed arrow), and/or termination at normal stop codons may be faster than termination at a PTC (clock). These two possibilities are not mutually exclusive. Normal termination at a normal stop codon triggers remodeling into a stable mRNP, whereas aberrant termination at a PTC either prevents this remodeling or triggers remodeling into an aberrant mRNP, which in turn triggers mRNA degradation by a variety of mechanisms. (C) The aberrant termination and downstream marker models can be combined into one coherent model. In this model, the difference between normal and aberrant termination can be influenced by a number of different signals. For example, proximity to PABP and other features make termination more normal, whereas downstream markers and other features make termination more aberrant. A preponderance of positive signals causes normal termination, which triggers remodeling into a stable mRNP. A preponderance of negative signals either prevents this remodeling or triggers remodeling into an aberrant mRNP.

The downstream marker model for NMD

One of the best-characterized NMD substrates is yeast PTC-bearing PGK1 mRNA. This mRNA is unstable but can be stabilized by deleting most of the sequence downstream of the PTC. Reinsertion of a small 3′ region of PGK1 mRNA, called the downstream sequence element (DSE), into the deletion mutant restores mRNA instability (Peltz et al, 1993). Further analysis showed that the heterogeneous nuclear RNP protein Hrp1p, which is able to bind to the DSE in vitro, is required for NMD of PGK1 mRNA (Gonzalez et al, 2000). Thus, Hrp1p is considered a downstream marker for NMD. However, it is not known whether DSEs and Hrp1p are required for the rapid decay of all PTC-bearing transcripts in yeast.

In mammalian cells, a large exon junction complex (EJC) deposited about 20–24 nucleotides (nt) upstream of exon–exon junctions during RNA splicing is widely considered to be a mark that triggers NMD (Le Hir et al, 2000). Several lines of evidence support this. First, nonsense codons more than 55 nt upstream of the last intron generally trigger NMD, whereas nonsense codons inserted in the last exon do not (Zhang et al, 1998). Second, depletion of EJC components by RNA interference (RNAi) reduces the efficiency of NMD (Mendell et al, 2002; Palacios et al, 2004; Gehring et al, 2005; Kim et al, 2005; Chan et al, 2007). Third, the EJC remains associated with the mRNA while it enters the translating pool of mRNAs (Kim et al, 2001; Le Hir et al, 2001). Lastly, tethering of EJC components downstream of a normal stop codon triggers NMD (Lykke-Andersen et al, 2001; Gehring et al, 2003; Palacios et al, 2004). This model is also consistent with the observation that the normal stop codon in mammalian mRNAs generally occurs in the last exon (Nagy and Maquat, 1998).
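The ‘55-nt rule' above is, at its core, a simple positional heuristic, and it can be encoded as a toy predicate. This is only an illustrative sketch under stated assumptions (the function name and coordinate convention are our own, and actual PTC recognition depends on many additional factors discussed in this review):

```python
def triggers_nmd_by_55nt_rule(stop_codon_pos: int, last_junction_pos: int) -> bool:
    """Toy encoding of the '55-nt rule' for mammalian NMD.

    Both arguments are nucleotide coordinates along the spliced mRNA.
    A stop codon lying more than 55 nt upstream of the last exon-exon
    junction is predicted to trigger NMD; a stop codon closer to, or
    downstream of, that junction (e.g., in the last exon) is not.
    """
    return last_junction_pos - stop_codon_pos > 55

# Hypothetical coordinates for illustration only:
print(triggers_nmd_by_55nt_rule(1000, 1200))  # PTC 200 nt upstream -> True
print(triggers_nmd_by_55nt_rule(1170, 1200))  # stop 30 nt upstream -> False
```

In practice this heuristic is only a first filter; as the surrounding text notes, EJC-independent NMD and other signals mean the rule has well-documented exceptions.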

Although ample evidence supports its role in NMD, the EJC is not universally needed for NMD in mammalian cells (Zhang et al, 1998; Rajavel and Neufeld, 2001; Wang et al, 2002; LeBlanc and Beemon, 2004; Buhler et al, 2006). While in some cases an alternative downstream marker may exist, that does not appear to be so, at least in the case of IGμ (Buhler et al, 2006). Interestingly, although NMD is not conserved in prokaryotes, bacterial genes can undergo NMD when introduced into eukaryotes. For instance, a PTC-containing CAT mRNA can undergo NMD in flies (Gatfield et al, 2003) and a PTC-containing LacZ mRNA can undergo NMD in yeast (Keeling et al, 2004). Moreover, most EJC components are not conserved in S. cerevisiae. Although EJC components are conserved in D. melanogaster and C. elegans, NMD is splicing-independent in these organisms (Gatfield et al, 2003; Longman et al, 2007), suggesting that the EJC does not play a role in NMD there. Thus, it appears that NMD can take place without a known downstream marker.

The aberrant termination model for NMD

The aberrant termination model (Figure 1B) depends on the notion that there is a difference between the translation termination events that occur on normal mRNAs and on PTC-containing mRNAs (Amrani et al, 2006). According to this model, normal translation termination occurs at a native stop codon because of the close proximity of a normal 3′UTR, its associated factors, and/or the poly(A) tail/PABP. This normal termination is proposed to prevent NMD from occurring. NMD substrates do not have a normal 3′UTR immediately downstream of the stop codon because translation stops in the coding region. The abnormal 3′ end does not permit the proper remodeling steps required for normal translation termination. What distinguishes premature from normal translation termination is unclear. It is possible that translation termination is slower at premature stop codons (Hilleren and Parker, 1999) or that termination is biochemically distinct at normal and premature stop codons (Amrani et al, 2004). This idea is supported by the observations that the frequency of termination (versus translational read-through) varies depending on the stop codon identity (UAA, UAG, or UGA), the nucleotide following the stop codon, and other mRNA features (Brown et al, 1990; Bonetti et al, 1995; McCaughan et al, 1995). Consistently, stop codons with low levels of read-through caused NMD in yeast, whereas those with higher levels of read-through did not (Keeling et al, 2004). A connection between the termination reaction and NMD was also revealed using in vitro translation extracts (Amrani et al, 2004). A toe-printing assay was able to detect a ribosome in the process of terminating at a PTC, but not at several normal stop codons. In addition, ribosomes stalled near PTCs could be detected in extracts made from a wild-type strain, but not from upf1- or upf2-mutant strains. Although these observations suggest that ribosomes associate more tightly with PTCs and/or are released more slowly from PTCs than from normal stop codons, it is unclear how this aberrancy results in NMD and whether it is a conserved feature of NMD.

A central question regarding the aberrant termination model concerns what feature of an mRNA triggers normal or aberrant translation termination. One possibility is that proper spacing between the stop codon and proteins deposited at the 3′UTR of mRNAs during 3′-end formation (e.g., PABP) is important for translation termination (Hilleren and Parker, 1999). This notion is supported by the observation that insertion of extra sequence into the 3′UTR of an mRNA can trigger NMD (Buhler et al, 2006; Behm-Ansmant et al, 2007). Interestingly, NMD can occur when the 3′UTR mRNPs and polyadenylation were generated independently of the normal cleavage and polyadenylation machinery (Baker and Parker, 2006; Behm-Ansmant et al, 2007). Several observations indicate that the protein factors associated with a stop codon, its downstream 3′UTR, and/or the poly(A) tail also play a critical role in determining the nature of a translation termination event. For example, tethering of PABP downstream of a PTC recruits the termination factor and rescues the stability of the mRNA (Amrani et al, 2004; Behm-Ansmant et al, 2007). Such stabilization was also observed on tethering PABP downstream of the normal stop codon of an otherwise unstable mRNA (Coller et al, 1998). In addition, when deadenylation in mammalian cells is impaired by knocking down the Caf1 poly(A) nuclease or by overexpressing a Caf1 dominant-negative mutant, a PTC-containing mRNA is stabilized, presumably because PABPs remain associated with the unshortened poly(A) tail (N Ezzeddine, D Zheng, C-YA Chen, W Zhu, X He, and A-B Shyu, unpublished observations). Although these findings suggest that PABPs play an inhibitory role that prevents NMD from occurring, proper distinction between a normal stop codon and a PTC can occur in the absence of a poly(A) tail or PABP. For example, a PTC-containing mRNA harboring the 3′ end of a transcript that does not undergo polyadenylation (histone mRNA) is a substrate for NMD in mammals (Neu-Yilik et al, 2001). Similarly, in yeast, NMD can occur on an unadenylated mRNA or in a mutant that lacks PABP (Meaux et al, 2008). Nevertheless, it is worth noting that these observations are also consistent with the notion that the presence of PABP prevents NMD from taking place.

While there is considerable support for the aberrant termination model, some observations cannot be explained by it. For instance, the model does not easily account for the roles of DSEs and EJCs in NMD. Moreover, in organisms whose 3′UTRs are long and heterogeneous in length, it is more difficult to conceive of an important role for 3′UTR length.

Important features that may unify the two models for NMD

Neither the ‘downstream marker' model nor the ‘aberrant termination' model appears to apply to all cases of NMD. Nevertheless, both of them explain several critical features of NMD, most of which have to do with signals at or downstream of a PTC. Here, we envision a coherent model that integrates elements of each to explain how PTCs are recognized by NMD (Figure 1C). It appears that multiple features (e.g., the identity of the stop codon UAA, UAG, or UGA; the nucleotide immediately following the stop codon; and the sequences, length, and associated proteins of the 3′UTR) and factors (e.g., DSEs, the EJC, PABP) influence the nature of the termination event. These features could work in an opposing or dueling fashion (e.g., inhibiting or stimulating normal or aberrant termination). It is likely that the combination of various features results in differences in the translation termination and/or decay of mRNAs. Depending on the transcript, cell conditions, and/or experimental setup, some of these features may appear to be more important than others.

From premature termination to degradation

Once an mRNA is recognized as containing a PTC, how does this lead to its decay? One possibility is that a downstream marker recruits mRNA decay enzymes to the mRNA by directly interacting with them (He and Jacobson, 1995). However, to our knowledge, there is no convincing evidence for this possibility. Another possibility is that signaling mRNA degradation depends on an mRNP-remodeling step between termination and the actual decay (Hilleren and Parker, 1999; Amrani et al, 2004). For example, normal translation termination may result in a general remodeling of the mRNP that stabilizes the mRNA. In contrast, aberrant termination would either fail to trigger remodeling or trigger an alternative mRNP-remodeling event, either of which could lead to mRNA degradation. One current challenge is to develop assays for mRNP structure that can test this model. Candidates that may mediate these remodeling events are the helicases and GTPases that have been reported to play important roles in mRNP remodeling (Jankowsky and Bowers, 2006; Small et al, 2006; Bleichert and Baserga, 2007). For example, it is possible that the helicase activity of Upf1 and/or the GTPase activity of eRF3 have key roles in the remodeling steps (Kashima et al, 2006). Since eRF3 is a PABP-interacting protein (Uchida et al, 2002), it is possible that the interaction between the Upf1–eRF1–eRF3 trimer and PABP prevents aberrant mRNP remodeling.

An intermediate mRNP-remodeling step between translation termination and mRNA decay allows for versatility in how an mRNA is ultimately degraded by NMD. Thus, while the core of the NMD pathway appears to be conserved in all eukaryotes, the downstream consequences of PTC recognition appear to differ. In yeast, decapping (i.e., removal of the 5′-cap structure) is a major consequence of PTC recognition (Muhlrad and Parker, 1994), whereas in flies, PTC recognition leads to endonucleolytic cleavage of the mRNA in the vicinity of the aberrant stop codon (Gatfield and Izaurralde, 2004). In other species, including mammals, PTC recognition leads to accelerated deadenylation (Cao and Parker, 2003; Chen and Shyu, 2003). Another feature of the proposed mRNP-remodeling step is that the consequence of aberrant or normal termination may not be limited to one specific decay pathway. For instance, PTC recognition in yeast can increase the decapping rate (Muhlrad and Parker, 1994), reduce translation (Muhlrad and Parker, 1999), or accelerate deadenylation (Cao and Parker, 2003) and subsequent degradation by the exosome (Cao and Parker, 2003; Mitchell and Tollervey, 2003). We conclude that mRNP remodeling directed by multiple features downstream of the stop codon plays an important role in the quality control of gene expression. This is a recurring theme in post-transcriptional regulation, including miRNA-mediated mRNA silencing, as described in the next section.

miRNA-mediated downregulation of gene expression

miRNAs are endogenous ∼22-nt non-coding RNAs that control fundamental cellular processes in animals and plants. In vertebrates, miRNA genes are one of the most abundant classes of regulatory genes (∼1% of all genes) (Lim et al, 2003; Bartel, 2004; Bartel and Chen, 2004; Lim et al, 2005). After incorporation into the RNA-induced silencing complex (RISC), miRNAs guide the RNAi machinery to their target mRNAs by forming RNA duplexes, resulting in sequence-specific repression of productive translation or in mRNA decay (Ambros, 2004; Bartel, 2004; Zamore and Haley, 2005). Regulation by miRNAs is typically mediated by the formation of imperfect hybrids with 3′UTR sequences of target mRNAs, and a given miRNA-targeted mRNA often has multiple miRNA target sites. Computational methods developed to predict miRNA target genes suggest that 20–30% of protein-coding genes are likely targets of miRNAs (Lewis et al, 2003, 2005; Rajewsky, 2006).
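The core of such target-prediction methods is complementarity between a 3′UTR and the miRNA ‘seed' (roughly nucleotides 2–8). As a toy illustration only (the function names, the perfect-seed-match rule, and the made-up UTR are our own simplifying assumptions; published predictors add context scoring, conservation filters, and more), one can scan a 3′UTR for matches to the reverse complement of the seed:

```python
def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence (A<->U, G<->C)."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[nt] for nt in reversed(rna))

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based positions in `utr` that match the miRNA seed.

    The 'seed' is taken as miRNA nucleotides 2-8 (a 7-mer); a target
    site must be complementary to it, so we search the UTR for the
    reverse complement of the seed. This is only the core matching
    step, not a full prediction pipeline.
    """
    seed = mirna[1:8]                  # nucleotides 2-8 (0-based slice)
    site = reverse_complement(seed)    # what a target site looks like
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# let-7a scanned against a made-up 3'UTR carrying two seed sites.
mirna = "UGAGGUAGUAGGUUGUAUAGUU"
utr = "AAACUACCUCAGGGAAACUACCUCAUU"
print(seed_match_sites(mirna, utr))  # [3, 17]
```

Since real miRNA–target hybrids are imperfect outside the seed, this exact-match rule is deliberately the loosest useful criterion; it explains why such methods flag so large a fraction of protein-coding genes as candidate targets.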

Initially, miRNAs were thought to down-regulate protein expression solely by inhibiting target mRNA translation (Olsen and Ambros, 1999; Seggerson et al, 2002). However, recent studies have indicated that many miRNAs can induce rapid decay of target mRNAs (Bagga et al, 2005; Lim et al, 2005; Behm-Ansmant et al, 2006; Giraldez et al, 2006; Wu et al, 2006; Eulalio et al, 2007c), which then indirectly reduces the amount of protein made. Thus, there are at least two general modes of miRNA-mediated downregulation of targets in metazoan cells: miRNA-mediated translational repression and miRNA-mediated RNA decay (Figure 2; Jackson and Standart, 2007; Nilsen, 2007; Pillai et al, 2007).


Figure 2

Mechanisms of miRNA-mediated mRNA silencing. After incorporation into the RISC to form miRNPs, miRNAs guide the miRNPs to their target mRNAs by forming imperfect hybrids with 3′UTR sequences of target mRNAs. The interaction between a miRNP and its target mRNA can promote direct inhibition of translation initiation. Alternatively, the miRNP may accelerate deadenylation of the target mRNA, which in turn represses translation initiation or results in mRNA degradation. In P-bodies, miRNA-targeted mRNAs may be sequestered from the translational machinery and degraded or stored for subsequent use.

Mechanisms of miRNA-mediated translational repression

The mechanism of translational repression by miRNAs is still a matter of controversy. Two distinct mechanisms have been proposed to explain how miRNA-mediated translational repression is accomplished without affecting the abundance of target mRNAs. One hypothesizes that miRNAs inhibit translation initiation and the other hypothesizes inhibition of a ‘post-initiation' step in translation, which also elicits co-translational degradation of the nascent peptide. We refer readers to three recent excellent reviews on this controversial issue (Jackson and Standart, 2007; Nilsen, 2007; Pillai et al, 2007). Here, we focus on several new studies, all of which indicate that miRNAs can inhibit translation initiation.

It was found that the miRNA–RISC complex associates with an anti-translation initiation factor, eIF6, which inhibits joining of the 60S to the 40S subunit, thus preventing translation initiation (Chendrimada et al, 2007). Depleting eIF6 in either human cells or C. elegans effectively abolishes miRNA-mediated translational repression. In another study (Thermann and Hentze, 2007), a cell-free system was developed using D. melanogaster embryo extracts, which recapitulated translational repression mediated by the miRNA miR-2 without affecting mRNA stability. The authors found that the translational repression depended on the presence of a physiological cap structure, m7GpppG, at the 5′ end of the mRNA substrate, a feature required for cap-dependent translation initiation in eukaryotes (Jacobson, 1996; Gingras et al, 1999). Intriguingly, miR-2 mRNPs co-sedimented with polyribosomes in a sucrose gradient, but they did not possess the features of a polyribosome. These miRNPs (heavier than the 80S monosome) could still form when polyribosome formation and 60S ribosomal subunit joining were blocked, indicating that the mRNAs associated with them were not being translated. In many past studies, it was assumed that co-sedimentation of miRNA-containing complexes with polysomes meant that these complexes contained ribosomes, but the study by Thermann and Hentze (2007) indicates that miRNA mRNPs that co-sediment with polysomes are not necessarily being translated.

The observation (Kiriakidou et al, 2007) that Argonaute proteins, the catalytic components of RISC (Rand et al, 2005), contain a highly conserved motif that binds the m7G-cap structure also supports the idea that miRNAs inhibit translation initiation. It is possible that Argonaute proteins compete with eIF4E for cap binding, thereby preventing formation on the 5′-cap of the eIF4F complex necessary for cap-dependent translation initiation. This is consistent with the observation that Ago2, but not a variant with mutations in the cap-binding motif, blocks translation when artificially tethered to the 3′UTR of mRNAs (Pillai et al, 2004). In two other studies, let-7 miRNA-mediated translational repression was recapitulated in two different cell-free systems established with extracts prepared from either mouse Krebs-2 ascites cells (Mathonnet et al, 2007) or human HEK293F cells over-expressing miRNA pathway components (Wakiyama et al, 2007). In these systems, the poly(A) tail and 5′-cap are both required for the translational repression, suggesting that let-7 represses translation by impairing the synergistic enhancement of translation by the 5′-cap and 3′ poly(A) tail. Collectively, these in vivo and in vitro studies support the conclusion that inhibition of translation initiation by miRNAs represents one way by which miRNA-mediated translational repression is achieved.

miRNA-mediated RNA decay

Although similar in length, miRNAs are generated by a distinct mechanism from that producing small interfering RNA (siRNA). siRNAs are chopped from long dsRNAs by Dicer (Bernstein et al, 2001). The antisense strand of the siRNA is assembled into RISC, which then degrades RNA molecules with sequences completely complementary to the siRNA by endonucleolytic cleavage (reviewed in Hannon, 2002; Dykxhoorn et al, 2003). On the other hand, miRNAs are derived from native genes and form imperfect matches with target mRNAs that do not elicit endonucleolytic cleavage of target mRNAs (Ambros, 2004; Bartel, 2004). Instead, recent evidence indicates that miRNA-mediated decay can be triggered by deadenylation (see below).

A general picture of miRNA-mediated RNA decay emerges from recent studies in D. melanogaster cells (Behm-Ansmant et al, 2006), zebrafish embryos (Giraldez et al, 2006), and human cells (Wu et al, 2006): mRNAs targeted by miRNAs for degradation first undergo deadenylation. In zebrafish, the miRNA miR-430 was shown to target several hundred maternal mRNAs for decay by first triggering their rapid deadenylation. This massive destruction of maternal mRNAs is required to silence the expression of maternal mRNAs into proteins so that zebrafish embryonic development can proceed. This example illustrates well how gene silencing by a miRNA can be accomplished mainly at the level of mRNA decay triggered by deadenylation. In Drosophila cells, the deadenylation is mediated by the Ccr4–Caf1–Not poly(A) nuclease complex (Behm-Ansmant et al, 2006). However, the detailed mechanism of miRNA-induced deadenylation, the participating poly(A) nucleases, and many issues related to miRNA-induced mRNA decay in other organisms remain to be addressed. Given that miRNA-induced deadenylation does not necessarily lead to decay of the RNA body (Behm-Ansmant et al, 2006), it is possible that deadenylation is one point on which different modes of miRNA-mediated mRNA silencing, including miRNA-mediated translational repression and miRNA-mediated RNA decay, can converge. Because the mechanisms of only a few miRNAs have so far been characterized in detail, the generality of any one mode of miRNA-mediated mRNA silencing remains to be seen.

The role of deadenylation in miRNA-mediated translational repression

Cytoplasmic PABPs interact with both the poly(A) tail and the eIF4F complex bound to the 5′ cap, thereby bringing the two ends of the mRNA together (Kahvejian et al, 2001; Mangus et al, 2003). This interaction is important for both translation initiation and mRNA stability (Jacobson, 1996), making the poly(A) tail crucial for both processes. Because one major stage at which miRNAs repress translation is the initiation step, a miRNP formed on the target mRNA could promote deadenylation to disrupt this 5′–3′ end interaction, an effective and immediate way of reducing translation initiation.

Several observations suggest that deadenylation is a cause, rather than a consequence, of miRNA-mediated translational repression, particularly at the initiation step. Blocking translation initiation with a stem-loop in the 5′UTR of the target mRNA does not abolish the rapid deadenylation and decay induced by the miRNA (Wu et al, 2006). Mishima et al (2006) showed that miR-430 directs the deadenylation and translational repression of nanos1 mRNA during zebrafish embryogenesis. When the miR-430 target mRNA was provided with a non-natural ApppG cap, which significantly impairs normal translation initiation, the rapid deadenylation was unaffected. Using a cell-free system, Wakiyama and co-workers showed that deadenylation triggered by the miRNA let-7 does not require active translation and can proceed in the presence of cycloheximide, a potent translation inhibitor. Moreover, let-7-mediated deadenylation is independent of the structure of the mRNA 5′ terminus, whereas the cap and the poly(A) tail are both required for translational repression by let-7 (Wakiyama et al, 2007). These observations suggest that let-7 recruits miRNP complexes to its target mRNAs, resulting in deadenylation, which in turn abolishes the cap–poly(A) synergy, thereby repressing translation initiation. This is reminiscent of the mechanism by which translational repression of maternal mRNAs is accomplished by shortening of the poly(A) tail during Xenopus laevis oocyte maturation (Richter, 1996; Gray and Wickens, 1998).

Although several studies support the idea that accelerated deadenylation induced by miRNAs represents a major way to repress the translation of target mRNAs without affecting their stability, it is unlikely to be the universal mechanism by which this is achieved. For example, in D. melanogaster cells, blocking mRNA deadenylation by knocking down the Caf1 poly(A) nuclease does not relieve miRNA-mediated translational repression (Behm-Ansmant et al, 2006). In this case, translational repression and deadenylation appear to be two independent events in miRNA-mediated mRNA silencing. Furthermore, Wu et al (2006) showed that translation of mRNAs lacking a poly(A) tail remains repressed by miRNAs, indicating that deadenylation is not the cause of miRNA-mediated translational silencing in this case. Alternatively, when deadenylation is impaired, a fail-safe mechanism that can also effectively block translation initiation (e.g., decapping) may be activated to bypass the requirement for deadenylation (Eulalio et al, 2007c).

The role of P-bodies in mRNA quality control

P-bodies are specific cytoplasmic foci that contain proteins known to function in mRNA metabolism (Kedersha and Anderson, 2007; Parker and Sheth, 2007; Eulalio et al, 2007a). These foci are also referred to as GW bodies because they carry GW182 proteins, which are required for miRNA-mediated translational repression (Eystathioy et al, 2002; Jakymiw et al, 2005; Meister et al, 2005; Rehwinkel et al, 2005; Liu et al, 2005a; Behm-Ansmant et al, 2006). The function of P-bodies is not yet fully understood, but it is clear that mRNAs in P-bodies can either be degraded or re-enter the translating pool. One important aspect of the protein composition of P-bodies is the presence of enzymes that promote mRNA decay, including the deadenylase CCR4 (Sheth and Parker, 2003; Andrei et al, 2005) and the DCP1–DCP2 decapping complex (Ingelfinger et al, 2002; Sheth and Parker, 2003). As P-bodies contain the 5′–3′ exonuclease XRN1 (Ingelfinger et al, 2002; Sheth and Parker, 2003) but lack the exosome complex (which contains 3′–5′ exonucleases), mRNAs are probably degraded via the 5′-to-3′ decay pathway in P-bodies. P-bodies lack ribosomal components, most translation initiation factors, and PABP, which supports the notion that P-bodies are sites of translational repression. This feature also indicates that ribosomes, PABP, and translation initiation factors must dissociate from mRNPs before, or immediately after, the mRNPs enter existing P-bodies or aggregate to form new ones.

In addition to general decay factors, factors required for NMD (Upf1, Upf2, Upf3, Smg5, and Smg7), as well as PTC-containing mRNAs, are found in P-bodies (Unterholzner and Izaurralde, 2004; Sheth and Parker, 2006). The first NMD factor shown to localize to P-bodies was the human Smg7 protein (Unterholzner and Izaurralde, 2004). As Smg7 is known to bind phosphorylated Upf1 (Kashima et al, 2006), it is possible that after Upf1 detects a PTC-containing mRNA, the interaction between Smg7 and phosphorylated Upf1 targets the NMD substrate to P-bodies for subsequent mRNA degradation. In yeast, Upf1, Upf2, and Upf3 localize to P-bodies, and Upf1 localization is enhanced in upf2 and upf3 mutants (Sheth and Parker, 2006). Collectively, these observations suggest that NMD can occur in P-bodies.

P-bodies also contain protein factors involved in miRNA-mediated translational repression, including the Argonaute proteins, Rck/p54, and GW182 (reviewed in Kedersha and Anderson, 2007; Parker and Sheth, 2007; Eulalio et al, 2007a). Depleting Rck/p54 leads to a loss of P-bodies and to defects in both miRNA-mediated translational repression (Chu and Rana, 2006) and miRNA-mediated mRNA decay (Eulalio et al, 2007c), suggesting that P-bodies and miRNA-mediated events are interrelated. However, it is clear that P-bodies are not absolutely required for miRNA function, as depletion of Lsm1 or GW182 in human and D. melanogaster cells, which causes a loss of P-bodies and disperses Argonaute proteins throughout the cell, does not affect miRNA function (Chu and Rana, 2006; Stoecklin et al, 2006; Eulalio et al, 2007b). Moreover, miRNAs have been reported to associate with polysomes, which seems inconsistent with the notion that P-bodies are required to keep miRNA–mRNPs translationally silenced (Nelson et al, 2004; Maroney et al, 2006; Nottrott et al, 2006). Thus, although there is clearly a close link between P-bodies and miRNA-mediated translational repression, the precise nature of this link remains to be determined.

We suggest that, rather than being required for mRNA decay and translational repression, P-bodies increase the efficiency of these events. One possibility is that concentrating repressed mRNPs in P-bodies facilitates additional mRNP-remodeling steps that reinforce the repression for long-term storage. In other cases, these remodeling events may trigger more efficient mRNA degradation. Sequestration in P-bodies may also provide a rapid means of preventing accidental translation of aberrant mRNAs, such as PTC-containing transcripts, prior to their degradation. Moreover, as mRNAs may leave P-bodies and re-enter the translating pool (Brengues et al, 2005; Bhattacharyya et al, 2006), P-bodies could function as temporary storage sites for repressed mRNAs. Thus, P-bodies have the potential to regulate gene expression under various conditions and also provide an additional quality-control point at which mRNAs that have been mistakenly repressed can be reactivated. In so doing, P-bodies add a further layer of fine-tuning of gene expression that helps maintain cellular homeostasis.

Common and distinct features of NMD and miRNA-mediated silencing

NMD and miRNA-mediated silencing have common features, but they clearly differ in many respects. Both occur in the cytoplasm and result in mRNA degradation, but miRNAs have the additional ability to inhibit translation, which provides for the possibility of reversible repression. Mammalian NMD is facilitated by nuclear processing events that deposit the EJC signal (Chang et al, 2007), whereas it is not clear whether miRNA-mediated silencing requires nuclear events other than the Drosha-mediated cleavage that generates miRNA precursors (Lee et al, 2006). Both NMD and miRNA-mediated silencing appear to require sequential mRNP-remodeling steps, raising the possibility that they may use common remodeling events, but this will not be known until these steps are better defined. A clear difference between the two is that NMD absolutely requires translation to define the PTC, whereas miRNA-mediated mRNA decay can occur in the absence of translation. Both NMD and miRNA-mediated silencing appear to be able to take place in P-bodies (Liu et al, 2005b; Sheth and Parker, 2006), but the proportion of these two events that occurs in P-bodies versus other cytoplasmic sites may be quite different, as inhibition of P-body formation down-regulates NMD but has no obvious impact on miRNA-mediated silencing (Chu and Rana, 2006; Eulalio et al, 2007b). Finally, both NMD and miRNA-mediated mRNA silencing can use deadenylation as a crucial step toward mediating their effects, but both can also use deadenylation-independent pathways, possibly as a fail-safe mechanism, to achieve their goals (Yamashita et al, 2005; Behm-Ansmant et al, 2006; Wu et al, 2006).

Future directions

Many issues in the field remain to be addressed and clarified, including how miRNAs determine whether to exert their action through translational repression or mRNA decay, how mRNPs are remodeled, and what changes in mRNP composition occur during NMD and miRNA-mediated mRNA silencing. One key issue is to develop methods to independently examine the many steps required for NMD and miRNA-mediated silencing. It is now apparent that there are multiple separable steps in NMD, including recognition of the PTC, remodeling of the mRNP, targeting to P-bodies, and mRNA decay. Therefore, simply monitoring the steady-state level of total mRNA, which includes both nuclear and cytoplasmic mRNA, is insufficient to address these challenging issues. More attention should be paid to monitoring decay kinetics and to studying precursor–product relationships by methods such as transcriptional pulsing (Yamashita et al, 2005; Chen et al, 2007). Although Hrp1p, eRF3, PABP, and EJC factors probably serve to distinguish normal stop codons from PTCs, they could also act on downstream events, including the mRNA degradation step itself. For example, analysis of translation termination in yeast in vitro translation extracts indicates that Upf1p and Upf2p are required at the PTC-recognition step (Amrani et al, 2004). Also, Upf1 has been shown to associate preferentially with NMD substrates in vivo in worms and S. pombe (Rodriguez-Gabriel et al, 2006; Johns et al, 2007). A major challenge for the future will be to clarify the roles of each NMD factor in the various steps of NMD.
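The decay-kinetics measurements mentioned above can be illustrated with a toy calculation: assuming simple first-order decay, an mRNA half-life can be estimated from a transcriptional-pulsing time course by log-linear regression. The data points and the function name below are invented for illustration only:

```python
import math

def half_life(timepoints, signal):
    """Fit ln(signal) = ln(s0) - k*t by least squares; return ln(2)/k.
    Assumes simple first-order (exponential) decay."""
    n = len(timepoints)
    ys = [math.log(s) for s in signal]
    mean_t = sum(timepoints) / n
    mean_y = sum(ys) / n
    slope = sum((t - mean_t) * (y - mean_y)
                for t, y in zip(timepoints, ys)) \
        / sum((t - mean_t) ** 2 for t in timepoints)
    k = -slope  # decay rate constant
    return math.log(2) / k

# Synthetic time course (hours) with a true half-life of 2 h:
times = [0, 1, 2, 4, 8]
levels = [100 * 0.5 ** (t / 2) for t in times]
print(round(half_life(times, levels), 2))  # → 2.0
```

With real pulse-chase data the fit would be noisier, and deviations from a straight line in log space are themselves informative, since biphasic kinetics can reveal a deadenylation phase preceding decay of the RNA body.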

A key unanswered question regarding miRNAs is what determines whether they trigger mRNA decay or translational repression. It is possible that the primary effect of the miRNA machinery is to remodel the mRNP so as to either prevent formation of, or disrupt, the closed-loop structure between the 5′ cap and the 3′ poly(A) tail that is critical for translation initiation. The downstream consequences of this mRNP remodeling may then vary depending on physiological conditions, developmental cues, and other factors. In some cases, miRNA-targeted mRNAs may be subjected to rapid degradation, whereas in other cases they may simply be translationally repressed and stored in P-bodies until needed later, for example during a cellular stress response. Since P-bodies are not absolutely necessary for miRNA-mediated mRNA silencing or for normal mRNA decay, a close examination of P-body status during embryogenesis, cell growth and differentiation, and various disease states may shed new light on the physiological function and significance of P-bodies in regulating gene expression.

The importance of mRNP remodeling may be revealed further by studying when and how PABPs dissociate from an mRNP. This is a particularly critical issue that has not been addressed, given that PABPs are not present in P-bodies. PABP binds its RNA substrate with very high affinity (in the nanomolar range) (Görlach et al, 1994; Kuhn and Pieler, 1996; Deardorff and Sachs, 1997), so removing PABP is particularly challenging if mRNP remodeling exerts its effect on translational repression per se, without deadenylation. This raises the question of what drives PABPs off repressed mRNPs so that the mRNPs can enter existing P-bodies or nucleate new ones, a key step in determining their fate. Future research addressing the key changes in mRNP composition at each critical remodeling step of an mRNA, as it travels from the nucleus to the cytoplasm, will be crucial for understanding how mRNA decay, translation, and RNA quality control are regulated through the interplay of different mechanisms.


We thank Chyi-Ying A Chen for critical reading and valuable comments on the paper, and Nader Ezzeddine for assistance with artwork.


  • Ambros V (2004) The functions of animal microRNAs. Nature 431: 350–355
  • Amrani N, Ganesan R, Kervestin S, Mangus DA, Ghosh S, Jacobson A (2004) A faux 3′-UTR promotes aberrant termination and triggers nonsense-mediated mRNA decay. Nature 432: 112–118
  • Amrani N, Sachs MS, Jacobson A (2006) Early nonsense: mRNA decay solves a translational problem. Nat Rev Mol Cell Biol 7: 415–425
  • Andrei MA, Ingelfinger D, Heintzmann R, Achsel T, Rivera-Pomar R, Luhrmann R (2005) A role for eIF4E and eIF4E-transporter in targeting mRNPs to mammalian processing bodies. RNA 11: 717–727
  • Arciga-Reyes L, Wootton L, Kieffer M, Davies B (2006) UPF1 is required for nonsense-mediated mRNA decay (NMD) and RNAi in Arabidopsis. Plant J 47: 480–489
  • Aronoff R, Baran R, Hodgkin J (2001) Molecular identification of smg-4, required for mRNA surveillance in C. elegans. Gene 268: 153–164
  • Bagga S, Bracht J, Hunter S, Massirer K, Holtz J, Eachus R, Pasquinelli AE (2005) Regulation by let-7 and lin-4 miRNAs results in target mRNA degradation. Cell 122: 553–563
  • Baker KE, Parker R (2006) Conventional 3′ end formation is not required for NMD substrate recognition in Saccharomyces cerevisiae. RNA 12: 1441–1445
  • Bartel DP (2004) MicroRNAs: genomics, biogenesis, mechanism, and function. Cell 116: 281–297
  • Bartel DP, Chen CZ (2004) Micromanagers of gene expression: the potentially widespread influence of metazoan microRNAs. Nat Rev Genet 5: 396–400
  • Behm-Ansmant I, Gatfield D, Rehwinkel J, Hilgers V, Izaurralde E (2007) A conserved role for cytoplasmic poly(A)-binding protein 1 (PABPC1) in nonsense-mediated mRNA decay. EMBO J 26: 1591–1601
  • Behm-Ansmant I, Rehwinkel J, Doerks T, Stark A, Bork P, Izaurralde E (2006) mRNA degradation by miRNAs and GW182 requires both CCR4:NOT deadenylase and DCP1:DCP2 decapping complexes. Genes Dev 20: 1885–1898
  • Bernstein E, Caudy AA, Hammond SM, Hannon GJ (2001) Role for a bidentate ribonuclease in the initiation step of RNA interference. Nature 409: 363–366
  • Bhattacharya A, Czaplinski K, Trifillis P, He F, Jacobson A, Peltz SW (2000) Characterization of the biochemical properties of the human Upf1 gene product that is involved in nonsense-mediated mRNA decay. RNA 6: 1226–1235
  • Bhattacharyya SN, Habermacher R, Martine U, Closs EI, Filipowicz W (2006) Relief of microRNA-mediated translation repression in human cells subjected to stress. Cell 125: 1111–1124
  • Bleichert F, Baserga SJ (2007) The long unwinding road of RNA helicases. Mol Cell 27: 339–352
  • Bonetti B, Fu L, Moon J, Bedwell DM (1995) The efficiency of translation termination is determined by a synergistic interplay between upstream and downstream sequences in Saccharomyces cerevisiae. J Mol Biol 251: 334–345
  • Brengues M, Teixeira D, Parker R (2005) Movement of eukaryotic mRNAs between polysomes and cytoplasmic processing bodies. Science 310: 486–489
  • Brogna S (1999) Nonsense mutations in the alcohol dehydrogenase gene of Drosophila melanogaster correlate with an abnormal 3′ end processing of the corresponding pre-mRNA. RNA 5: 562–573
  • Brown CM, Stockwell PA, Trotman CNA, Tate WP (1990) Sequence analysis suggests that tetra-nucleotides signal the termination of protein synthesis in eukaryotes. Nucleic Acids Res 18: 6339–6345
  • Buhler M, Steiner S, Mohn F, Paillusson A, Muhlemann O (2006) EJC-independent degradation of nonsense immunoglobulin-mu mRNA depends on 3′ UTR length. Nat Struct Mol Biol 13: 462–464
  • Cao D, Parker R (2003) Computational modeling and experimental analysis of nonsense-mediated decay in yeast. Cell 113: 533–545
  • Chan WK, Huang L, Gudikote JP, Chang YF, Imam JS, MacLean JA II, Wilkinson MF (2007) An alternative branch of the nonsense-mediated decay pathway. EMBO J 26: 1820–1830
  • Chang Y-F, Imam JS, Wilkinson MF (2007) The nonsense-mediated decay RNA surveillance pathway. Annu Rev Biochem 76: 51–74
  • Chen C-YA, Shyu A-B (2003) Rapid deadenylation triggered by a nonsense codon precedes decay of the RNA body in a mammalian cytoplasmic nonsense-mediated decay pathway. Mol Cell Biol 23: 4805–4813
  • Chen C-YA, Yamashita Y, Chang T-C, Yamashita A, Zhu W, Zhong Z, Shyu A-B (2007) Versatile applications of transcriptional pulsing to study mRNA turnover in mammalian cells. RNA 13: 1775–1786
  • Chendrimada TP, Finn KJ, Ji X, Baillat D, Gregory RI, Liebhaber SA, Pasquinelli AE, Shiekhattar R (2007) MicroRNA silencing through RISC recruitment of eIF6. Nature 447: 823–828
  • Chu CY, Rana TM (2006) Translation repression in human cells by microRNA-induced gene silencing requires RCK/p54. PLoS Biol 4: e210
  • Coller JM, Gray NK, Wickens MP (1998) mRNA stabilization by poly(A) binding protein is independent of poly(A) and requires translation. Genes Dev 12: 3226–3235
  • Culbertson MR, Underbrink KM, Fink GR (1980) Frameshift suppression in Saccharomyces cerevisiae. II. Genetic properties of group II suppressors. Genetics 95: 833–853
  • Czaplinski K, Weng Y, Hagan KW, Peltz SW (1995) Purification and characterization of the Upf1 protein: a factor involved in translation and mRNA degradation. RNA 1: 610–623
  • Deardorff JA, Sachs AB (1997) Differential effects of aromatic and charged residue substitutions in the RNA binding domains of the yeast poly(A)-binding protein. J Mol Biol 269: 67–81
  • Dreyfuss G, Kim VN, Kataoka N (2003) Messenger-RNA-binding proteins and the messages they carry. Nat Rev Mol Cell Biol 3: 195–205
  • Dykxhoorn DM, Novina CD, Sharp PA (2003) Killing the messenger: short RNAs that silence gene expression. Nat Rev Mol Cell Biol 4: 457–467
  • Eulalio A, Behm-Ansmant I, Izaurralde E (2007a) P bodies: at the crossroads of post-transcriptional pathways. Nat Rev Mol Cell Biol 8: 9–22
  • Eulalio A, Behm-Ansmant I, Schweizer D, Izaurralde E (2007b) P-body formation is a consequence, not the cause, of RNA-mediated gene silencing. Mol Cell Biol 27: 3970–3981
  • Eulalio A, Rehwinkel J, Stricker M, Huntzinger E, Yang S-F, Doerks T, Dorner S, Bork P, Boutros M, Izaurralde E (2007c) Target-specific requirements for enhancers of decapping in miRNA-mediated gene silencing. Genes Dev 21: 2558–2570
  • Eystathioy T, Chan EKL, Tenenbaum SA, Keene JD, Griffith K, Fritzler MJ (2002) A phosphorylated cytoplasmic autoantigen, GW182, associates with a unique population of human mRNAs within novel cytoplasmic speckles. Mol Biol Cell 13: 1338–1351
  • Frischmeyer PA, Dietz HC (1999) Nonsense-mediated mRNA decay in health and disease. Hum Mol Genet 8: 1893–1900
  • Gatfield D, Izaurralde E (2004) Nonsense-mediated messenger RNA decay is initiated by endonucleolytic cleavage in Drosophila. Nature 429: 575–578
  • Gatfield D, Unterholzner L, Ciccarelli FD, Bork P, Izaurralde E (2003) Nonsense-mediated mRNA decay in Drosophila: at the intersection of the yeast and mammalian pathways. EMBO J 22: 3960–3970
  • Gehring K, Neu-Yilik G, Schell T, Hentze MW, Kulozik AE (2003) Y14 and hUpf3b form an NMD-activating complex. Mol Cell 11: 939–949
  • Gehring NH, Kunz JB, Neu-Yilik G, Breit S, Viegas MH, Hentze MW, Kulozik AE (2005) Exon-junction complex components specify distinct routes of nonsense-mediated mRNA decay with differential cofactor requirements. Mol Cell 20: 65–75
  • Gingras A-C, Raught B, Sonenberg N (1999) eIF4 initiation factors: effectors of mRNA recruitment to ribosomes and regulators of translation. Annu Rev Biochem 68: 913–963
  • Giraldez AJ, Mishima Y, Rihel J, Grocock RJ, Van Dongen S, Inoue K, Enright AJ, Schier AF (2006) Zebrafish MiR-430 promotes deadenylation and clearance of maternal mRNAs. Science 312: 75–79
  • Gonzalez CI, Ruiz-Echevarria MJ, Vasudevan S, Henry MF, Peltz SW (2000) The yeast hnRNP-like protein Hrp1/Nab4 marks a transcript for nonsense-mediated mRNA decay. Mol Cell 5: 489–499
  • Görlach M, Burd CG, Dreyfuss G (1994) The mRNA poly(A)-binding protein: localization, abundance, and RNA-binding specificity. Exp Cell Res 211: 400–407
  • Gray NK, Wickens M (1998) Control of translation initiation in animals. Annu Rev Cell Dev Biol 14: 399–458
  • Hannon GJ (2002) RNA interference. Nature 418: 244–251
  • He F, Jacobson A (1995) Identification of a novel component of the nonsense-mediated mRNA decay pathway by use of an interacting protein screen. Genes Dev 9: 437–454
  • Hilleren P, Parker R (1999) mRNA surveillance in eukaryotes: kinetic proofreading of proper translation termination as assessed by mRNP domain organization? RNA 5: 711–719
  • Hodgkin J, Papp A, Pulak R, Ambros V, Anderson P (1989) A new kind of informational suppression in the nematode Caenorhabditis elegans. Genetics 123: 301–313
  • Holbrook JA, Neu-Yilik G, Hentze MW, Kulozik AE (2004) Nonsense-mediated decay approaches the clinic. Nat Genet 36: 801–808
  • Hori K, Watanabe Y (2005) UPF3 suppresses aberrant spliced mRNA in Arabidopsis. Plant J 43: 530–540
  • Ingelfinger D, Arndt-Jovin DJ, Luhrmann R, Achsel T (2002) The human LSm1-7 proteins colocalize with the mRNA-degrading enzymes Dcp1/2 and Xrn1 in distinct cytoplasmic foci. RNA 8: 1489–1501
  • Jackson RJ, Standart N (2007) How do microRNAs regulate gene expression? Sci STKE 2007: re1
  • Jacobson A (1996) Poly(A) metabolism and translation: the closed-loop model. In Translational Control, Hershey JWB, Mathews MB, Sonenberg N (eds), pp 451–480. Plainview: Cold Spring Harbor Laboratory Press
  • Jakymiw A, Lian S, Eystathioy T, Li S, Satoh M, Hamel JC, Fritzler MJ, Chan EK (2005) Disruption of GW bodies impairs mammalian RNA interference. Nat Cell Biol 8: 1267–1274
  • Jankowsky E, Bowers H (2006) Remodeling of ribonucleoprotein complexes with DExH/D RNA helicases. Nucleic Acids Res 34: 4181–4188
  • Johns L, Grimson A, Kuchma SL, Newman CL, Anderson P (2007) Caenorhabditis elegans SMG-2 selectively marks mRNAs containing premature translation termination codons. Mol Cell Biol 27: 5630–5638
  • Kahvejian A, Roy G, Sonenberg N (2001) The mRNA closed-loop model: the function of PABP and PABP-interacting proteins in mRNA translation. Cold Spring Harb Symp Quant Biol 66: 293–300
  • Kashima I, Yamashita A, Izumi N, Kataoka N, Morishita R, Hoshino S, Ohno M, Dreyfuss G, Ohno S (2006) Binding of a novel SMG-1–Upf1–eRF1–eRF3 complex (SURF) to the exon junction complex triggers Upf1 phosphorylation and nonsense-mediated mRNA decay. Genes Dev 20: 355–367
  • Kedersha N, Anderson P (2007) Mammalian stress granules and processing bodies. Methods Enzymol 431: 61–81
  • Keeling KM, Lanier J, Du M, Salas-Marco J, Gao L, Kaenjak-Angeletti A, Bedwell DM (2004) Leaky termination at premature stop codons antagonizes nonsense-mediated mRNA decay in S. cerevisiae. RNA 10: 691–703
  • Kim VN, Yong J, Kataoka N, Abel L, Diem MD, Dreyfuss G (2001) The Y14 protein communicates to the cytoplasm the position of exon–exon junctions. EMBO J 20: 2062–2068
  • Kim YK, Furic L, Desgroseillers L, Maquat LE (2005) Mammalian Staufen1 recruits Upf1 to specific mRNA 3′UTRs so as to elicit mRNA decay. Cell 120: 195–208
  • Kiriakidou M, Tan GS, Lamprinaki S, De Planell-Saguer M, Nelson PT, Mourelatos Z (2007) An mRNA m7G cap binding-like motif within human Ago2 represses translation. Cell 129: 1141–1151
  • Kuhn U, Pieler T (1996) Xenopus poly(A) binding protein: functional domains in RNA binding and protein–protein interaction. J Mol Biol 256: 20–30
  • Le Hir H, Gatfield D, Izaurralde E, Moore MJ (2001) The exon–exon junction complex provides a binding platform for factors involved in mRNA export and nonsense-mediated mRNA decay. EMBO J 20: 4987–4997
  • Le Hir H, Moore MJ, Maquat LE (2000) Pre-mRNA splicing alters mRNP composition: evidence for stable association of proteins at exon–exon junctions. Genes Dev 14: 1098–1108
  • LeBlanc JJ, Beemon KL (2004) Unspliced Rous sarcoma virus genomic RNAs are translated and subjected to nonsense-mediated mRNA decay before packaging. J Virol 78: 5139–5146
  • Lee Y, Han J, Yeom KH, Jin H, Kim VN (2006) Drosha in primary microRNA processing. Cold Spring Harb Symp Quant Biol 71: 51–57
  • Lewis BP, Burge CB, Bartel DP (2005) Conserved seed pairing, often flanked by adenosines, indicates that thousands of human genes are microRNA targets. Cell 120: 15–20
  • Lewis BP, Shih IH, Jones-Rhoades MW, Bartel DP, Burge CB (2003) Prediction of mammalian microRNA targets. Cell 115: 787–798
  • Lim LP, Glasner ME, Yekta S, Burge CB, Bartel DP (2003) Vertebrate miRNA genes. Science 299: 1540
  • Lim LP, Lau NC, Garrett-Engele P, Grimson A, Schelter JM, Castle J, Bartel DP, Linsley PS, Johnson JM (2005) Microarray analysis shows that some microRNAs downregulate large numbers of target mRNAs. Nature 433: 769–773
  • Liu J, Rivas FV, Wohlschlegel J, Yates JR III, Parker R, Hannon GJ (2005a) A role for the P-body component GW182 in microRNA function. Nat Cell Biol 7: 1261–1266
  • Liu J, Valencia-Sanchez MA, Hannon GJ, Parker R (2005b) MicroRNA-dependent localization of targeted mRNAs to mammalian P-bodies. Nat Cell Biol 7: 719–723
  • Longman D, Plasterk RHA, Johnstone IL, Caceres JF (2007) Mechanistic insights and identification of two novel factors in the C. elegans NMD pathway. Genes Dev 21: 1075–1085
  • Losson R, Lacroute F (1979) Interference of nonsense mutations with eukaryotic messenger RNA stability. Proc Natl Acad Sci USA 76: 5134–5137
  • Lykke-Andersen J, Shu MD, Steitz JA (2001) Communication of the position of exon–exon junctions to the mRNA surveillance machinery by the protein RNPS1. Science 293: 1836–1839
  • Mangus DA, Evans MC, Jacobson A (2003) Poly(A)-binding proteins: multifunctional scaffolds for the post-transcriptional control of gene expression. Genome Biol 4: 223
  • Maquat LE (2004) Nonsense-mediated mRNA decay: splicing, translation and mRNP dynamics. Nat Rev Mol Cell Biol 5: 89–99
  • Maquat LE, Kinniburgh AJ, Rachmilewitz EA, Ross J (1981) Unstable β-globin mRNA in mRNA-deficient β0 thalassemia. Cell 27: 543–553
  • Maroney PA, Yu Y, Fisher J, Nilsen TW (2006) Evidence that microRNAs are associated with translating messenger RNAs in human cells. Nat Struct Mol Biol 13: 1102–1107
  • Mathonnet G, Fabian MR, Svitkin YV, Parsyan A, Huck L, Murata T, Biffo S, Merrick WC, Darzynkiewicz E, Pillai RS, Filipowicz W, Duchaine TF, Sonenberg N (2007) MicroRNA inhibition of translation initiation in vitro by targeting the cap-binding complex eIF4F. Science 317: 1764–1767
  • McCaughan KK, Brown CM, Dalphin ME, Berry MJ, Tate WP (1995) Translational termination efficiency in mammals is influenced by the base following the stop codon. Proc Natl Acad Sci USA 92: 5431–5435
  • Meaux S, van Hoof A, Baker KE (2008) Nonsense-mediated mRNA decay in yeast does not require PAB1 or a poly(A) tail. Mol Cell (in press)
  • Meister G, Landthaler M, Peters L, Chen PY, Urlaub H, Luhrmann R, Tuschl T (2005) Identification of novel Argonaute-associated proteins. Curr Biol 15: 2149–2155
  • Mendell JT, ap Rhys CM, Dietz HC (2002) Separable roles for rent1/hUpf1 in altered splicing and decay of nonsense transcripts. Science 298: 419–422
  • Mishima Y, Giraldez AJ, Takeda Y, Fujiwara T, Sakamoto H, Schier AF, Inoue K (2006) Differential regulation of germline mRNAs in soma and germ cells by zebrafish miR-430. Curr Biol 16: 2135–2142
  • Mitchell P, Tollervey D (2003) An NMD pathway in yeast involving accelerated deadenylation and exosome-mediated 3′ → 5′ degradation. Mol Cell 11: 1405–1413
  • Moore MJ (2005) From birth to death: the complex lives of eukaryotic mRNAs. Science 309: 1514–1518
  • Muhlrad D, Parker R (1994) Premature translational termination triggers mRNA decapping. Nature 370: 578–581
  • Muhlrad D, Parker R (1999) Recognition of yeast mRNAs as 'nonsense containing' leads to both inhibition of mRNA translation and mRNA degradation: implications for the control of mRNA decapping. Mol Biol Cell 10: 3971–3978
  • Nagy E, Maquat LE (1998) A rule for termination-codon position within intron-containing genes: when nonsense affects RNA abundance. Trends Biochem Sci 23: 198–199
  • Nelson PT, Hatzigeorgiou AG, Mourelatos Z (2004) miRNP:mRNA association in polyribosomes in a human neuronal cell line. RNA 10: 387–394
  • Neu-Yilik G, Gehring NH, Thermann R, Frede U, Hentze MW, Kulozik AE (2001) Splicing and 3′ end formation in the definition of nonsense-mediated decay-competent human β-globin mRNPs. EMBO J 20: 532–540
  • Nilsen TW (2007) Mechanisms of microRNA-mediated gene regulation in animal cells. Trends Genet 23: 243–249
  • Nottrott S, Simard MJ, Richter JD (2006) Human let-7a miRNA blocks protein production on actively translating polyribosomes. Nat Struct Mol Biol 13: 1108–1114
  • Olsen PH, Ambros V (1999) The lin-4 regulatory RNA controls developmental timing in Caenorhabditis elegans by blocking LIN-14 protein synthesis after the initiation of translation. Dev Biol 216: 671–680
  • Page MF, Carr B, Anders KR, Grimson A, Anderson P (1999) SMG-2 is a phosphorylated protein required for mRNA surveillance in Caenorhabditis elegans and related to Upf1p of yeast. Mol Cell Biol 19: 5943–5951
  • Palacios IM, Gatfield D, St Johnston D, Izaurralde E (2004) An eIF4AIII-containing complex required for mRNA localization and nonsense-mediated mRNA decay. Nature 427: 753–757
  • Parker R, Sheth U (2007) P bodies and the control of mRNA translation and degradation. Mol Cell25: 635–646 [PubMed] [Google Scholar]
  • Peltz SW, Brown AH, Jacobson A (1993) mRNA destabilization triggered by premature translational termination depends on at least three cis-acting sequence elements and one trans-acting factor. Genes Dev7: 1737–1754 [PubMed] [Google Scholar]
  • Pillai RS, Artus CG, Filipowicz W (2004) Tethering of human Ago proteins to mRNA mimics the miRNA-mediated repression of protein synthesis. RNA10: 1518–1525 [PMC free article] [PubMed] [Google Scholar]
  • Pillai RS, Bhattacharyya SN, Filipowicz W (2007) Repression of protein sysnthesis by miRNAs: how many mechanisms?Trends Cell Biol17: 118–126 [PubMed] [Google Scholar]
  • Rajavel KS, Neufeld EF (2001) Nonsense-mediated decay of human HEXA mRNA. Mol Cell Biol21: 5512–5519 [PMC free article] [PubMed] [Google Scholar]
  • Rajewsky N (2006) microRNA target predictions in animals. Nat Genet38(Suppl): S8–S13 [PubMed] [Google Scholar]
  • Rand TA, Petersen S, Du F, Wang X (2005) Argonaute2 cleaves the anti-guide strand of siRNA during RISC activation. Cell123

