Language-independent Model for Machine Translation Evaluation with Reinforced Factors

Lifeng Han, Derek F. Wong, Lidia S. Chao, Liangye He, Yi Lu, Junwen Xing, Xiaodong Zeng

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

Conventional machine translation evaluation metrics tend to perform well on certain language pairs but poorly on others. Furthermore, some evaluation metrics work only on particular language pairs and are therefore not language-independent. Finally, ignoring linguistic information usually leads to low correlation with human judgments, while relying on too many linguistic features or external resources makes a metric complicated and difficult to replicate. To address these problems, this work proposes a novel language-independent evaluation metric with enhanced factors and a modest amount of optional linguistic information (part of speech, n-grams). To make the metric perform well on different language pairs, extensive factors are designed to reflect translation quality, and the assigned parameter weights are tunable according to the particular characteristics of the language pair of interest. Experiments show that this novel evaluation metric yields better performance than several classic evaluation metrics (including BLEU, TER, and METEOR) and two state-of-the-art ones, including ROSE.
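The abstract describes the metric as a combination of quality factors whose parameter weights are retuned per language pair. The following minimal Python sketch illustrates that general idea with a weighted harmonic mean of factor scores; the factor names, example weights, and the exact form of the length penalty are illustrative assumptions, not the paper's definitive formulation.

```python
import math

# Hypothetical factor and weight names for illustration only; the paper's
# actual factors and their exact formulas differ in detail.

def length_penalty(cand_len: int, ref_len: int) -> float:
    """Penalize candidates shorter or longer than the reference
    (assumes both lengths are positive)."""
    if cand_len == ref_len:
        return 1.0
    shorter, longer = min(cand_len, ref_len), max(cand_len, ref_len)
    return math.exp(1.0 - longer / shorter)

def weighted_harmonic_mean(factors: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Combine per-factor scores with tunable weights; retuning the
    weights per language pair is what adapts the metric across languages."""
    total = sum(weights[name] for name in factors)
    return total / sum(weights[name] / factors[name] for name in factors)

if __name__ == "__main__":
    factors = {
        "length_penalty": length_penalty(cand_len=10, ref_len=12),
        "position_penalty": 0.85,   # e.g. a word-order difference penalty
        "precision_recall": 0.78,   # e.g. combined n-gram precision/recall
    }
    # Weights tuned for a hypothetical language pair.
    weights = {"length_penalty": 2.0,
               "position_penalty": 1.0,
               "precision_recall": 3.0}
    print(f"segment score: {weighted_harmonic_mean(factors, weights):.4f}")
```

Because the combination is a weighted harmonic mean, a low score on any single factor drags the overall score down, and shifting weight toward the factors that matter most for a given language pair is the tuning step the abstract refers to.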
Original language: English
Title of host publication: Proceedings of the XIV Machine Translation Summit
Subtitle of host publication: Nice, September 2–6, 2013
Pages: 215-222
Number of pages: 8
Publication status: Published - 2013
