ABSTRACT

Machine translation output varies in quality and often requires substantial post-editing before it can be used. Human evaluation of machine translation is thorough, but it is time-consuming and therefore expensive, and it is further complicated by the fact that a source sentence may admit several valid translations. Several methods have consequently been proposed to reduce the cost of evaluating machine translation systems. Quality estimation provides a quality indicator that tells the reader whether a given translation is good enough for publishing. This chapter discusses evaluation based on Translation Error Rate (TER) and Human-targeted Translation Error Rate (HTER). TER is an error metric for machine translation that estimates the amount of post-editing required to change a system output into one of the reference translations. HTER employs human annotators to create a new targeted reference that preserves the meaning of the original references, so that the required number of edits can be measured more accurately.
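
As a point of reference (not part of the original abstract), TER is commonly defined as the minimum number of edits (insertions, deletions, substitutions, and shifts) needed to turn the system output into a reference, normalized by the average reference length; HTER applies the same computation against the human-targeted reference:

\[
\mathrm{TER} = \frac{\#\,\text{edits}}{\text{average number of reference words}}
\]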