HOW FAR DO WE AGREE ON THE QUALITY OF TRANSLATION?
Vol. 1, Issue 1, 2015, pp. 18-31
DOI: https://doi.org/10.33919/esnbu.15.1.2
Web of Science: 000449158700002
Author:
Maria Kunilovskaya https://orcid.org/0000-0002-1473-4684
Affiliation:
Tyumen State University, Tyumen, Russia
Abstract
The article aims to describe the inter-rater reliability of translation quality assessment (TQA) in translator training, calculated as a measure of raters' agreement either on the number of points awarded to each translation under a holistic rating scale or on the types and number of translation mistakes marked by raters in the same translations. We analyze three samples of student translations, assessed by several panels of raters who used different methods of assessment, and draw conclusions about the statistical reliability of real-life TQA results in general and about objective trends in this essentially subjective activity in particular. We also try to establish which data from error-analysis-based TQA are the more objective, and suggest an approach to ranking error-marked translations that can be used for subsequent relative grading in translator training.
Keywords: TQA, translation mistakes, inter-rater reliability, error-based evaluation, error-annotated corpus, RusLTC
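For readers who want to compute comparable agreement figures on their own rating data, the following is a minimal Python sketch of Krippendorff's alpha for interval-level data, the reliability coefficient described in Krippendorff (2011) in the reference list, applied to holistic scores of the kind the abstract discusses. The function name, the handling of missing ratings, and the sample scores are illustrative assumptions, not code or data from the article itself.

```python
from itertools import permutations

def krippendorff_alpha_interval(data):
    """Krippendorff's alpha for interval-level ratings.

    data: one list per translation (unit), each containing the raters'
    scores for that unit, with None marking a missing score.
    Returns alpha, where 1.0 means perfect agreement and values near 0
    mean agreement no better than chance.
    """
    # Keep only units with at least two pairable (non-missing) scores.
    units = [[v for v in unit if v is not None] for unit in data]
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)  # total number of pairable scores
    if n < 2:
        raise ValueError("not enough pairable scores")

    def delta(a, b):  # squared difference: the interval distance metric
        return (a - b) ** 2

    # Observed disagreement: ordered pairs of scores within each unit,
    # each unit weighted by 1 / (m_u - 1).
    d_o = sum(
        delta(a, b) / (len(u) - 1)
        for u in units
        for a, b in permutations(u, 2)
    ) / n

    # Expected disagreement: ordered pairs across all pooled scores.
    pooled = [v for u in units for v in u]
    d_e = sum(delta(a, b) for a, b in permutations(pooled, 2)) / (n * (n - 1))

    # If every score is identical, disagreement is undefined; treat as 1.0.
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Illustrative data: three raters scoring four translations on a
# ten-point holistic scale; None marks a rating that was not given.
scores = [
    [8, 7, 8],
    [5, 6, None],
    [9, 9, 10],
    [3, 4, 3],
]
print(round(krippendorff_alpha_interval(scores), 3))  # prints 0.942
```

Values of alpha near 1 indicate strong agreement; Krippendorff (2004) conventionally treats α ≥ .800 as reliable and .667 ≤ α < .800 as acceptable only for tentative conclusions.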
Article history:
Submitted: 10 April 2014;
Accepted: 21 December 2014;
Published: 1 February 2015
Citation (APA):
Kunilovskaya, M. (2015). How far do we agree on the quality of translation? English Studies at NBU, 1(1), 18-31. https://doi.org/10.33919/esnbu.15.1.2
Copyright © 2015 Maria Kunilovskaya
This open access article is published and distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and source are credited. If you want to use the work commercially, you must first get the author's permission.
References
Artstein, R. & Poesio, M. (2008). Inter-Coder Agreement for Computational Linguistics. Computational Linguistics, 34(4), 555–596. https://doi.org/10.1162/coli.07-034-R2
Freelon, D. G. (2010). ReCal: Intercoder Reliability Calculation as a Web Service. International Journal of Internet Science, 5(1), 20–33. http://www.ijis.net/ijis5_1/ijis5_1_freelon.pdf
Kelly, D. (2005). A Handbook for Translator Trainers. A Guide to Reflective Practice. St. Jerome Publishing.
Knyazheva, E. & Pirko, E. (2013). Otsenka kachestva perevoda v rusle metodologii sistemnogo analiza [TQA and Systems Analysis Methodology]. Journal of Voronezh State University. Linguistics and Intercultural Communication Series, 1, 145-151. http://www.vestnik.vsu.ru/pdf/lingvo/2013/01/2013-01-25.pdf
Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. Sage.
Krippendorff, K. (2011). Computing Krippendorff's Alpha-Reliability. http://repository.upenn.edu/asc_papers/43/
Neubert, A. (2000). Competence in Language, in Languages, and in Translation. In C. Schäffner & B. Adab (Eds.), Developing Translation Competence (pp. 3–17). John Benjamins. https://doi.org/10.1075/btl.38
Strijbos, J.-W. & Stahl, G. (2007). Methodological Issues in Developing a Multidimensional Coding Procedure for Small-group Chat Communication. Learning and Instruction, 17(4), 394-404. https://doi.org/10.1016/j.learninstruc.2007.03.005
Waddington, Ch. (2001). Should Translations Be Assessed Holistically or Through Error Analysis? Hermes, 26, 15-37. https://doi.org/10.7146/hjlcb.v14i26.25637
Williams, M. (2009). Translation Quality Assessment. Mutatis Mutandis, 2(1), 3–23. https://dialnet.unirioja.es/descarga/articulo/5012668.pdf
Zwilling, M. (2009). O kriteriiakh otsenki perevoda [On Translation Quality Assessment Criteria]. In Zwilling, M. (Ed.), O perevode i perevodtchikakh [On Translation and Translators] (pp. 56–63). Vostotchnaia kniga.