Huszarne Prikler Renata PHD 2020
Table of contents
3.4.1 Participants .............................................................................................................. 52
3.4.2 Data collection instrument ....................................................................................... 52
3.4.3 Procedures................................................................................................................ 52
3.4.2 Results and discussion ............................................................................................. 53
3.5 Phase 2: Follow-up interviews on student essays .......................................................... 54
3.5.1 Participants .............................................................................................................. 54
3.5.2 Data collection instrument ....................................................................................... 54
3.5.3 Procedures................................................................................................................ 55
3.5.4 Results and discussion ............................................................................................. 55
3.6 Summary of findings ...................................................................................................... 63
3.7 Phase 3: Student questionnaire ....................................................................................... 64
3.7.1 Constructing the questionnaire ................................................................................ 64
3.7.2 Piloting..................................................................................................................... 64
3.7.3 Participants .............................................................................................................. 67
3.7.4 Procedures................................................................................................................ 67
3.7.5 Results and discussion ............................................................................................. 67
3.7.6 Summary .................................................................................................................. 92
3.8 Conclusions .................................................................................................................... 94
Part II........................................................................................................................................ 96
Chapter 4 Autonomy in language teaching and learning ......................................................... 96
4.1 Introduction .................................................................................................................... 96
4.2 Learner autonomy........................................................................................................... 97
4.2.1 Definitions of learner autonomy .............................................................................. 97
4.2.2 Models of learner autonomy .................................................................................. 100
4.2.3 Learner-centered syllabus ...................................................................................... 103
4.2.4 Relationship between motivation and learner autonomy....................................... 104
4.3 Teacher autonomy ........................................................................................................ 106
4.3.1 Definitions of teacher autonomy ........................................................................... 106
4.4. Autonomy in the classroom: the interdependence of learner and teacher autonomy .. 109
4.5 Autonomy in translation ............................................................................................... 110
4.6 Study 2: Learner autonomy in Translation Studies BA classes ................................... 113
4.6.1 Research questions................................................................................................. 113
4.6.2 Participants ............................................................................................................ 113
4.6.3 Data collection instruments and procedures........................................................... 113
4.6.3 Results and discussion of questionnaire data ......................................................... 114
4.6.4 Teacher and learner autonomy in the syllabi ......................................................... 121
4.7 Summary ....................................................................................................................... 127
Part III ..................................................................................................................................... 129
Chapter 5 Assessment in Translation Studies BA classes ...................................................... 129
5.1 Introduction ................................................................................................................... 129
5.2 Translation assessment in translation training programs in Hungary ........................... 143
5.3 Translation assessment at the Institute of English Studies at the University of Pécs ... 149
5.4 Study 3: An inquiry into how the ‘old’ UP scale of assessment worked ...................... 151
5.4.1 Research questions ................................................................................................. 151
5.4.2 Participants ............................................................................................................. 152
5.4.3 Data collection instruments and procedures........................................................... 152
5.4.4 Results and discussion: the translations ................................................................. 153
5.4.5 Results and discussion: the teacher-rater interviews .............................................. 165
5.4.6 Summary ................................................................................................................ 175
Chapter 6 Working towards a new assessment tool ............................................................... 178
6.1 Introduction ................................................................................................................... 178
6.2 Study 4: Lexical characteristics and readability of the translated texts chosen for
assessment ........................................................................................................................... 179
6.2.1 Research questions ................................................................................................. 179
6.2.2 Participants ............................................................................................................. 180
6.2.3 Data collection instruments and procedures........................................................... 180
6.2.4 Results and discussion............................................................................................ 181
6.2.5 Summary ................................................................................................................ 187
6.3 Study 5: Developing a new assessment tool for translations (PIER) ........................... 187
6.3.1 Research questions ................................................................................................. 188
6.3.2 Participants ............................................................................................................. 189
6.3.3 Procedures .............................................................................................................. 189
6.3.4 Results and discussion............................................................................................ 194
6.3.5 Raters’ opinions...................................................................................................... 199
6.3 Summary ....................................................................................................................... 202
Chapter 7 Conclusions ............................................................................................................ 205
7.1 Summary of findings .................................................................................................... 207
7.1.1 Motivation in Translation Studies BA classes ....................................................... 208
7.1.2 Autonomy in Translation Studies BA classes ....................................................... 209
7.1.3 Assessment in Translation Studies BA classes ...................................................... 211
7.2 Limitations of the research ........................................................................................... 212
7.3 Pedagogical implications and suggestions for further research ................................... 213
References .............................................................................................................................. 215
Appendices ............................................................................................................................. 231
Appendix A: BA students’ motivation in translation classes. Planned interview questions
and transcriptions ............................................................................................................... 231
Appendix B: Motivation and autonomy in translation classes. Student Questionnaire ..... 264
Appendix C: The institutional background of translator training in Hungary: universities ........ 271
Appendix D: The assessment scale used for assessing exam translations. Teacher
interviews ........................................................................................................................... 276
Appendix E: Rater questionnaire on the two (PIER vs. UP) assessment scales ................ 289
Abstract
Acknowledgements
There are numerous people who have supported me throughout my years of study, which culminated in the final, exciting adventure of writing this dissertation. Without them I would not
have been able to reach this point.
First, I express my gratitude to my consultant, Lehmann Magdolna, for her insightful
guidance throughout my doctoral research. Second, I would like to thank all my teachers in
the doctoral programme. I have learnt a lot and enjoyed working with them. I thank Nikolov
Marianne for having faith in me and being generous in sharing her knowledge and experience
over the years. Third, I am grateful to my fellow students in the doctoral programme whose
knowledge and attitudes inspired me to study harder, to participate more actively in the
classes, and who set me an example of how to achieve the ultimate goal.
I am deeply indebted to all the teacher and student participants for their invaluable
contribution to the different phases of my research, including the tedious and exhausting
assessment process, as well as to those who did not participate in the studies but helped me in
a number of ways.
I also thank my colleagues at Zipernowsky Károly Technical School for substituting for me when I had to attend my doctoral seminars, and my headmaster for granting me the days off when I had to be at the university or at a conference. I owe special thanks to Pelcsi for giving me
all those lifts to the university on Friday mornings.
Over the years of my studies at the University of Pécs and its predecessor the Teacher
Training College, I enjoyed the encouragement of several teachers; however, Martsa Sándor
stands out as a key figure who shaped my professional development. I will always remember
his humanity with gratitude.
Finally, I would like to thank my family for their unconditional love and support over
the years, in particular my husband, Zoli, for encouraging me to start my doctoral studies and
for standing by me in difficult times. I am grateful to my two sons, Kristóf and Gergő, for
their late-night sessions helping me with statistics, tables and figures, and for being there when I
needed them the most.
List of tables
50 Raw scores and grades given by two raters for 16 test-takers’ HU-EN translation
test
51 The SPSS frequency statistics for inter-rater reliability in the judgment of major
errors (H) in HU-EN translations
52 The SPSS frequency statistics for inter-rater reliability in the judgment of minor
errors (h) in HU-EN translations
53 Translations from Hungarian into English (HU-EN): Examples for differences in
R1 and R2’s coding of errors
54 The SPSS frequency statistics for inter-rater reliability of the grades given by the
two raters for HU-EN translations
55 Grades given by the raters for the two components and the final grades
56 Translations from English into Hungarian (EN – HU): Examples for inconsistent
error coding by the same rater
57 Translations from Hungarian into English (HU – EN): Examples for inconsistent
error coding by the same rater
58 The raters’ opinion of the assessment system applied in the translation program
59 The lexical frequency profile of the translated texts
60 The Flesch Reading Ease scores
61 Flesch-Kincaid Grade Level
62 The value of FRE and FKRL indices compared to CEFR levels
63 The Coh-Metrix profile of the 14 translated texts
64 The list of preselected items and their translations for PIER evaluation
65 Raw scores given according to PIER by four expert raters for 14 test-takers’ EN-
HU translation tests
66 Inter-item correlation matrix of the four raters using PIER
67 The Intraclass Correlation Coefficient of the assessment by four raters using PIER
68 Number of errors identified using the UP scale by four expert raters in 14 EN-HU
translation tests
69 Inter-item correlation matrix of the four raters concerning major (H) mistakes
using UP scale
70 The Intraclass Correlation Coefficient of the assessment by four raters (UP scale)
71 Inter-item correlation matrix of the four raters concerning minor (h) errors using
the UP scale
72 Strengths and weaknesses of the two scales
List of figures
List of abbreviations and acronyms
BA Bachelor of Arts
cr credit
EN English
EU European Union
HU Hungarian
ID Individual Differences
IT Information Technology
L1 First language
L2 Second language
MA Master of Arts
MT Mother tongue
PE Post-editing
PM Project Management
ST Source text
TL Target language
TS Translation Studies
TT Target text
Introduction
It is increasingly recognized that formal training in translation schools – including universities – is the most effective way to teach and learn translation
skills and to test translators’ abilities to provide the market with reliable professionals (Kiraly,
2000; Nadstoga, 2008; Pym, 2012). However, looking at the problem from the viewpoint of
translation students, the main questions are as follows: What motivates prospective translators
to learn all the painstaking nuances of a profession they do not have enough information about? How autonomous are they when, having finished their studies, they start to work in the chosen field? In the case of institutional training, exploring how their translations are evaluated is of primary importance, as translators will be judged on the translation market
by the quality of their translations.
The context of the study I conducted to answer these questions was the University of
Pécs, which offers a translation studies program for BA students. However, access to participants and data, which had seemed easy, did not prove to be easy at all. My plan to implement a study with the participation of second- and third-year students in the academic year 2016-2017 had to be modified as I proceeded, mostly because of the students’ unwillingness to become research participants. Due to the low number of volunteers, I repeated the data collection procedure in the next academic year (2017-2018). Nevertheless, the data I could collect are representative of the University of Pécs programme and offer an insight into
the nature of translator training at BA level, highlighting its values and shortcomings.
My aim was to explore participants’ motivation and autonomy, and how their
translations are assessed in the programme, with a special focus on how students’ motivation
and autonomy interact with assessment practices used in the translation courses. In addition, I
designed, piloted, and validated a new assessment tool in order to make the assessment of
translations more valid and reliable than the current practice in the programme.
Unfortunately, this phase was hindered by a pandemic which turned the world known to us upside down: due to Covid-19, reaching volunteers willing to participate was nearly impossible, a fact
which made the research period much longer than expected.
As the present dissertation addresses problems which cannot be researched by using
exclusively quantitative or qualitative measures, the findings are based on the use of a mixed-methods approach. The following section outlines the structure of the dissertation.
An overview of the dissertation
The dissertation addresses three different but interrelated fields of language pedagogy and their intersections with the discipline of Translation Studies. The text is organized in three main parts, each comprising an overview of the relevant research literature and an empirical study, as illustrated in Table 1.
Part I is dedicated to exploring learner motivation in Translation Studies (TS). Chapter
1 provides an overview of the discipline in focus. It gives an introduction to Translation
Studies, explains what translation students are expected to know, where they can acquire the
necessary knowledge, and how satisfied they are with their studies.
Chapter 2 first focuses on motivation in general by explaining the construct, comparing and contrasting different definitions of the term. Then the narrower field of translation is discussed: the role of motivation in TS classes is investigated, motivating and demotivating factors are identified, and instruments for measuring this seemingly unmeasurable construct are introduced: questionnaires, interviews and documents. The section on questionnaires describes how this data elicitation instrument can be constructed and used. The application of interviews and documents, such as student essays and course syllabi, as data collection instruments is discussed in a similar fashion.
Chapter 3 discusses the empirical study conducted in order to collect data on student
motivation in the Translation Studies BA classes. To provide the background, sections 3.1 and
3.2 offer an outline of translator training programs in Hungary and at the University of Pécs,
where the research was conducted. Section 3.3 presents the rationale for the research
methodology, along with the research questions for Study 1. It also describes the context, the
participants, the data collection instruments, as well as the procedures and phases of data
collection, and, finally, the analysis of the dataset. For an overview of the main research
questions see Table 1. The three phases of this study, each conducted with different data
elicitation instruments (student essays, follow-up interviews, student questionnaire), are
discussed in individual sections.
Chapter 4, which opens Part II, deals with autonomy, first in L2 acquisition in general. The section
on learner autonomy (4.2) offers a range of definitions and an overview of milestone models.
It also examines how learner autonomy is represented in the course syllabi of the translation
studies programme and how motivation and learner autonomy are related. Section 4.3 is about
teacher autonomy, also comparing definitions, and discussing how it can be traced in the
course syllabi, followed by a discussion of how autonomy works in the L2 classroom, and in translation studies seminars in particular. The interdependence of learner and teacher autonomy, essential in creating a student-centered, autonomous learning environment, is discussed in section 4.4.
The empirical part of Chapter 4, on the degree of autonomy displayed by BA students specializing in translation, is preceded by a short literature review of autonomy in translation (4.5). This small-scale study seeks answers to two research questions, as shown in Table 1, using a questionnaire as the data collection instrument.
Part III, which puts assessment in Translation Studies classes in focus, comprises three empirical studies with the goal of exploring the assessment practices in Hungary, especially at the University of Pécs, and developing an assessment instrument which includes the best features of existing grading scales and, at the same time, eliminates their drawbacks.
To achieve this goal, the assessment scales of numerous institutions were examined and
compared, including the one currently used to evaluate translations at the University of Pécs.
The small-scale study discussed in Chapter 5 examined how this scale worked by analyzing
inter-rater and intra-rater reliability. The findings indicated the necessity of a new, more valid
and more reliable instrument: a norm-referenced method which is independent of a priori judgments about the source text, is based on the practice of using preselected items, and presupposes a dichotomous treatment of text segments, whereby a translated segment is either acceptable (correct) or not.
Chapter 6 focuses on the steps of working toward a new assessment tool named PIER (PIE Revised), an adaptation of Kockaert and Segers’ (2017; Van Egdom et al., 2019) norm-referenced assessment method called Preselected Items Evaluation (PIE). Section 6.2 contains a preliminary study on the lexical characteristics of 14 student translations. The aim of this small-scale study was to establish the lexical quality and readability of the translated texts before they were assessed by expert raters, in order to provide pre-assessment information for the raters concerning what to expect and what to focus on. Section 6.3 reports the details of developing and piloting the new tool, PIER, for assessing translations. In a comparison with the ‘old’ UP scale, its advantages and disadvantages are identified based on assessment data and rater opinions.
Chapter 7 draws the final conclusions about what was analyzed and discussed in the
three main parts of the dissertation; findings concerning each study are summarized and
integrated. Finally, the last section outlines possible limitations of the research, the
pedagogical implications of the studies, and suggestions for further research.
Table 1
The overview of the main research questions
Research questions | Data elicitation instruments | Participants | Method of analysis

How do they want to use their translation skills after graduating? | Student interviews; Student questionnaires | 3 BA students; 24 BA students | Content analysis; Statistical analysis

How does teacher and learner autonomy affect student motivation? | Student questionnaire; Syllabi | 24 BA students; 4 teachers | Content analysis; Statistical analysis

How does the rating scale in use work in terms of inter-rater reliability? | Rated exam translations (2016); Rating scale used at UP; Teacher interviews | 16 BA students; 4 teacher-raters | Statistical analysis; Content analysis

How consistent were the raters in their assessment? | Rated exam translations (2016); Rating scale used at UP; Teacher interviews | 16 BA students; five teacher-raters | Statistical analysis; Content analysis

How do the raters evaluate the assessment instrument system they apply to assess translation students’ work? | Teacher interviews | 4 teacher-raters; head of the translation specialization program | Content analysis

Study 4: Lexical characteristics and readability of the translated texts chosen for assessment

What are the most important lexical characteristics of BA students’ HU – EN translations? | Unrated translations (2020); Compleat LexTutor; Coh-Metrix | 14 BA students | Statistical analysis

How many preselected items are necessary to create a norm-referenced, sufficiently discriminating translation assessment tool? | The source text of the translation test | 5 expert raters; 1 financial expert | Content analysis

How does PIER perform in use? – How does it discriminate different qualities in translation? – In which ways does it improve inter-rater reliability? – Does it help raters to become more consistent in their assessment? | Rated translations (2020); The new assessment tool (PIER); SPSS | 14 BA students (the translators); 5 expert raters | Statistical analysis

How do the raters evaluate the new tool (PIER) compared to the ‘old’ UP scale? | The new assessment tool (PIER); The ‘old’ UP scale; Rater questionnaire | 5 expert raters | Content analysis
Part I
Chapter 1
1.1 Introduction
Translation Studies is a subject that “focuses its professional attention in all forms of transfer
of written, spoken or signed texts in one language (the source language) into texts of related
meaning or effect in another (the target language)” (Laver & Mason, 2018, p. 146). Applied
Translation Studies is concerned with the training of translators, the policy and regulation of
their qualification, norms and working conditions. As an independent academic discipline, it
grew to recognition in the seventies and eighties, and, because its perspective has widened to include cultural, historical and ideological aspects, it has expanded into an interdisciplinary subject since then (Laver & Mason, 2018).
Many people think that professional translation is only a matter of languages; that
anyone who learnt translation at school or who is somewhat fluent in a foreign language can
become a translator. Weber (1989, p. 6) calls it an “image problem”, emphasizing that
knowing languages is essential but insufficient. There are people who speak two languages at
a high level, yet have great difficulty in transferring information from one language to the
other (Gouadec, 2007; Nida, 1981). If we add that translating means converting a text in one
language into “as nearly as possible a functionally or linguistically equivalent text in another”
(Laver & Mason, 2018, p. 141), we immediately see that it is a highly complex process, as it
is reflected in the many definitions throughout the related literature (Table 2). The
“equivalence between source and target texts embraces both semantic and pragmatic meaning,
and style” (Laver & Mason, 2018, p. 142), and it is also the core concept concerning
translation quality (House, 2015). However, the degree of resemblance may vary according to
the purpose of the translation and the intended audience.
Table 2
Definitions of “translation” by different authors
Author Definition
Catford (1965, p. 20) the replacement of textual material in one language by equivalent
textual material in another language
Nida (1984, p. 83) consists of reproducing in the receptor language the closest natural
equivalent of the source language message, first in terms of meaning
and secondly in terms of style
Reiss (1989, p. 161) a bilingual mediated process of communication which ordinarily aims
at the production of a TL text that is functionally equivalent to a SL
text
Bell (1991, p. 8) involves the transfer of meaning from a text in one language into a
text in another language
Spivak (1992, pp. 398-400 ) the most intimate act of reading… when the translator surrenders to
the text and responds to the special call of the text
Robinson (1997, p. 74) an intelligent activity, a constant learning cycle involving complex
processes of conscious and unconscious learning, …requiring creative
problem-solving in novel, textual, social, and cultural conditions, in
conscious analytical ways
Gouadec (2007, p. 21) importing or exporting ideas, concepts, rationales, thought processes,
discourse structures, services, myth across cultures
Baker (2011, p. 5) a process which is intended to find meaning equivalence in the target
text
Levý (2011, p. 23) a process of communication in which translators decode the message
contained in the text of the original author and encode it into their
own language
Robinson (2012, p. 6) different things for different groups of people. For people who are not
translators, it is primarily a text; for people who are, it is primarily an
activity that aims at the production of a text
Laver and Mason (2018, p. 142) the process and the product of all forms of transfer of written, spoken
or signed texts originating in one language (the source language) into
texts that resemble them in some way in another
So professional translators, besides mastering at least two languages (the mother tongue and the target language), are highly skilled experts. Beyond absolute linguistic proficiency, they have to possess a perfect, or at least very good, knowledge of the relevant cultural, technical, legal and commercial background, as well as a full understanding of the subjects involved, not to mention the sophisticated IT tools and software they must be able to use (Gile, 2009; Gouadec, 2007; Nida, 1981; Risku, Dickinson, & Pircher, 2010; Robinson, 2012). Linguistic creativity is also an important feature, especially in the case of literary translations (Eco, 2004; Kenny, 2014). Although some researchers of the field may think differently (Fazekas & Sárosi-Márdirosz, 2015), with a few exceptions, translators are not born. Translation skills are
learned, either through training or practice, even if in most countries anyone can become a
translator if they have some prior inclination and qualities for the job (Gouadec, 2007; Limon,
2010).
In general, people tend to come into the profession from two directions: (1) from the
so-called language sector, and (2) from the world of industry, including experts from the most varied fields, such as commerce, law, mathematics, engineering, medicine, etc. Whatever
their background may be, good translators must share certain qualities: the perfect mastery of
their two languages, multi-cultural competence, good familiarity with the domains they
specialize in through their education and a deep knowledge of what translation requires (Gile,
2009; Gouadec, 2007; Risku, Dickinson, & Pircher, 2010; Robinson, 2012). In this context,
the debate on whether “translators are born or made” (Nida, 1981) seems pointless. While no
one denies that certain natural qualities can be advantageous in high-quality, especially
literary translation, it would be difficult for natural talents to unfold without proper training
(Gile, 2009; 2010).
It is increasingly recognized that formal training in translation schools – including
universities – is the most effective way to teach skills and test abilities to provide the market
with reliable professionals (Kiraly, 2000; Nadstoga, 2008; Pym, 2012). As a result, the
number of translator training programs has been spectacularly increasing over the past two to
three decades in many parts of the world; the study of translation and translator training have
become an integral element of intercultural relations and the transmission of scientific and
technological knowledge (Gile, 2009; Koskinen, 2010). However, the diversity of situations,
needs and relevant variables and parameters is enormous; extended research is required to be
able to discriminate between excellent, good and sub-optimal methods on a solid basis
(Robinson, 2012). The most intriguing questions are how the existing programs can help
students to learn to translate and what the best ways are to help them retain the linguistic and
cultural knowledge and master the learning and translation skills they will need to become
effective and successful professionals. If translation is viewed as a special kind of writing,
then the relevant writing skills need to be learned, as well as the mastery of new genres and
styles of discourse in a target language (Limon, 2010).
When describing the prevailing pedagogical assumptions in translator training
programs, Robinson (2012) states that “there is no substitute for practical experience – to learn
how to translate one must translate, translate, translate – and there is no way to accelerate that
process without damaging students’ ability to detect errors in their own work” (p. 1). While
“faster” is advantageous in the professional world, as it may – provided that translators do
their work accurately – result in higher payment, in the pedagogical world it can easily
become a synonym for a careless, sloppy and superficial attitude, and can also foster bad
habits. The primary emphasis should be placed on a pedagogy that “balances conscious
analysis with subliminal discovery and assimilation” (Robinson, 2012, p. 2). The more
consciously, analytically, rationally and systematically the students are expected to process
the materials presented, the more slowly those materials are internalized. This is a good thing,
as professional translators often need to be able to slow down to examine a problematic word
or expression, and slow analysis can be a powerful source of new knowledge.
It is generally accepted that translator training can take many forms. According to Pym
(2012), the majority of professional translators in the world probably have had no training in
translation beyond experience, which should not be underestimated. At the next level, there
are short-term training courses, which offer their students the required competences. These
courses might involve new translation technologies, area-restricted technology or specific
communication skills. Finally, there are long-term training programs offered by different
institutions, increasingly by universities at BA or MA levels dating back to the second half of
the twentieth century. This relatively late development is the reason why most practicing
translators have probably not received formal training.
Most researchers agree that the training of professional translators is based essentially
on professional experience, intuition and negotiations between trainers on methods rather than on
research, whereas at language departments of universities translation is essentially part of
instruction (Gile, 2009). There is no use starting translator training until students read a source
language accurately, write in their target language effectively, and research their information
lacunae competently, said Rose (2008), arguing that translator training can be described as
elitist in some parts of the world, e.g., in the US. Training in literary translation simply
assumes that the students already have their skills in foreign and native language usage under
control and lets them proceed to develop their own resources as writers. These are formidable
prerequisites implying privilege.
When examining formal translator training, we can conclude that it can perform at
least two important functions. One is to help translator candidates enhance their performance
to the full realization of their potential. The other one is to develop their skills more rapidly
and effectively than through experience and self-instruction. Formal training programs can
also help raise professional standards by selecting the best candidates at admission and
standardizing working methods. Finally, they provide excellent observation opportunities for
research into translation (Gile, 2009).
The prestige of training highly depends on the relative status of translators’
educational qualifications (Pym, Grin, Sfreddo, & Chan, 2011); however, the specific legal
status of educational qualifications when translators are recruited or hired is an issue. It is
assumed to depend on who recruits or hires. As has been discussed, almost anyone can be
called a “translator”; the title is virtually unprotected. There are, however, some exceptions,
and different countries have different ways of protecting who can translate. In most European
countries, especially if recruitment happens by intergovernmental institutions or national
governments, translators need to be professionally qualified with a degree either in translation
and interpreting or in the languages concerned (Slovakia, Germany, Hungary, Spain, Greece),
although there are ways of getting around this requirement. However, within the European
Commission the translator-candidate “has to be successful in an open competition; must have
two foreign languages and a university degree, not necessarily in languages, meaning that
candidates do not require a degree or diploma in translation” (Pym et al., 2011, p. 14). In case
of recruitment by translation companies, three requirements are emphasized: (1) formal higher
education in translation (recognized degree); (2) an equivalent qualification in any other
subject plus a minimum of two years of documented experience in translating; or (3) at least
five years of documented professional experience in translating (Pym et al., 2011). Thus, a
degree in translation might be regarded as a rough equivalent of five years of professional
experience.
Therefore, for future translators the question often is whether to study or not to study.
If we think of translators as intercultural communication experts, of the extensive knowledge – both tacit and explicit – they require to carry out their roles and continually refer to throughout the translation process, and of the formal training opportunities offered by universities and other institutions, the answer is a definite yes.
1.2 What competences do prospective translators need?
Table 3
Types of required knowledge, its most important aspects and instruments (based on Risku et al., 2010)

Type of knowledge | Most important aspects | Instruments

Language, linguistic and translation knowledge | grammar, terminology, regional and professional conventions, register and writing conventions, translation methodologies and strategies, project management | glossaries, databases, style guides, terminology guidelines, handbooks

Country and cultural knowledge | economic, legal and regulatory requirements, conventional linguistic and cultural differences | databases, websites, literature, media

General and subject matter knowledge | reference material, journals, industry guidelines | databases, publications, knowledge portals, expert systems, knowledge and topic maps

Client and business knowledge | terminology, glossaries, contacts, reference material, stylistic guidelines, industry information | CRM (customer relationship management) and PM (project management) tools, style guidelines, terminologies, knowledge portals
The primary objective of university-level translation programs, which is the focus of the present dissertation, is to provide prospective translators with the types of knowledge and skills they will need to function as professional mediators between writers and readers of different languages, as Kiraly (1995) claimed. While translators are often seen primarily as language
professionals, their knowledge and skills extend far beyond their language pairs. Translation
requires extensive background knowledge of the source and target languages and cultures, as
well as the subject matter of the text, the purpose of the translation, the requirements of the
target audience and the translation methods and strategies suitable for different cultures and
communication situations.
Although long-term translation training offered at universities, as Pym (2012) stated,
is a relatively new form, other forms of extensive formal training programs existed in the
expansive empires, e. g., in the very sophisticated Chinese institutions for the translation of
Buddhist texts. European colonizations were also associated with some kind of translator
training, mainly at the points where civilizations met. A good example for that is the Oriental
Academy which was founded by Empress Maria Theresa in Vienna in 1754 (Pym, 2012, p.
314). When an institution was established, it usually happened with the basic aim to insure a
certain quality of teaching and performance. The world wars provided further needs for the
institualization of formal training; especially the Second World War when translation schools
and independent university-level institutions were founded in the bordering regions of the
“Third Reich”. Rooting in the needs of diplomacy, the Nuremberg trials definitely highlighted
the role of highly qualified translators and interpreters. By the 1960s, a string of specialized
institutions was developed all over Europe, and translator training was integrated into foreign
language institutes, which is still a model in some European countries. Since the 1990s
translator training has been centered at universities, or if the training is offered by another
institution, there is typically some kind of relationship with a university (Séguinot, 2008). The value of translator education has been increasingly presumed, at least in training circles, and the discussion has progressed to the issues of what and how it should be taught (Kearns,
2008).
According to Pym (2012; 2014), university translation courses are most often offered
as part of degree programs in foreign languages, imparting knowledge and skills which are
specific to translation. In most cases training involves a language department, which runs the
program with the participation of their teaching staff. Training can be divided into full long-
term training (BA followed by MA, adding up to five years) or MA-level programs (one, or
more typically two years). Nadstoga (2008) gave a good example of this in his paper when describing translator training as an important component of a teacher training program offered by the Institute of English at Adam Mickiewicz University, Poznań. It is a graduate program which offers an MA degree in English. Translation is offered in the third and fourth years. It is not considered an aim but a means of improving the students’ practical command of
English and, as was suggested by Kiraly (2000) and his followers, translation is taught by
applied linguists with considerable experience in translation. Generally, in the long-term
model, students are usually required to complete solid training courses in language and
communication skills and then specialize in their final years. MA level programs, on the other
hand, rather focus on translator skills.
Although university-level training has become one of the main ways to address
translator training, it has certain caveats: according to some researchers, it generally does not
serve market needs; it is inefficient, sometimes even misleading, too theoretical, and out of
touch with market developments (Gouadec, 2007). Bringing the training closer to the market might include inviting professionals into the classroom (Pym, 2012; 2013). It is generally
agreed that learning the necessary skills should always be based on the combination of
instruction and practice (Kiraly, 2000). Kiraly distinguishes “translation competence” and
“translator competence”, the former refers to training mostly associated with linguistic skills
which are needed to produce acceptable translations, whereas the latter concerns a wide range
of interpersonal skills and attitudes.
As research into the field of translator training shows, formal instruction, which is more and more connected to universities, is undoubtedly necessary. However, it is full of challenges that mainly concern pedagogical practice and curriculum design (Pym, 2012). The steady growth in research, especially studies indicating the ways current training is failing, might help to answer the question of what to teach and how to teach it.
The European Master’s in Translation (EMT) expert group, a quality label for MA university programmes in translation, worked out a descriptive model that “serves as a recommendation for translator training institutions” (Eszenyi, 2016, p. 18), in which the objective should be to educate translators who are equipped with all six competences included
(Figure 1). These competences make up a full circle, with translation service provision in the
centre, which is the core of a translator’s activity. In this model, the translator is a service
provider, as the name of the competence also suggests, and should be able to handle the most
different tasks from translating to invoicing. This competence is not taught at translator
training institutions, but EMT highly recommends including it, teaching prospective
translators about prices, giving a price offer, translation assignments, client requirements,
framework agreements, deadlines, time management, working in teams on longer texts.
Translator’s self-assessment is also an important aspect in the description of translator
services provision.
Figure 1
The descriptive model of translator competences
(Eszenyi, 2016, p. 19; EMT, 2009)
The competences included in this model are not much different from the ones that have already been discussed; however, they are described in a different way, focusing on the interrelated features a translator, at least at a professional level, should possess in our rapidly changing world. Language competence comprises an excellent command of the mother tongue and a similarly good command of a foreign language, at least at Level C1 of the CEFR. Excellent knowledge of the mother tongue (L1) is of high importance and has to go beyond good writing skills, as translation is also a creative process (Eszenyi, 2016; Kussmaul, 2015).
The intercultural competence consists of a sociolinguistic and a textual aspect. To acquire the necessary skills, translators should become aware of the differences between their working cultures in both dimensions. “The target language text should be written in a way that
can fulfil its aim with the target audience” (Eszenyi, 2016, p. 23).
Information mining competence involves identifying elements in the source text that need to be looked up, a process which can result in compiling glossaries from the findings of this word-hunting activity. In this way, the collected information is stored in a systematic order, and the glossaries can be used in later translation assignments. This should not be a problem in our electronic age, when mastering the use of computers and software has become a must for every translator, and should also be part of the teaching material at training institutions (Austermühl, 2014).
Technological competence is also essential in light of our rapidly changing technological environment. The application of CAT tools has become extensive in translation, but “only for those who are ready to learn how to use them” (Eszenyi, 2016, p. 25). Besides technological knowledge, using these tools demands an investment of time and money by the translator.
Thematic competence includes the knowledge of the typical text types (legal,
technical, medical, etc.), concepts and terminology. As a translator cannot master all trades, it
is sensible to specialize in a field (Eszenyi, 2016, p. 26).
Eszenyi also offers a complex description of the modern translator (2016, pp. 26-27), who is
an entrepreneur who knows their place in the market, the opportunities, and how to run a business;
a linguist who is not content to have just a C1 level in a foreign language, but… undertakes research if questions arise in order to find the answers;
an expert whose linguistic and thematic knowledge in several languages and subjects goes beyond the average, and whose competences are dynamic and follow the changes in the translation profession, languages and the world;
a technician who devotes time and energy to acquiring the use of CAT tools and is able to manage editing and search programmes, online databases and dictionaries.
The qualities listed above describe translators as people who have extraordinary abilities, an
assumption which might not be true. As we know from Nida (1981), they are not necessarily
born with these qualities; these can be learnt in the translator training programs of different
institutions, including universities.
The previous section of the chapter focused on what translators and translation students have
to know and to be able to do to pursue their profession successfully; what kind of training
they can get in different countries, including Hungary. In 2016, a study was conducted which examined how satisfied translators were with the situation of their business and with the training they had completed, if they had attended a program.
The results of the research were published by Sarah Henter (2016) of Henter &
Asociados, SL, a company which, according to Henter’s LinkedIn profile, offers translation
and creation of texts for the pharmaceutical industry, medical services, clinical trials and apps
(https://www.linkedin.com/in/sarahhenter/?locale=en_US). “The translation industry is one of
the few that were not heavily affected by recession in the last seven years”, she claims, adding
that translation services are expected to keep growing and reach $37 billion in 2018 (p. 25).
Although the US represents the largest market, Europe is a close second.
Henter and her colleagues (2016) collected data in an online survey which they sent to
major translators’ associations, universities and translators asking volunteers to respond. They
were interested in demographic data, work-related questions and education satisfaction. In the
course of three months they collected answers from 155 volunteers living in 19 European
countries (including four respondents from Hungary), in the Americas and in Australia (pp.
27-29). Although the number of respondents is not big, the collected data offers useful insight
into the nature of translation, as a profession.
What is interesting for us is the fact that most of the research findings, which represent
the “receiving side”, i. e. the translators themselves, are very similar to the corresponding data
describing the “offering side”, i. e. the teaching institutions and the industry. The findings on
work-related questions established that the most popular working language was English (29%,
nearly one third of the respondents), followed by French (18%), Spanish (17%) and German
(11%) (pp. 35-36).
Concerning education satisfaction, the results reinforced what several authors established
in their studies (Pym, 2013; 2014; Pym et al., 2011): perhaps due to the lack of formal
training, especially at MA level, most of the working professionals do not have any formal
qualification. The majority of the respondents (57%) said that they did not hold a Master’s or
other postgraduate degree (p. 34), while 50% of the participants answered that their studies
had prepared them for their current job. Only 37% said that the subjects taught were related to
real-life market needs. Concerning teachers, 70% of the respondents agreed that they were
well-prepared, and only 14% said they had acquired skills to run a business and work as a
freelance translator. Less than half of the respondents (39%) learned during their studies how to use CAT tools, and 54% said that, all in all, they were happy with their university
education. The majority (61%) said they would choose the same studies again, whereas 31%
would change most of the subjects they had taken (Henter, 2016, pp. 41-45). One third is a high ratio, and it shows that teaching institutions all over the world still have a lot to do concerning their translation training programs. Even if the sample size per country was small, the findings, especially as they are very similar to what is discussed in the relevant literature, can be considered a hint in a certain direction.
1.4 Summary
The aim of this chapter was to frame my research, positioning it in the field of Translation
Studies, where it belongs. I reviewed the works of major authors, focusing on the domains of knowledge and skills translators need to possess in order to do their work, the main arenas of translator training, the prevailing pedagogical assumptions in translator training programmes, and the different forms such training may take.
The relevant literature showed that the required knowledge and skills are diverse; however, the ways they are included in translator training programmes differ, as do the programmes themselves. Some authors describe translator training as offering “everything at every level” (Kóbor & Lehmann, 2018); however, most researchers agree that translation studies are most often offered as part of degree programs at universities, imparting translation-specific knowledge and skills (Kiraly, 2000; Nadstoga, 2008; Pym, 2011; 2012; 2014; Séguinot, 2008) in the form of long-term and short-term training.
The knowledge and skills translators need extend far beyond the language pairs they
use for their work, including extensive background knowledge of the source and target
languages and cultures, as well as the subject matter of the text, the purpose of the translation,
the requirements of the target audience, etc. As translations are often made for a special market, acquiring high-level client and business knowledge has also become part of the
instruction. Even being a good translator is not always enough. In our modern world, it is
necessary to master technical skills, which enable translators to rely on online sources and
tools that help to meet the formatting guidelines and produce edited, publishable texts.
As was highlighted in Chapter 1, translation is a highly complex activity which, besides knowledge and skills, presumes characteristics defining whether a person is suitable for the profession or not. The first, and perhaps the most important, such characteristic is
motivation, which gives direction to everything an individual does and stimulates one to act in
a focused and consistent way to reach the aim of the given activity (Józsa, Wang, Barrett, &
Morgan, 2014).
Chapter 2
Motivation in second language acquisition
2.1 Introduction
BA students majoring in English at the University of Pécs are offered the opportunity to take up one of a selection of specializations at a certain point in their studies. In the translation studies classes
students learn the basic skills and abilities which are necessary to translate a variety of texts in
an intelligent and principled way. During their studies, they also get the opportunity to acquire
knowledge that will equip them with the ability to use what they learnt in a constructive,
solution-oriented manner, and creatively apply their language- and problem-solving skills.
According to the website of the university (www.pte.hu), English majors at BA level have a
good chance to become professionals who can use the English language at advanced (C1)
level and are able to interpret, mediate and create colloquial, cultural, economic, political,
social, linguistic and literary texts in a responsible manner.
All these seem to be in accordance with the requirements of the globalized world we
live in. The establishment of official multilingualism in world organizations such as the
European Union, the presence of multinational companies throughout the world, and the new
waves of migration resulted in a major incentive for massive translation activity at the turn of
the 20th and the 21st centuries. Linked to this type of incentive is the official recognition of the
rights of linguistic groups and individuals not speaking the local languages to be provided
with interpreters and translators in courts and offices in everyday situations, as well as official
documents in their own languages, and in the language which seems to be the lingua franca in
our new world – English. As a result, the discipline of applied linguistics has come to include
interculturality as an area of study due to worldwide changes that are best expressed in the
words globalization and internationalization (Dombi, 2013; Holliday, Hyde, & Kullman,
2004; Menyhei, 2014). Today, it would appear that the main impetus for studying translation
is the official policies which recognize and support linguistic heterogeneity, including official
bilingualism (Baker, 2001). Therefore, when students decide to take up translation as a
specialization, they might act in response to the needs of the surrounding world, where they
are likely to encounter not only people with a set of beliefs, values, ideologies and behaviors
very different from their own, but also multilingual-multicultural texts, which may hold a
variety of beliefs, values and ideologies. To become successful participants in this global community, they have to learn how to get their message across accurately and appropriately
(Menyhei, 2014).
If we think of translation as an action that “enables cooperative, functionally adequate
communication to take place across cultural barriers” (Baker, 2001, p. 3), it is not difficult to
see how inspiring the choice can be for an English major student. There has never been a
bigger need for individuals who are able “to meet the challenges imposed by our changed
world; in the pluralistic societies that comprise people from different cultural and language
backgrounds, representing various hues, nations and religions” (Dombi, 2013, p.11).
However, these large-scale ideas usually work in the background. It can be assumed that
students who choose translation as a specialization are guided by more mundane incentives
and, most of all, by their own goals, which, reflecting their individual differences, can be very
different.
The next sections of the paper aim to highlight these incentives, including the planned
instruments of the investigation and the most important force that drives BA students to study
translation as a specialization: motivation.
why people decide to do something, how hard they are going to pursue it and how long they
are willing to sustain the activity” (Dörnyei, 2001, p. 7).
Motivation as a significant dimension in language learning (Gardner, 1985; Gardner,
Masgoret, Tennant, & Mihic, 2004; Lightbown & Spada, 2013) is often seen as something
that “energizes human behavior and gives it direction” (Dörnyei, 1998, p. 117). The internal
structure of motivation, at the same time, “undergoes dynamic changes in the language
learning process, and, depending on the instructional setting, the momentary influences of the
social context, as well as the given mid- and short-term goals, a complex interplay of different
factors influences effort and persistence in language learning” (Kormos & Csizér, 2014, p.
20).
As most researchers argue, motivation is a highly complex construct used not only in
everyday life but in many areas of social sciences, educational studies and applied linguistics
to explain reasons for human behavior (Dörnyei, Csizér, & Németh, 2006; Józsa, Wang,
Barrett, & Morgan, 2014; Oxford, 2011; Ushioda, 2016). It is also a term that both teachers
and learners use widely when they speak about language learning success or failure. Actually,
“the meaning of the concept spans such a wide spectrum that sometimes we wonder whether
people are talking about the same thing at all” (Dörnyei, 2014, p. 518). However, there is one
thing most researchers agree on: motivation, by definition, concerns the basic question of why
people act as they do. It determines the choice of a particular action, the persistence with it
and the effort expended on it (Dörnyei, 2014; Dörnyei & Ottó, 1998; Ushioda, 1996, 2016).
It is easy to see why motivation is so important in SLA research: “It provides the primary
impetus to initiate L2 learning and later the driving force to sustain the long, often tedious
learning process” (Dörnyei & Ryan, 2015, p. 72; see also Gardner, 2001; Ushioda & Dörnyei,
2012). Gardner’s socio-educational model of second language acquisition (Gardner, 2010)
gives us a broad schematic outline of how motivation is related to other learner and contextual
characteristics. It places motivation within the system of four distinct aspects of the second
language acquisition process: antecedent factors (gender, age, learning history), individual
differences, language acquisition contexts, and learning outcomes (Gardner & MacIntyre,
1993, p. 8). Although age is a crucial aspect in L2 acquisition, and a range of studies have
been published about how the age factor impacts learning in a variety of educational contexts
(Józsa et al., 2014; Nikolov, 2009; Nikolov & Mihaljević Djigunović, 2006), in the case of BA
students, especially concerning motivation, learning history and individual differences seem
to be more decisive. In the case of young adults, we can see dramatic person-to-person
disparity in both the quality and the quantity of L2 knowledge and language skills (Dörnyei &
Ryan, 2015).
As ID factors concern background learner variables that modify the general learning
process, they definitely have to be taken into consideration in all kinds of research on
motivation. Furthermore, recent studies have started to conceptualize motivation as a “process
orientated and situated construct that shows regular fluctuation” (Dörnyei & Ryan, 2015, p.
183; Kormos & Csizér, 2014), and also as one of “the most consistent predictors of second
language success” (Dörnyei & Skehan, 2003, p. 589). However, it is not only motivation that
fluctuates; learners’ beliefs, attitudes and other cognitive and affective attributes also undergo
changes (Larsen-Freeman, 2001; Ryan & Deci, 2017). As a result, there are periods in the learning process which show a decline in motivation; moreover, those learners who, due to ID factors, do not manage to keep up with the curriculum or their peers might become demotivated (Wu, 2016).
Demotivation, according to Dörnyei (Dörnyei & Ushioda, 2011, p. 143), is the counterpart of motivation; it “concerns specific external forces that reduce or diminish the motivational basis of a behavioural intention or an ongoing action”. This does not mean that a learner loses his or her motivation completely, but if “motivation energizes human behavior” (Dörnyei, 1998, p. 117), demotivation definitely de-energizes it. However, there are conditions which can result in the complete loss of motivation. Deci and Ryan (1985, p. 110) call this amotivation, described as “the relative absence of motivation, that is not caused by a lack of initial interest but rather by the individual feelings of incompetence and helplessness when faced with an activity”.
As is evident in Gardner’s model, language learning contexts also play an important
role in motivation during L2 learning. As research and experience prove, motivation, as a
many-faced aspect, “can be successful only in a learning environment which is student
centered and meets the effective needs of students” (Christison & Murray, 2014, p. 42). In
addition to all these, research suggests that learner autonomy strongly contributes to
motivation. Dörnyei (2001a; 2001b; Dörnyei & Csizér, 1998) even lists the “ingredients” of
autonomy-supporting teaching practices, including increased learner involvement in
organizing the learning process. All this is in accordance with what Nikolov did in her
English classes between 1977 and 1995 by involving her students in decision making
(Nikolov, 1999, 2000). Ushioda (1996) and Heitzmann (2014) also argued that autonomous
language learners are by definition motivated learners. Also, the motivating role of the teacher
(who is often a role model) is indisputable (Dörnyei, 2001a; Gardner, 2001; Nikolov, 1999;
2000; Ryan & Deci, 2017); students tend to assess their development on the basis of feedback from the teacher as well as by their self-perception of their competences (Heitzmann, 2014; Ryan & Deci, 2017).
Gardner (1985, p. 6) reported that students’ attitudes towards a specific language group
are bound to influence how successful they will be in mastering the target language, as the target language, according to Williams (1994), most often becomes a part of the learner’s identity. This involves
an alteration in self-image, the adoption of new social and cultural behavior and therefore has
a significant impact on the social nature of the learner (Adolphs et al., 2018). We cannot
neglect the importance of classroom atmosphere, either, as it definitely helps learners achieve
their goals (Dörnyei, 1994; 2007; Heitzmann, 2014; Nikolov, 2000).
Research has identified a few ID related factors that undermine learning effectiveness
and second language motivation. One of them is students’ anxiety, which prevents learners
from performing well in several situations (Nagy, 2007; Tóth, 2008). Teachers play a crucial
role in creating a safe classroom environment which definitely facilitates the learning process
(Dörnyei, 2007; MacIntyre, 2002; Young, 1999). If they fail in this respect, they can become
the most demotivating factors in language learning (Wu, 2016).
After surveying the huge reservoir of studies on motivation in second / foreign language learning, the question still arises: what motivates a student majoring in English to choose translation as a specialization (Doró, 2010)? We should keep in mind that translation, as a linguistic domain, struggled for recognition for a long time and was underestimated as a profession (Baker, 2011, p. 2), and translators were commonly deemed “questionable sources” (Flanagan, 2016, p. 150), which did not help the prestige of the profession. Because of these factors, not many students were inclined to major in translation studies at universities or to choose it as a specialization (see Appendix C). The nature of the activity might also have
added to the unpopularity of the discipline. As all practicing translators know, translating long
texts from one language to another is a hard, tedious, sometimes boring, both intellectually
and physically exhausting activity. A student who wants to become a translator has to be
linguistically well prepared (Doró, 2011), has to master at least two languages, the source language and the target language, and has to make choices all the time. Kinga Klaudy, while examining the character of translation as an activity, emphasizes the huge scale of choices the
translator faces. “The result of his or her activity – the corpus (text) created in the target
language – is the result of numberless choices and decisions... When comparing the different
translations of the same text, we always find identical and different solutions, suggesting that
the subjective decisions of the translator have an objective base” (Klaudy, 1997a, p. 21; 1997b). To achieve this, certain skills have to be developed and trained, as was already discussed in Chapter 1 (Doró, 2010; 2011).
The numerous studies quoted earlier discuss intrinsic and extrinsic motivation in
detail. However, in translation classes it is not enough to possess deep-rooted desires to
complete a task or to know that performing it well will be rewarding. It is mastery motivation
that forces us to train and master the necessary knowledge and skills. Mastery motivation
stimulates the individual to attempt in a focused and consistent way to solve a problem (Józsa
et al., 2014), which is essential if one wants to translate long texts successfully. Under
adequate conditions, “mastery motivation operates as long as the challenge persists and as
long as acquisition is not complete; i.e., until mastery has been reached” (Józsa, 2014, p. 39).
It functions as the basis of learning at all levels, with all age groups including adults, who
pursue their profession with expertise or look for ways to solve a problem or to accomplish a
task which is at least moderately challenging for them (Józsa et al., 2014).
There is a new conceptualization of motivation, which involves “a prolonged process
of engagement in a series of tasks which are rewarding primarily because they transport the
individual towards a highly valued end” and was referred to as directed motivational currents (DMC) (Henry, Davidenko, & Dörnyei, 2015, p. 330). Dörnyei called it an injection of motivation into the system which involves a greater surge of urgency than normal motivational behavior (Dörnyei, Muir, & Ibrahim, 2014). As the name suggests, it is targeted at a definite goal. In the case of translation, the goal is to transfer a text from the source language into the target language as successfully as possible. At first sight, this does not seem a complicated task. However, as translation is most often a time- and energy-consuming activity,
the motivation for this kind of language use has to be not only prolonged, but also very
strong. Kormos and Csizér (2014) argue that goals and attitudes play an important role in
influencing motivated action, and translation is definitely a goal directed activity.
The success of translation often depends on the translators’ confidence in recognising
the structures and layers of the foreign language they work with, and on their ability to find
the native language equivalents of culture-specific expressions or of sophisticated figures of
speech. As discussed in Chapter 1, all these aspects need solid background knowledge which
cannot be acquired without conscious learning or being exposed to the target language in its
native environment (Baker, 2011, pp. 67-68 ). It is not a problem if translators do not feel at
home in the economic, historic, political or religious mazes of a given culture, if they are
aware of their shortcomings and take pains to look up the necessary information in the
appropriate sources, thus avoiding inaccurate, inappropriate or misleading translation. So the
renowned Hungarian literary historian Mihály Szegedy-Maszák is correct when he claims that
“because of the differences between the languages, the translator sometimes has to fight
extreme obstacles as the values of the original and the target language are incommensurable”
(Szegedy-Maszák, 2008, p. 14). Regarding cultural values, translating from English into
Hungarian becomes even more difficult “because of the very few historic links between the
two cultures” (Szegedy-Maszák, 2008, p. 15). That is why interculturality (Dombi, 2013;
Holliday et al., 2004; Menyhei, 2014) is an important – and presumably also a motivating –
element in translation classes. However, if these aspects are beyond the translator’s abilities, they may turn out to be demotivating (Dörnyei, 2001b).
2.4.1 Questionnaires
The most common and sensible way to examine participants’ motivational background is to
use a questionnaire, which can provide a cheap and effective way of collecting data in a
structured and manageable form (Dörnyei, 2003; Wilkinson & Birmingham, 2003). However,
putting together a questionnaire is not a simple task. Despite the common knowledge that
“they are easy to construct, extremely versatile, and uniquely capable of gathering a large
amount of information quickly in a form that is readily processable” (Dörnyei, 2003, p. 1),
they often result in poorly collected data (Gillham, 2000, p. 1). The reason is that their main
strength – the ease of their construction – is also their main weakness: most questionnaires
applied in second language (L2) research are ad hoc instruments; questionnaires with
sufficient psychometric reliability and validity are not that easy to come by in the field
(Dörnyei, 2003, p. 3). All these points suggest that the researcher has to be extremely careful
while accomplishing the task. After deciding what type of questionnaire to use, there are
several steps and issues that have to be taken into consideration: the types of the questions,
the design, the length, the timing, the administration, the steps of processing, and, of course,
ethics, including sensitive topics, confidentiality and anonymity (Dörnyei, 2003; Griffee,
2012; Wilkinson & Birmingham, 2003). Dörnyei (2003) elaborates on the advantages and
disadvantages of using questionnaires, directing the researcher’s attention to other research
methods such as personal interviews which can enrich the investigation with useful data.
Over the past few decades questionnaires of various kinds have become one of the
most widely used data elicitation instruments in second language acquisition (SLA) research
(Dörnyei, 2010; Gao, 2004). They help the researcher to establish a shared understanding of
the examined phenomenon. They are relatively easy to construct and capable of gathering a
large amount of data in a simple way, in a short time, and in a format that is relatively
straightforward to analyze. However, the strength of questionnaires is also their main
weakness: everybody knows what questionnaires look like, so most people tend to think that
every educated person can put together a questionnaire that works (Dörnyei, 2010a).
By definition, “questionnaires are any written instruments that present respondents
with a series of questions or statements to which they are to react either by writing out their
answers or selecting from among existing answers” (Brown, 2001, p. 6). We can distinguish
two types: the so-called “paper and pen” form and its modern version, a computerized or
web-based questionnaire which can reach out to a larger and more diverse pool of potential
participants (Wilson & Dewaele, 2010). However, in the case of small-scale research it is still more appropriate to use traditional paper-based questionnaires, especially if the researcher
works with a convenience sample and wants to make sure that respondents return their forms.
According to Dörnyei (2010a), questionnaires yield three types of data:
factual, behavioral, and attitudinal. Factual questions are used to find out who the respondents
are, so they are aimed at the participants’ age, gender, level of education and different kinds
of background information that may be relevant when it comes to interpreting the findings. As
often happens in L2 research, the additional data in this case include facts about the
respondents’ language learning history, their L2 competence, parents’ L2 knowledge, amount
of time spent doing L2 related activities, etc.
Behavioral questions aim to find out what respondents typically do or did in the past.
As the present questionnaire is aimed at students taking translation as a specialization within English studies, the items here include questions about their past experience in this field.
The third type of data, elicited by attitudinal questions, is used to find out what participants think: their attitudes, opinions, beliefs, interests, and values.
Using questionnaires in L2 research has advantages and disadvantages. They are
efficient in terms of researchers’ and respondents’ time, effort and financial resources, as a huge amount of information can be collected in a relatively short time (e.g., in an hour).
If the questionnaire is well constructed, data processing is also fast and straightforward, and
modern computer software (e. g., SPSS) makes it even faster and more reliable. However,
they have limitations; in fact, a few researchers agree that “no single method has been so much abused” (Dörnyei, 2010, p. 6; Gillham, 2000, p. 1). The most serious problems stem from hasty or careless construction.
When a researcher constructs a questionnaire, the logical starting point is to establish the aims
of the research and to come up with specific, focused research questions (de Vaus, 2014;
Gillham, 2000) after reading other researchers’ publications on the same or a similar topic. A series of
steps and procedures have to be taken into consideration:
(1) deciding the general features of the questionnaire (length, format, main parts);
(2) writing questions / items to draw up an item pool;
(3) selecting and sequencing the items;
(4) writing appropriate instructions and examples;
(5) piloting the questionnaire; and
(6) conducting item analysis once data have been collected (Dörnyei, 2010a); a brief illustration of this last step is sketched below.
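To make the last step more concrete, the following minimal Python sketch illustrates one routine element of item analysis: estimating the internal consistency of a multi-item scale with Cronbach’s alpha. The sample scores, the three-item scale and the five-point format are invented for illustration only and are not part of Dörnyei’s procedure or of the present study.

import statistics

def cronbach_alpha(items):
    # items: one list of scores per item, all lists equally long
    # (one score per respondent, on the same response scale).
    k = len(items)                              # number of items in the scale
    respondents = list(zip(*items))             # regroup the scores by respondent
    total_scores = [sum(scores) for scores in respondents]
    item_variances = [statistics.variance(item) for item in items]
    total_variance = statistics.variance(total_scores)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

# Invented answers of six respondents to a three-item, five-point Likert scale.
scale_items = [
    [4, 5, 3, 4, 2, 5],   # item 1
    [4, 4, 3, 5, 2, 4],   # item 2
    [5, 4, 2, 4, 3, 5],   # item 3
]
print(round(cronbach_alpha(scale_items), 2))   # prints 0.87 for this invented sample

In practice, items whose removal would raise the alpha value are candidates for revision or deletion during piloting.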
Constructing a questionnaire is especially difficult for a novice researcher (Nunan & Bailey,
2009). Although experts agree that borrowing questions from other research studies is
acceptable, it has its caveats (Blair, Czaja, & Blair, 2014). Nor is it always possible; in cases when one wants to study a poorly researched area for specific reasons, borrowing would not work. There are several questions the researcher has to deal with. What is the optimal length? Questionnaires that are too long can become counterproductive, because the respondents grow tired
or bored. According to researchers experienced in the field, the optimal length is four to six
pages, which would not require more than half an hour to complete (Dörnyei, 2010, p. 12).
The layout is also an important aspect, as it may have a significant impact on respondents.
Dörnyei (2010, pp. 13-14) offers a list that summarizes the five most important points:
(1) A questionnaire not only has to be short, but it also has to look short;
(2) it should have appropriate density with full pages, which, at the same time, do not
look crowded;
(3) an orderly layout can create a good impression;
(4) the quality and the color of the paper or background can also make a difference;
(5) sequence marking makes it user-friendly.
Frary (1996) also suggests keeping questionnaires brief and concise. Sensitive topics should
be avoided; the respondents’ anonymity should be guaranteed. These are both serious issues
that need to be considered throughout the developmental and administration process. The
voluntary nature of participation and other ethical issues cannot be emphasized enough (Blair
et al., 2014; Creswell, 2003; Dörnyei, 2010; Gillham, 2000; 2008; Mackey & Gass, 2005).
It is also a good idea to establish what the main parts of a questionnaire would be,
including an informative title followed by a general instruction which should cover the
following points: (1) what the study is about; (2) the organization / person responsible for the
study; (3) promising confidentiality; (4) saying “thank you” (Dörnyei, 2010, p. 19).
The specific instructions refer to how respondents should go about answering the
questions (ticking, circling, marking on a scale, etc.). They are typically followed by the
central part of the questionnaire: the actual items. According to Dörnyei (2010), they rarely
take the form of actual questions; they are more often statements that the respondents have to
agree or disagree with to a certain extent on Likert scales. The items have to be separated
from the instructions clearly by using different typefaces or font styles or other markers.
Another fundamental issue is to decide how we want respondents to answer the
questions (Creswell, 2003). It is a good idea to start with drawing up a shortlist of specific
content areas. After that, it is much easier to eliminate the redundant or unnecessary items and
keep only those which are directly related to the variables and the hypotheses a
questionnaire is designed to investigate.
Questionnaires can be used in both quantitative and qualitative research. When
designing a qualitative study, where there are no pre-determined variables and hypotheses, we
will most probably expect extended answers, so open items should be included which allow
the researcher to explore how things are from the respondents’ perspective (Nunan & Bailey, 2009).
Dörnyei (2010, pp. 26-39) mentions several question types in his book, listing their
advantages and disadvantages (Table 4). He emphasizes the importance of question design,
creativity and common sense that should be employed in order to create good, working items.
Researchers should follow an old rule, “tests of practicability must play a crucial role in
questionnaire construction” (Moser & Kalton, 1971, p. 350).
As Table 4 indicates, although the literature offers a great number of question types, starting to create a questionnaire is very much like entering a maze. To be able to find the way out, I will rely on Dörnyei’s do’s and don’ts list (2010, pp. 40-48), as it seems to be a secure crutch for a novice researcher, and I am sure it was created to fulfill this very aim.
Table 4
Types of questionnaire items (based on Dörnyei, 2010)
Multiple-choice items – respondents are asked to mark one or more of the offered options, including leaving them unanswered. Advantages: relatively straightforward; easy to construct; reader-friendly items. Disadvantages: in case of omissions, it is difficult to decide the reason (was it conscious or just an accident?).
Short-answer questions – involve a real exploratory inquiry about an issue and elicit a more free-ranging, unpredictable response. Advantages: can be motivating for the respondent; enable the researcher to look for the unknown / unexpected; a good way to finish a questionnaire. Disadvantages: answers need coding.
Dörnyei’s list also serves as a kind of checklist, which contains all the necessary “ingredients”
of the activity:
(1) Draw up an “item pool” which contains at least one and a half to two times as many items as the final scales will need;
(2) Aim for short and simple items;
(3) Use simple and natural language;
(4) Avoid ambiguous or loaded words and sentences;
(5) Avoid negative constructions;
(6) Avoid “double barreled” questions;
(7) Avoid items that are likely to be answered the same way by everybody;
(8) Include both positively and negatively worded items;
(9) Write translatable items.
All this means that writing effective questionnaire items requires special attention to detail,
including item sequence, based on three main ordering principles (Dörnyei, 2010, pp. 46-48):
(1) use a clear and orderly structure with a user-friendly, easy to follow item format;
(2) start with carefully selected opening questions to create a pleasant first impression;
(3) place factual (e. g., personal) questions and open-ended questions at the end.
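In connection with point (8) above, negatively worded items are normally reverse-scored before scale scores are computed. The minimal Python sketch below only illustrates this routine step under my own assumptions (a five-point Likert scale and invented item labels); it does not reproduce any instrument used in the present study.

SCALE_MAX = 5  # an assumed five-point Likert scale (1 = strongly disagree ... 5 = strongly agree)

def reverse_score(value, scale_max=SCALE_MAX):
    # Map 1 <-> 5, 2 <-> 4, 3 <-> 3 so that all items point in the same direction.
    return scale_max + 1 - value

# One invented respondent: two positively and one negatively worded motivation item.
responses = {"mot1_pos": 4, "mot2_pos": 5, "mot3_neg": 2}
negatively_worded = {"mot3_neg"}

adjusted = {
    item: reverse_score(score) if item in negatively_worded else score
    for item, score in responses.items()
}
scale_score = sum(adjusted.values()) / len(adjusted)   # mean of the three items
print(adjusted, round(scale_score, 2))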
Gillham also emphasizes the importance of an uncluttered look and advises using a variety of different question types, so that the respondents would not get bored as they
read and answer the questions (Gillham, 2000, pp. 39-40).
There is one thing that surfaces quite clearly from the related literature: the rules are
straightforward, and questionnaires seem to be easy and simple to create, if one follows the
rules and organizing principles offered by the questionnaire specialists. Unfortunately, reality
has its own ways; what seems to be straightforward at the theoretical level can turn out to be
tedious work when it comes to practice.
2.4.2 Interviews
Interviews are not an easy option, but they are regarded as indispensable in case study research. Interviewing, on any scale, is time-consuming, but it is worth considering provided you can identify a small number of interviewees who are key or representative (Creswell, 2003; Gillham, 2000a;
Wilkinson & Birmingham, 2003). Interviews and questionnaires serve different purposes: to
carry out a larger scale or preliminary survey we use questionnaires, to achieve a depth of
understanding we can use an appropriate form of interview (Mackey & Gass, 2005).
Interviews, independently of their purpose (medical, selection, therapeutic, research
etc.) have a great deal in common; however, they differ in the extent to which they are structured
(Gillham, 2000c). The most structured forms of the interview are those where the interviewer
knows what he or she wants to find out, so the interviewee is asked direct questions planned
and created in advance. All interviewees are asked the same questions in the same order. In
unstructured interviews the questions are not planned, they arise spontaneously, like in a
natural conversation. In semi-structured interviews the same predetermined questions are
asked of all those involved, plus supplementary questions which are not planned in advance,
combining structured and unstructured styles, offering the advantages of both (Gillham, 2005;
2008a).
Interviews, independently of their level or structure, are flexible; in the case of questionnaires, by contrast, only limited inferences can be made because the researcher cannot explore
what lies behind the answers to the questions. Gillham (2005, pp. 3-4 ) summarizes the main
features of an interview as follows:
(1) Questions asked or topics raised are fully open with the interviewee determining their
own answers – a key distinction from questionnaires where normally the researcher
not only asks the questions but also provides the answers in some sort of choice
format.
(2) The relationship between interviewer and interviewee is responsive or interactive,
allowing some degree of adjustment (clarification, exploration, etc.);
(3) There is a structure and purpose on the part of the interviewer even when the context
is natural or at least naturalistic in the sense of taking advantage of the arising
opportunities.
These criteria are most satisfactorily met in semi-structured interviews, which, because of their flexibility and the quality of the data obtained, are the most effective way to conduct research interviews (Gillham, 2005).
An interview can be conducted at a distance or face-to-face. Distance interviewing has some advantages: costs, including time and energy, are lower, and access may be easier. The most frequent forms of distance interviewing are the telephone interview and, most recently, the e-mail interview. In the case of telephone interviews the emphasis is on the use of relatively brief structured interviews, the results of which can be analysed in a standardized
format (Gillham, 2005). The number one advantage of telephone interviewing is clear: you
are talking ‘live’ to the respondent, so you can be reactive. Misunderstandings can be clarified
easily, prompts can be used, and there is a mutual responsiveness. People talk more easily
than they write, so they are more ready to conduct a telephone interview than an e-mail
interview, which involves writing. The e-mail interview has three applications. The first one is when the respondent is too busy to meet personally or lives in another city or country and would otherwise be difficult to reach. The second one is when it is the preferred option of a respondent who is reluctant to participate in a face-to-face interview. Finally, it can be a good
way to clarify minor factual details, as many people respond to e-mails more rapidly than to
traditional letters or messages left on answering machines. The e-mail interview’s greatest
advantage is its speed and flexibility (Lowndes, 2005). However, dealing with personal topics
via telephone or e-mail will probably lead to caution on the part of the respondent (Gillham,
2005). It is sometimes surprising how vulnerable or extraordinary the things an interviewee may disclose to someone they have not met before can be, so there are some ethical rules the interviewer should follow. It is important not to encourage inappropriate disclosures; however, if they occur, the
researcher needs to know how to treat them.
Keeping to the basic ethical rules of interviewing helps the interviewer to avoid being
intrusive and to maintain a friendly, but not confiding tone. The most important elements that
help to set that tone are as follows (Gillham, 2005):
information, especially in case of highly sensitive data, (d) the possible publication of
the collected data or making it otherwise accessible, (e) exceptional uses, e.g., to use
the gained information for presentation purposes, for which explicit permission is
necessary, and (f) data lifetime: data destruction when the data have served their purpose should
be a routine form of protection;
(4) Bear in mind the right of interviewees to review the transcript of their interview and to
modify it.
As with other research instruments, there are several stages of developing and using
interviews: (1) drafting, (2) piloting the questions, (3) selecting the interviewees, (4)
conducting the interviews, (5) transcribing oral data, (6) coding and (7) analyzing the
interview data (Wilkinson & Birmingham, 2003, p. 44).
Interviews are traditionally less structured than other research instruments. There are
three types of interviews: (1) an unstructured interview is the most flexible approach; (2)
semi-structured interviews allow the interviewer to direct the interview more closely but
questions can be changed and added on the go, and (3) a structured interview, according to the literature, is “no more than a questionnaire completed face-to-face” (Wilkinson &
Birmingham, 2003, p. 45).
There is an agreement that unstructured interviews are controlled by the interviewee,
whereas semi-structured interviews offer predefined areas for open discussion. It is the
structured interview, which, with its predictability, provides the easiest dataset for analysis.
The number and also the types of the questions, as well as their sequence should be
clarified in the drafting stage. Each question must be phrased in order to gather as much
information as possible. As no research instrument is perfect, the piloting phase is crucial,
because it helps to eliminate ambiguous questions and provides useful feedback on the
structure and the flow of the intended interview (Wilkinson & Birmingham, 2003).
Selecting the interviewees requires extra care, mostly because interviews take a long
time to plan and also to conduct and analyze, including the transcription of the recorded data.
For these reasons, it seems sensible to work with a small and, if possible, representative
sample. The physical organization of the interview is also an important element in the
process; the setting and the arrangement of the interview situation should put both parties at
ease.
Using interviews has advantages and disadvantages. The greatest advantage is that,
because of the researcher’s direct involvement, a 100 per cent response rate can be
achieved. The researcher can decide on follow-up questions if the initial answer is not
satisfactory, can observe body language and interpret the tone of the response. However,
interviews are time-consuming. As a rule, two days’ transcription time should be allowed for
one full day of interviewing (Wilkinson & Birmingham, 2003). The other drawbacks are that
the information we get is filtered through the views of interviewees and is provided in a
designated place (Creswell, 2003, p. 186).
When preparing for the interview, as in the case of questionnaires, it is sensible to go
through a checklist (Wilkinson & Birmingham, 2003, p. 64).
The final stage of the interview process begins with drawing together the data and
making them ready for coding and analysis, for which the relevant literature (Mackey & Gass,
2005; Saldana 2009) offers a long list of methods, including first cycle coding methods,
second cycle coding methods and post-coding. In small-scale research this would involve
grouping the responses to each question from all participants. This will allow themes and
issues to be easily identified and quantified. When analyzing a large number of transcripts, it
may be necessary to use computer-based tools.
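For a small-scale study, the grouping described above can be illustrated with a few lines of code. The Python sketch below uses invented interviewee labels, question identifiers and answers; it merely shows how all answers to the same question can be collected side by side so that recurring themes become easier to spot.

from collections import defaultdict

# Invented transcribed answers: (interviewee, question id, answer).
records = [
    ("Student A", "Q1", "I chose translation because I like working with texts."),
    ("Student B", "Q1", "A friend recommended the specialization."),
    ("Student A", "Q2", "Deadlines are the most demotivating part."),
    ("Student B", "Q2", "Long technical texts feel tedious."),
]

by_question = defaultdict(list)        # question id -> list of (interviewee, answer)
for interviewee, question, answer in records:
    by_question[question].append((interviewee, answer))

for question, answers in sorted(by_question.items()):
    print(question)
    for interviewee, answer in answers:
        print(f"  {interviewee}: {answer}")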
including pictures, artefacts and even music (Creswell, 2003; Denscombe, 2014). The greatest
attraction of using documentary sources, most often written texts, is their accessibility. The
most frequent types of documentary data include: government publications, official statistics,
different reports, newspapers and magazines, minutes of meetings, letters, memos, diaries,
essays, website pages and data on the Internet (Denscombe, 2014, p. 229).
Letters, memos, diaries and essays including student essays are written by people
whose thoughts and behaviors the researcher wants to study. They usually contain an account
of personal feelings and emotions concerning the topic or event described; in this way they
provide rich information on the studied phenomenon. On the other hand, they may be
protected, unavailable to public access, and may require the researcher to search out the data
in places which are hard to find, a process that can make information mining time- and energy-
consuming (Creswell, 2003).
Collecting students’ written work can be done easily by asking the students to write an essay or a composition on a given topic, expressing their thoughts and their related experiences. It offers easy accessibility, and it also saves the teacher-researcher the time and expense of transcribing. A basic concern is that what is collected should be authentic (J. Horváth, 2001). In the case of student essays, written in class or at home, their authenticity cannot be questioned. Student essays also offer primary or ‘first-hand’ data, as they are obtained from the original source as part of the applied aspect of research (Scheurich, 2007). They also represent thoughtful data, which participants have given attention to compiling. In addition, they enable the researcher to obtain the language and words of
participants (Creswell, 2003).
To analyze the content of any document the most sensible method is to apply content
analysis (Creswell, 2003; Denscombe, 2014; Dörnyei, 2007b; Mackey & Gass, 2005; Nunan
& Bailey, 2009), which is a logical and relatively straightforward procedure. Having an
appropriate sample of texts, we break them into smaller units (in the case of short texts this is not necessary). The unit of analysis can be individual words, complete sentences or whole
paragraphs. Qualitative data analysis invariably starts with coding, which involves
highlighting extracts in a way that they can be easily identified, retrieved, grouped or
categorized. Dörnyei (2007b) calls this procedure initial coding, usually followed by second-
level coding which helps the researcher go beyond the descriptive labeling of the relevant data
segments. For analyzing the data, the researcher has to develop relevant categories and count
the frequency with which the units occur (this can be a tally of the number of times the various units occur). Another useful method is to produce a hierarchy of codes, e.g., in the form of a tree
diagram, which helps to clarify how the categories are related to each other (Dörnyei, 2007b).
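The initial, quantifying level of content analysis described above can be illustrated with a minimal tallying sketch. The category labels and coded extracts below are invented examples rather than data from the present study; the point is only to show how the frequency of coded units might be counted.

from collections import Counter

# Invented coded segments from student essays: (code label, text extract).
coded_segments = [
    ("intrinsic motivation", "I simply enjoy finding the right word."),
    ("career goals", "I want to work for an international company."),
    ("intrinsic motivation", "Translating feels like solving a puzzle."),
    ("demotivation", "Long texts make me lose interest."),
    ("career goals", "A translation certificate will help me find a job."),
]

code_frequencies = Counter(label for label, _ in coded_segments)

# A simple tally of how many times each category occurs in the sample.
for label, count in code_frequencies.most_common():
    print(f"{label}: {count}")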
Content analysis reveals what appears in the texts as relevant, the priorities portrayed through the texts, the values conveyed in the texts and also how ideas are related. The main strength of this method is that at the initial level it provides a means for quantifying the contents of a text, whereas second-level interpretive analysis uncovers the underlying, deeper meaning of the data. It also has a main limitation: it has an in-built tendency to dislocate the units and their
meaning from the context in which they were made (Denscombe, 2014; Dörnyei, 2007b).
In summary, we can say that documentary research, either as an alternative method or as an addition to other methods, has compelling advantages. The first one is the relatively easy access to data and the vast amounts of information one can find in documents. It is also cost-
effective and provides data which is permanent and easily accessible. Its main drawback is
that documents, including essays, can owe more to the interpretations and beliefs of those
who produce them than to objective reality.
There are several conflicting views on what makes syllabus design and curriculum
development different. The “narrow” view draws a clear distinction between the two, stating
that curriculum development is concerned with planning, implementation, management and
administration of education programmes (Nunan, 1988), while syllabus design is concerned
with the selection and grading of content. Some language specialists, e.g., Allen (1984), Stern
(1984) and Yalden (1984), believe that syllabus and methodology should be kept separate; others, like Breen (1984) and Widdowson (1984), think otherwise.
According to a rather traditional definition, a syllabus is a statement of content which
is used as the basis for planning courses; the task of the syllabus designer is to select and
grade course content (Nunan, 1988). Breen (1987a) also sees it as a plan of what is to be
achieved through teaching and learning. It can provide detailed information for students on
what is to be achieved in the course and therefore, it can act as an implicit contract. In
Brumfit’s definition, a “syllabus for a second language programme is not a guide for private
use by teacher and learner. It is a public document, a record, a contract, an instrument which
represents negotiation among all the parties involved” (Brumfit, 1984, p. 13). Perhaps the
most noted function of a syllabus is administrative: it provides an organizational structure to
the course (Cullinan, 2016).
From a learner perspective, the most important questions are: What does the learner
want to do with the target language? What activities will stimulate or promote language
learning? To be able to answer these questions a “needs analysis” can be performed, which, in
simple terms, refers to “the procedures used to collect information about learners”, including
motivation, expressed needs, likes, dislikes, learning styles with the aim of designing a course
that is tailored to the specific needs of the students involved (Cullinan, 2016, p. 58). Espinosa
underlines the necessity and the motivating nature of what he calls “autonomy-fostering
syllabus” (Espinosa, 2015, p. 114). A course based on such a syllabus allows the students to identify what they deem important to learn and what readings and texts they would welcome, and also to plan how to obtain them; this is a process motivating in itself. Although the syllabi can differ from teacher to teacher, the topics covered in them should be in accordance with the overall content of the curriculum taught in the course.
Nunan (2003, p. 193) speaks of “learner-centeredness” and “learner centered”
curriculum development and syllabus design, when the syllabus is the result of a collaborative
effort between teachers and learners, since the latter are closely involved in the decision-
making process regarding the content of the curriculum and syllabus. Nikolov (2000), based
on research conducted with young learners, also writes about the motivating force of a negotiated syllabus and classroom work. Christison and Murray (2014), when stating that
“motivation, as a many-faced aspect, can be successful only in a learning environment which
is student centered and meets the effective needs of students” (2014, p. 47), make a similar point. Those who focus on the language learning process emphasize that learners should also
contribute to that process, including syllabus planning (Breen, 1987b). Working on a process
syllabus can result in a pedagogical partnership between teachers and learners through a
teaching process in which learners become equipped with the knowledge, skills and attitudes
that enable them to play an active role in the planning, implementation and evaluation of their
own learning. A negotiated syllabus is created, which, on the one hand, motivates the learners
to carry out what they planned, and on the other, leads to a higher degree of autonomy.
2.5 Summary
This chapter aimed to overview the literature on one of the main determinants of second /
foreign language learning, motivation, with special focus on a neglected area, TS classes at
BA level, including the different means and methods of measuring this very complex
construct. Sections 2.1 and 2.2 discussed literature on motivation in general, comparing
different definitions of the construct, arriving at the conclusion that motivation, as an
important ID factor, has a huge impact on language learning success. In order to show the
many-faceted nature of motivation, the most influential theories and models were listed
(Adolphs et al., 2018; Dörnyei, 1994; Dörnyei & Ottó, 1998; Dörnyei & Ryan, 2015; Dörnyei & Ushioda, 2009; Gardner, 2010), arriving at the general conclusion that motivation concerns the basic question of why people act as they do.
Section 2.3 focused on motivation in BA translation studies classes, seeking the
answer to the ever-rising question: Why do students majoring in English choose translation as
a minor or a specialization? Why do they want to study a discipline, which still struggles for
recognition, but, at the same time, requires the high-level knowledge of at least two languages
(Baker, 2011; Doró, 2010; Flanagan, 2016)? The studies quoted emphasized the importance of mastery motivation (Józsa et al., 2014) and of a relatively new conceptualization called directed motivational currents (Henry et al., 2015). Other authors pointed out those aspects which make translation more difficult than other language learning activities: its goal-directed and energy-consuming nature, the culture-specific knowledge it requires, and the many different skills that have to be developed and trained (Doró, 2011; Klaudy, 1997a; Szegedy-Maszák, 2008); traits which can easily result in demotivation.
Section 2.4 concentrated on different ways and methods, both quantitative and
qualitative, that can be used to examine and measure motivation: questionnaires, interviews
and documents, including student essays and course syllabi as possible data collecting
instruments. The part on questionnaires (2.4.1) was based on Dörnyei’s (2003b; 2010a) and Gillham’s (2000b) books on developing questionnaires, which discussed each step of this
complex and tedious process, emphasizing the advantages and pointing out the drawbacks of
the tool. Other approaches were also studied (Creswell, 2003; Mackey & Gass, 2005; Nunan
& Bailey, 2009) in order to find out how qualitative research can add extra information to
quantitative findings; how interviews and documents can be used for deeper or background
analysis.
The works listed in the literature overview in Chapter 2 confirmed that motivation is a
highly complex phenomenon. Although we know a lot about it, it is difficult to research, so
we have to be really careful and attentive when we decide on the research procedures,
including the instruments we want to use. Chapter 3 aims to examine BA students’ motivation
in translation studies classes based on a study carried out at the University of Pécs between
2016 and 2019.
Chapter 3
Student motivation in Translation Studies BA classes
Following Pym’s (2013) suggestion to rethink translation skills in translation training
programs, Hungarian researchers have attempted to map the most important fields and
directions, and define new foci for teaching (Kóbor & Csikai, 2017). One of the greatest
merits of this recent inquiry is the fact that it also aimed to meet the professional needs of the
translation market (Kóbor, 2017).
The question of competences has been an important and challenging research field for
Hungarian researchers during the last decades. The list of the necessary competences they
identified (competences connected to translation, as an activity; technology; the management
of the translation process) (Fischer, 2017) strongly corresponds with the categories offered in
the international literature (Chesterman, 2005; Dickinson, 2002; Risku et al., 2010). Some
features, which have been part of instruction at European universities for a long time,
appeared as new phenomena in discussions authored by Hungarian experts. The translation
market is more and more often referred to as “language industry” (Fischer, 2017, p. 24),
demanding new terminology such as language service providers (LSP), language service companies (LSC), knowledge management instruments (KMI), project management (PM), machine translation (MT), post-editing (PE), etc.
The competences important for the industry and also for the profession are best
highlighted when international models are compared. In a study, Krajcsó (2017) compared
four well-known competence models (EMT, CIUTI, TransCert, ISO). Although the lists seem
to be similar, the grouping of competences is remarkably different (See Table 5). It is also
apparent that competences which can be connected to translation as a service are listed as
especially important ones in each presented model.
Linguistic competences create an independent group in each system, similarly to
intercultural competences, although different terms (intercultural, transcultural and cultural)
are used in the lists. Technological competences are also named differently (technological, technical and thematic domain) and with slightly different emphasis.
Competences which are regarded as less important are typically taught within another one,
mostly as part of language or linguistic competence. Krajcsó (2017) points out that translator
training institutions mostly cover competences connected to translation, as an activity
(linguistic, intercultural, translation competences), while “domain competence” is typically
missing from their curricula. According to Dróth (2017), there is another “underrepresented”
category: translation management, which she, in accordance with Kiraly (2000), discusses not
as translation, but as translator competence (see 1.2).
Table 5
Comparison of four international translator competence models (Krajcsó, 2017)
The four models group broadly similar competences under different labels: language / linguistic competence (covering the native and the foreign language), information mining and terminology skills, cultural competence, technological skills, and competence in research, information acquisition and processing.
After looking at what competences the four international models cover, we can
move to the institutions (universities) which offer translator training programs in Hungary.
These programs have undergone substantial, possibly overdue changes during the last ten
years. The reasons are discussed by Válóczi (2010) as follows: (1) Due to Hungary’s
membership in the European Union, the demand for well-trained translators and interpreters
has significantly increased; (2) the content of translation and interpreter training had to be
adjusted to the social and economic changes characterized by the diverse needs of the market.
The role of translators / interpreters has been re-interpreted: now they provide “services”,
based on high-level problem solving, negotiating, organizing and communicating qualities;
they are experts with thorough linguistic, professional and often interdisciplinary knowledge.
(3) Education policy has put an end to the ongoing debate about the place of translation
training in the education system.
As a result, training programs have become part of university curricula at BA level,
initially as complementary courses and specializations, later as independent subject blocks. In
response to the increasing demand for highly trained graduates on the market, translator and
interpreter training was introduced at MA level, as well, in addition to language schools
which also offer translator training courses.
The first translator and interpreter training program was introduced by Kinga Klaudy
at the Eötvös Loránd University in 1973 (Klaudy, 2013, p. 9). Since then other universities
have joined the successful and increasingly popular training form. A 2010 study by Válóczi
lists 21 faculties of 14 universities or colleges, which offer their prospective students 40
different, accredited training programs in Hungary. The list includes Corvinus University of
Budapest; Budapest Business School; Budapest University of Technology and Economics;
University of Debrecen; Eötvös Loránd University; Kodolányi János University of Applied
Sciences; University of Miskolc; University of Nyíregyháza; University of West-Hungary,
Savaria Campus; Pannon University; Pázmány Péter Catholic University; University of Pécs;
University of Szeged and Szent István University in Gödöllő. The programs differ in duration
(generally two to four semesters, ten to fourteen classes a week), content and fees. A 2016
summary (Vermes, 2016) mentioned 26 faculties of 17 universities offering translator training
(see Appendix C), which suggests that the demand is huge.
The number of institutions which offer master’s programs is still very low in Hungary.
In 2010, there were only three accredited translator training programs at this level: at Eötvös
Loránd University, at Pannon University and at University of Miskolc (Válóczi, 2010). The
admission requirements are high: a complex C-level exam in one foreign language and a complex B-level exam in another, plus passing a complex admission test. At MA level 300 contact hours of instruction are offered every semester for x semesters. The master’s programs include a 100-hour professional practice. The best graduates can carry on with their studies in
doctoral programs. Although there is only one doctoral school in Translation Studies in
Hungary (Eötvös Loránd University, led by Kinga Klaudy), there are several doctoral
schools in linguistics which offer their students translation topics.
The situation had changed a lot by 2017; Kóbor and Lehmann (2018) describe it as
“everything at every level” (which is definitely true in international arenas, as well), referring
to the diverse opportunities prospective translation students can choose from, including BA
and MA levels and also special postgraduate programs (see Appendix C). Taken literally, what they say suggests that – concerning the skills – there is no difference between the listed levels; the improvement is rather superficial, and there is still a lot to do in order to optimize the
quality of the training. Since 2010, new institutions have “joined the club”: Károli Gáspár
University of the Reformed Church in Hungary, Eszterházy Károly College in Eger and
Semmelweis University. In 2016, 17 universities offered 78 different training courses: 13 at
BA, 56 at post-graduate, eight at MA and one at PhD level (Vermes, 2016). The unique
feature of the Hungarian training system is that there is no professional translator training at
BA level; those who are interested can study translation within the framework of a 50-credit specialization, as part of their bachelor studies in a foreign language major. The admission
requirement for this program is B2 or C1 level proficiency in a foreign language, depending
on the applicants’ choice. However, the certificate the graduates get does not qualify them to
work as certified translators.
In March 2020 seven Hungarian universities offered MA level translation and
interpreting programs according to the official site of the Hungarian Oktatási Hivatal (https://www.felvi.hu): Eötvös Loránd University in Budapest, University of Debrecen, Eszterházy Károly University,
University of Miskolc, Pázmány Péter Catholic University, Pannon University, and
University of Szeged. The admission capacity of the seven universities was 208 altogether, including 152 state-financed and 56 self-financed places (https://www.felvi.hu) (Table 6).
The tuition fee in the case of self-financed places was between 250,000 and 375,000
Forints per semester. The published admission requirements were the same at each institution,
including one C1-level and one B2-level complex language exam in the chosen languages, and an entrance test consisting of a written and an oral (interview) part. However, COVID-
19 restrictions rewrote the admission procedure for the academic year of 2020 – 2021. At
Eötvös Loránd University, for example, the applicants were expected to hand in a CV in
Hungarian, covering the applicant’s previous studies (800-1,000 characters), plus a summary of their relevant achievements (publications, conference participations, grants, etc.) (800-1,000 characters), and also a motivation letter written in the target language (3,500-
4,000 characters). The oral exams were replaced by scheduled online interviews. At Pázmány
Péter University both the CV and the motivation letter were expected to be written in
Hungarian and in the chosen foreign language, and the applicants also had to hand in a
summary of their BA portfolio (2,500 characters) in both Hungarian and the target language.
The procedure, with minor differences, was similar at each university.
Table 6
Translation and Interpreting MA programs in Hungary, 2020-2021 (Source: felvi.hu)
University of Debrecen: 20 state-financed and 5 self-financed places; written entrance exam; 4 semesters; 120 credits
Eötvös Loránd University: 50 state-financed and 6 self-financed places; written and oral entrance exam; 4 semesters; 120 credits
Eszterházy Károly University: 8 state-financed and 8 self-financed places; oral entrance exam; 4 semesters; 120 credits
University of Miskolc: 6 state-financed and 24 self-financed places; written entrance exam; 4 semesters; 120 credits
Pázmány Péter Catholic University: 25 state-financed and 5 self-financed places; written and oral entrance exam; 4 semesters; 120 credits
The so-called specialized post-graduate training courses, where the admission requirement is a
BA or MA degree, also occupy an important place in translator training in Hungary. The
duration of the training, similarly to the MA level, is three to four semesters. In the academic year
of 2017/2018 applicants could apply for specialized training programs at nineteen faculties of
fifteen universities (Kóbor & Lehmann, 2018).
It is important to mention that in Hungary the translation training programmes should
accommodate the demand for two dominant languages, English and German. Some
traditionally important European languages (French, Russian, Italian and Spanish) are also
offered by some universities; all other languages remain marginal (Vermes, 2016).
Examining the most important features of translator training in Hungary, we can
conclude that it is highly diverse; the offer prospective students can choose from is wide, and
hopefully creates a good basis for positive competition at every level.
3.2 Translation Studies at the University of Pécs
Translator training at the University of Pécs has significant traditions, which can serve as a good example for a possible MA translation program. The four-semester-long, 120-credit post-graduate translation training programs in English (btk.pte.hu/angol_magyar_szakford), French
(btk.pte.hu/francia_magyar_szakford) and Italian (btk.pte.hu/olasz_magyar_szakford) at the
Faculty of Humanities, as well as a similar program at the Faculty of Medicine of the
University of Pécs (UP) (aok.pte.hu/hu/egyseg/60/index/almenu/164) may also attract new
students.
Currently, BA students at the Institute of English Studies, Faculty of Humanities, UP can choose translation as a specialization in the second year of their studies, after passing their English proficiency exam. As it can be chosen instead of a compulsory minor, only a small number of students take it up. Table 7 shows how many students applied for the program between 2014 and 2020, the years my research covers. It can be seen that there was a significant drop in 2015, which is important for the present study, because it shows the population from which I could recruit participants.
Table 7
The number of translation students between the academic years of 2014/15 and 2020/21 (Neptun data, retrieved
on 29/07/2020)
Academic year: 2014/15 | 2015/16 | 2016/17 | 2017/18 | 2018/19 | 2019/20 | 2020/21
Number of students: 35 | 10 | 11 | 10 | 11 | 17 | 18
The enrolling students can earn only 50 credits, as opposed to the 120 credits of the specialized postgraduate programs in the fields of Humanities and Health Sciences (Kóbor & Lehmann, 2018, p. 25). This means that they have significantly fewer contact hours in the subjects that comprise their curriculum than those who study in the other two forms of the program. The significance of the different competences and the emphasis put on them is well reflected in the corresponding credits (Table 8). As is apparent in the table, Translation and interpretation techniques and the connected skills constitute a relatively large number of classes in each program: one third of the total credits in the case of the postgraduate programs, and two thirds of the total credits in the case of the BA specialization.
The credits, as they reflect the emphasis on the different components, speak for themselves. There is hardly any emphasis on IT skills, and the difference between the programs concerning language and culture is huge. Special terminology also seems to be a neglected area, although it should be a significant element in specialized translation programs. It is also worth noting that the Language and culture component, which carries high credits even in the Health Sciences program, is not given larger emphasis either in the planned MA program or in the BA specialization. Those who want to become translators in the field of Health Sciences typically do not have a degree in the target language, while the participants of the BA-level training learn target-language culture, history and civilization in a relatively high number of classes within the frame of their major studies.
Table 8
The representation of different components of translator training at UP, including the planned MA program; the
BA specialization and the two special post-graduate programs (Kóbor & Lehmann, 2018, p. 25)
Translation and interpretation theory: 8 – 10 | 5 | 10 | 11
Translation and interpretation techniques: 44 – 46 | 33 | 45 | 49
IT competences, tools, language technology: 10 – 12 | 3 | 5 | 4
Having compared the program elements and the syllabi, we can state that although there are significant differences in proportions, we can also detect a great degree of similarity between the training programs at the different levels. They basically include the same components, even if these are weighted differently, and each of them puts the greatest emphasis on translation and interpretation techniques, which cannot be learnt in any other classes.
Table 9
Study plan for English Studies, BA translation specialization programme (4 semesters, 50 credits)
Total credits per semester: 0 | 0 | 11 | 12 | 12 | 15
Source:
(https://btk.pte.hu/sites/btk.pte.hu/files/files/hallgatoinknak/golyahir/2016/ba_napp/szakfordito_anfranb2_09-10-12267idoterv.html)
The study plan combines theoretical knowledge with practical skills. The theory is concentrated in the first and the second semesters of the programme, while the third and the fourth semesters are devoted entirely to practice.
Looking at Tables 8 and 9, we can see that the spectrum of fields the students translate texts in is diverse. Translating texts in these fields requires special background knowledge in politics, law, business and finance, IT, social sciences and literature, which is not offered in the study plan (Table 9). Table 8 also reveals that there are no classes on special terminology, either. This means that students, if they want to produce proper translations, have to do a lot of extra research. Perhaps an intriguing question is why English majors choose this specialization, when the certificate they get does not qualify them to work as certified translators. To find the answer to this question I conducted an empirical study on students' motivation to study translation at BA level.
3.3.1 Introduction
Although motivation seems to be a well-researched area in the huge arena of applied linguistics (Doró, 2010; Dörnyei, 1994; 1998; 2010; 2014; Dörnyei et al., 2014; Dörnyei & Csizér, 1998; Dörnyei & Ryan, 2015; Dörnyei & Ottó, 1998; Gardner, 2010; Heitzmann, 2014; Józsa et al., 2014; Kormos & Csizér, 2014; MacIntyre, 2002; Nikolov, 1999; Nikolov & Mihaljević Djigunović, 2006; Ushioda, 2016; Ushioda & Dörnyei, 2012), so far little attention has been paid to English majors who choose translation studies as a specialization at BA level at the University of Pécs (https://felveteli.pte.hu/kepzes_pdf/618). This four-semester program offers the applicants special knowledge but no certification; however, failing the final translation test can prevent them from getting their diploma. According to 2019 data (https://felveteli.pte.hu/ponthatarok), the admission score required for English Studies at BA level was 356 points, and between 10 and 65 applicants could be accepted. These numbers mean that the pool is shallow when it comes to choosing a minor or a specialization. In the academic year of 2016-2017, when the first part of the research was conducted, there were only 30 students in the program, including the second and the third year, so this was the total number I could take into consideration for my research. This number is small, but as my study serves the local need of starting translation training at MA level, I did not want to extend my research to other institutions.
Although the literature offers several methods to study motivation, this complex and dynamically changing phenomenon (2.1, 2.2, 2.3), given my small sample, in phases 1 and 2 I decided on qualitative methods (student essays, follow-up interviews), while in the third phase I used a questionnaire.
The first and second phases were carried out in the spring semester of the 2016-2017
academic year with the aim of preparing a solid background for the third, more
comprehensive phase, implementing a questionnaire study, which was conducted in the fall
semesters of the 2017-2018 and 2018-2019 academic years. The research plan is shown in
Table 10.
Table 10
Research plan
Phase | Semester | Research questions | Research instruments | Number of participants
1 | 2016/17 Spring | 1, 3, 4 | Student essays | 8 BA students
2 | 2016/17 Spring | 1, 2, 3, 4, 5 | Follow-up interviews | 3 BA students
3 | 2017/18 Fall and 2018/19 Fall | 1, 2, 3, 4, 5 | Student questionnaire | 24 BA students
The first phase aimed to gather information on the reasons for students' enrollment in the program by asking them why they liked translation classes. The follow-up interviews in the second phase focused on the patterns that emerged in the student essays. Motivating and demotivating factors were identified and grouped, which helped to create the initial question pool for the student questionnaire to be carried out in the third phase of the research.
3.4.1 Participants
The participants in the first phase were eight anonymous second-year BA students majoring in English who took up translation in the third semester of their studies, after passing their proficiency exam (C1 level) at the end of their first academic year. However, their estimated level of English proficiency was between B2 and C1 on the six-level scale described in the Common European Framework of Reference (CEFR, Council of Europe, 2001), and all students had passed the Hungarian school-leaving examination at advanced (B2) level before being admitted to university (423/2012. Korm. rendelet).
3.4.2 Data collection instrument
To be able to establish a baseline for a motivation map, in this phase the participants were asked to continue the first line of a short composition: “I like translation classes because...” I chose this simple writing task as my data collection instrument because this kind of short task was familiar to them from their previous studies, it was easy to complete, and I assumed it was suitable for gathering basic information on the participants' main motives for enrolling in the specialization.
3.4.3 Procedures
The written task was completed in class, and the allocated time was 15 minutes. As a result, I got eight compositions, which were short and to the point. To examine the data I conducted content analysis based on the main principles of qualitative research (Creswell, 2003; Dörnyei, 2007b; Saldana, 2009). The reasons given by the students were identified, and categories were set up in order to group them. The results are shown in Table 11.
3.4.2 Results and discussion
The reasons the students gave in their compositions could typically be organized into six categories: (1) language use, (2) teachers, (3) fellow students / peers, (4) texts, assignments, instruments, (5) atmosphere, and (6) personal goals (Table 11; the numbers in brackets refer to the number of students who mentioned the given reason in their compositions).
Table 11
Motivating factors mentioned by students in their compositions (frequency)
Seven students liked translation classes, both seminars and lectures, because they improved
their different skills, vocabulary, and grammar, and highlighted the similarities and
differences between the source language and the target language. Five students mentioned
teachers as motivating factors, most often because of the usefulness of the feedback and the
assessment they give, and for other personal reasons (“they are between tolerable and
awesome”). Three students found important that they could discuss the solutions of their peers
and meet different viewpoints in translation classes. Nobody referred to peer-assessment,
either because it is not used in the classes or they did not recognize it as something worth
mentioning. The texts the students had to deal with, the assignments they got and the
instruments they used were mentioned in seven compositions. The diversity and the
interesting nature of the texts and assignments were emphasised; whereas the instruments they
used were mentioned because they were either “new” or “ancient.” Interestingly, in both cases
they were described as motivating. The working environment (the atmosphere) was
mentioned by three participants. One found it motivating because it was interactive, one
because it was funnier than other classes and one because it was not as boring as other classes.
It was also described as safe: an environment where “nobody gets hurt”. The personal goal of
becoming a professional translator was mentioned by three students out of the seven.
3.5.1 Participants
As a follow-up to the written task, I invited participants to interviews through their tutor. The interviews took place in two sessions. In the first session (30/04/2017) I interviewed two volunteers, a female (P1) and a male (P2); in the second session (05/03/2017) my interviewee was a male student (P3). All three came from the group who wrote the texts (the second-year group of 2016-2017).
3.5.2 Data collection instrument
I planned to conduct structured interviews focusing on the patterns surfacing in the written texts, setting up categories for the aspects of learning translation, each included because of the motivating force it may exert on someone's decision when choosing a field to study: (1) Background, learning environment and language competence, (2) Translation as a skill and specialization, (3) Course content, (4) Feedback and assessment.
I also wanted to pinpoint what students found motivating, demotivating or neutral concerning
the four listed aspects.
As Rubin and Rubin say, “qualitative interviewing requires intense listening, a respect
for and curiosity about what people say, and a systematic effort to really hear and understand
what people tell you” (Rubin & Rubin, 2005, p. 17). As I have always regarded myself as a good listener, and I wanted to collect as much information as possible by the simple means of listening, I found the interview the perfect tool for this part of my research. It also meant that I sometimes let my interviewees (and even myself) take side-tracks, and as a result, the interviews turned out to be semi-structured in the end.
3.5.3 Procedures
First, I asked the interviewees whether they wanted to use English or Hungarian. As they chose their mother tongue, the interviews were conducted in Hungarian, recorded with a dictaphone, and transcribed and translated into English afterwards. Both sessions took approximately one hour, and the interviewees were asked the same questions (see the section below). The venue in the first case was a café, a quiet room with perfect privacy, mineral water, coke, and coffee. The second time we were in a seminar room at the university, with an open window and chirping birds outside. As it turned out, the difference between the places did not really count; the atmosphere was relaxed in both cases, and, perhaps because they were volunteers, the interviewees were open and talkative.
3.5.4 Results and discussion
Following the general trends, two of the three participants chose English as a foreign language at primary school, although they started learning it in different grades. In the first interview, the female participant (P1) took it up in the first grade, the male (P2) in the fifth, so their first encounter with English as a foreign language happened quite early. They both studied English for several years before entering university: P1 for twelve and P2 for eight years. P3, on the other hand, took it up only at the age of fourteen; however, before starting secondary school he went to a language preparatory class, where they had twelve English lessons a week, much more than in an ordinary class. By the time they entered university, all of them had passed a B2-level language exam. When preparing for the language exam, P1 had extra classes in English with a private teacher, whereas P2 and P3 were coached by their teachers at school. They also studied a second foreign language, German, the second most often chosen foreign language in Hungary. All three respondents gave it up temporarily due to the lack of time and for financial reasons, but they were well aware that they would have to continue learning it if they wanted to carry on with their translation studies at MA level.
Arriving at university, they all found that they were at a similar proficiency level in
English with their fellow students, and they faced the same difficulties. The most striking was
the fact that they had to do and learn everything in English, whereas at primary and secondary
level they used English only in the English lessons. What was a difficulty at the beginning
soon became a motivating force for them: their vocabulary grew very fast and they became
much better readers in English.
I also wanted to know if all their expectations were met concerning the translation
programme. “If it takes much time to come up with an answer, it means that you cannot come
up with anything. No, I cannot say I found anything disappointing. What we have not learnt
so far, we will learn in the next two semesters. I think I basically get what I expected before”,
said P3, who had already experimented with law and teacher training, but he gave them up,
because he “did not like what he got”. “I initially thought we would have more practice, but
now I see that what we do is enough. Even seems to be too much sometimes. Right now we
have assignments for three different classes”, said P2. “I think the programme is good as it
is”, added P1. “I did not expect more. As we are only eight in the group, it is homely. I did not
think it would be this informal. And we cover so much. I am satisfied indeed”.
The second group of questions focused on translation as a skill and specialization, the field they chose to study. What did it mean for them? Why did they choose it? What experience did they
have in translation before applying for the programme? What qualities did they think a
translator needed? Did they plan to continue their studies at MA level? How did they want to
use their special knowledge and skills after graduating?
I had a common experience with my interviewees, which seemed to be a good starting
point. Just two weeks before the interviews were conducted, we had all listened to the renowned translator Peter Czipott's engaging lecture about translation at the University of Pécs, where he, as an invited lecturer, presented a long list of definitions of the construct (https://btk.pte.hu/esemenyek/peter_czipott_mufordito_eloadasai). However, the participants in the present study were not able to define translation either in a sophisticated or in a practical way. They did not remember that translation, in Czipott's poetic understanding, meant the two sides of a tapestry, and they could not bring up any of the definitions given by scholars in the relevant literature (see 1.1, Table 2). Translation for them, as it turned out, meant a simple practice; a task where they had to render the words of the source text in the target language the best way they could. They all rejected ‘word-for-word’ translation and voted for ‘content’ translation, even in the case of special (e.g., legal) texts.
All three chose translation as a specialization because they saw it as a potential career
that they could pursue in the future not only in Hungary, but anywhere in the world. The other
reason to choose it, in the cases of P1 and P2, was the fact that they had studied Hungarian literature in an increased number of classes at grammar school, and they felt they were good at it. Therefore, an additional motivating factor was their good knowledge of the mother tongue, which the relevant literature mentions as being just as essential as a good knowledge of the target language (Chesterman, 2005; Gile, 2009, 2010; Gouadec, 2007; Kelly, 2005; Laver & Mason,
2018; Limon, 2010; Nida, 1981; Pym, 1992, 2003; Risku et al., 2010; Robinson, 2012).
They did not really have any experience in translation before choosing the specialization, except for activities they did for fun. One interviewee, P3, had acted as a family interpreter once, and cited an experience when he translated a longer text for his brother; the idea that he could do it as a bread-winning job started to form after that. “My teacher said that I could get good money for such a translation, and what to say… it was motivating”. P2 experimented with translating comics and subtitles for TV series.
Answering the question “What kind of knowledge and skills do you think a translator
needs?” they mentioned a sound knowledge of the source language and the target language, as well as factual knowledge (special technology, resources available). Only P3 argued
that “the good knowledge of the mother tongue is enough. My English vocabulary is poor, but
that does not prevent me from translating”. Perhaps, because they were at the very beginning
of their translation studies, they did not know much about the theory of problem solving (options for translating – direct, literal, oblique translation, etc.) or about strategies and techniques (“We discuss these when a problem comes up.”). They did not even think that translating texts
could be physically exhausting; no wonder, so far they had translated only short texts. Only
P3 mentioned that the “ability to sleep” was important. “If you do not sleep enough, you
cannot be effective”. They laughed when I asked them if they were ready to sit 10 hours a day
in front of the computer. “We do that anyway”, was their answer. What was pronounced by
all three participants was creativity in finding the best equivalents for even the most difficult
turns of speech in the source text, like “törpe bögre görbe bögre”, mentioned by P3 or “Mézga
Géza”, brought up by P1, or in the case of so-called “low frequency words” and collocations (Kenny, 2014, p. 128). Translating culture-specific texts posed considerable difficulties for them, although P3 said that “we can look up these things quite easily”.
They all agreed that choosing translation as a specialization was a good decision for
them, most of all because it gave them knowledge, which was useful in their English studies
classes, as well. P1 and P2 planned to continue studying it at MA level, and later to do it as a
career, either in Hungary or abroad. P3 also wanted to become a professional translator, but,
because of financial reasons, he was not sure if he wanted to do MA studies. “I do not have
any state-financed years left, so I can do it if I can collect enough money for the tution fee”,
he said.
Due to the low number of students in their group, the classes tended to be interactive, with practical tasks at the end.
Concerning course content, they named two tasks they regularly did: translating from
one language into the other and discussing the translations they did. In the first term of the
first year of the programme (in the second year of their BA studies), they translated from
English into Hungarian, but from the third term they did it in the other direction, as well. P1
and P2 found translating from Hungarian into English rewarding. The teachers had warned
them that it would be more difficult, and now they were motivated by the mere fact that they
could cope with this task. “Translation itself is motivating for me, especially if I can cope
with it”, said P2, who had the strongest mastery motive (Dörnyei, 2010b; Józsa et al., 2014)
of the three. They got weekly assignments (“a one-page translation for each lesson”); they
often started doing them in class and if they did not get to the end, they could continue at
home so that the next time they could discuss their texts. P1 and P2 found discussion the most
enjoyable and useful part of the course. P3 did not like it so much, because it took much time,
and he was not interested in other students’ translation problems, although he admitted he
“could learn from them”. At the beginning of the semester they always got the syllabi for each
class, so they knew what to expect. Sometimes they could negotiate: they could choose the
texts for translation, and, as was pointed out by P1 and P2, choice was a strong motivating
factor, because it involved their interests (Nikolov, 2000). However, it did not really count for
P3.
The classes were practice-oriented and taught by four teachers according to the study plan (3.2, Table 9). Strategies, techniques and methods were discussed when a problem arose,
and they all found this helpful, because the explanations were connected to concrete texts in
this way.
Concerning the instruments they used, their unanimous answer was “Google is our best friend.” They consulted dictionaries only if their Google search was unsuccessful. If they used a dictionary, it was typically an online, bilingual one. Only P3 liked using traditional, printed dictionaries, because “they were reliable and it was a good feeling to keep the book in my hands and turn the pages”. P1 and P2 referred to a piece of software (MemoQ) they were learning how to use, but they were still at the basics. They had all heard about corpora from their teachers, but they had never relied on them so far, and had not been encouraged to use them; no wonder they could not name any.
They typically had a written one-page assignment for each class, but that was not a problem for them: “We are here to practice as much as we can”, said P1 in total agreement
with the others. They sometimes were late with their assignments, but they always did them in
the end. In the case of one teacher, they always started to do the assignment in class. “We do a kind of oral translation by turn-taking”, said P2. If they did not get to the end of the text in
class, they could finish the translation at home. They liked doing it that way, because they got
instant feedback, which, according to literature, significantly enhances motivation (Ryan &
Deci, 2017).
P1's motivation was rather instrumental: “I am inclined to do business or legal translations, because they pay
better than literary translations. That is why these classes are so motivating for me”. As P2
pointed out, boring tasks sometimes resulted in demotivation in his case, but if the texts were
enjoyable, the translation itself was joyful for him. “That was the main reason I chose this
specialization”, he explained. “And the teachers themselves can be motivating, too, although
knowing how little they earn is rather demotivating”. For P3 it was difficult to find three
motivating elements in his classes. His main motivation to study translation was instrumental:
“I want to become a professional translator and make money with it”, he said. After some
thinking, he found one of his teachers motivating: “I really do not like to miss out on his
classes. He has great knowledge, an inspiring personality and he is fair. His opinion is
important to me”. However, he mentioned another teacher who taught a very difficult class. “I
cannot stand this teacher. He literally irritates me. He also failed me in his subject, which was
not a problem, because he was right. But… he knows what he teaches. In the end, I decided I
had to forget about my negative feelings towards him, and learn from him as much as I can.
Once we even had a great talk. So it is not the teacher’s personality which is important. It is
his knowledge”. P3 listed one more thing, which he – in contrast with P1 and P2 –
found neutral concerning translation classes. The texts for him were not that important. “The
topic is given. It is not the teacher’s mistake, if the text is boring.”
Analyzing the answers to my questions, I grouped what stood out as motivating, neutral or demotivating for my interviewees in their studies (Table 12). As is quite evident, the participants had experienced just a few neutral and hardly any demotivating factors so far. In some cases, because of its nature, the same factor could turn out to be motivating, neutral or demotivating (e.g., an interesting text is motivating, whereas a boring one is rather demotivating or neutral).
Table 12
Motivating, neutral and demotivating factors for the interviewees
Background: family background √; language competence
Translation as a skill: previous experience in translation √
Course content: tasks / texts √ √ √; teachers √ √; opportunity to negotiate √ √
Feedback and assessment: personal feedback √; lack of feedback √; good grades √
The interview participants thought their expectations had been met in translation classes. At the end I asked each of them to rate the programme and themselves, as well. On a scale of 1 to 5, P1 and P2 awarded 5 points to the classes and to their teachers, and P3 gave 4 points; they put themselves, or rather the knowledge and skills they had learnt, between 3 and 4. They said they would not change anything concerning the course; however, P2 would appreciate one more class on cultural topics.
3.6 Summary of findings
The first two phases of the study aimed to examine a seemingly well-researched topic: motivation in learning English as a second language. However, not one study in the rich reservoir of previous research dealt with motivation in translation studies as a specialization at BA level. To start examining this many-faceted phenomenon, I chose two closely related methods, and I think this turned out to be beneficial. What was written in the compositions (3.4.2, Table 11) was reinforced in the interviews (3.5.4, Table 12), which, at the same time, offered a great deal of new information concerning the background, the language competence, and other factors that lie behind a choice of this nature.
Knowing that the research involving a single group of eight students cannot be
regarded as representative of all BA students in the translation program, it is easy to identify
its shortcomings. What surfaced as a neutral factor in this study (e.g., family background) can
easily turn out to be a motivating one with a larger sample (e. g., in families, where both
parents speak foreign languages or make their living using them) or even demotivating (in
case of students with a disadvantaged family background). However, the study pointed out several factors that have to be taken into consideration when examining BA students' motivation in translation studies. It highlighted those which motivate students to achieve better results in what they do, and also those aspects which have to be changed. The students'
answers underlined the importance of language proficiency, the course content, the teachers’
personality and knowledge, the assignments they did, the amount they practiced, the way their
work was assessed and the feedback they got. The interviews proved the importance of
mastery motivation (“the simple fact that I can do my assignments is motivating for me”; “the
teachers warned us that translating texts into English would be difficult, but I see I can do it
and it makes me satisfied”; “it develops my vocabulary, so I can make better translations”).
Instrumental motivation (“It seemed I would be able to make money with it”; “I want to do it
because it pays better than literary translation”) also got significant emphasis. It also turned
out that the effect of the background depended on different, seemingly unrelated factors, for example the politically determined educational system of the period when the parents of the participating students went to school: they did not have freedom of choice in learning foreign languages, which was demotivating for them. Now their children have a wide range of foreign languages to choose from, an opportunity which is motivating in itself.
The results of these two phases were helpful in the design of the third data collection phase and its instrument, a questionnaire used with a bigger sample. The findings highlighted the areas to be included and identified the questions which should be studied in more specific ways, for example questions concerning family background (e.g., the parents' level of education and use of a foreign language); the aspects which emerged in the course of the interviews (concerning translation instruments, for example) also had to be taken into consideration.
3.7.2 Piloting
After writing the instructions and giving the necessary examples, the questionnaire was ready
for piloting in the spring semester of 2016-2017. The plan was to involve four sophomore BA
students and two professors for feedback on how the questionnaire worked. I recruited the
students through their teachers again. They got the questionnaire electronically, so they could
comment on it, and after analyzing what they wrote we also discussed some points on Skype.
In their comments, the students identified topics (e.g., in-class activities, translation tools,
assignment and feedback related questions) which were included in different forms more than
once. They also measured how much time it took to fill in the questionnaire: between 25 and 40 minutes, so I assumed that after removing the unnecessary questions and items it would not take more than the suggested 30 minutes (Dörnyei, 2010a; Frary, 1996).
After making the changes I thought necessary, I asked two of my professors for their feedback. One of them was the head of the doctoral school (T1), an applied linguist with considerable knowledge in the field. The other one (T2) was also an applied linguist with experience in translation, who taught in the programme. They read the second draft of the questionnaire and gave invaluable tips on what scales or question types to use and how to re-word questions to make the questionnaire more to-the-point and user-friendly (see examples of the changes in Table 13).
T1 identified a few “double-barreled” and redundant items, which I removed, and suggested that I “stretch” respondents, i.e., encourage them to give reasons for their answers. She also pointed out topics to include, e.g., questions concerning the other languages the respondents had studied (which languages, and for how long). Following their advice, I changed the wording of quite a few questions and changed some question types (instead of numerical ranking I used Likert scales, and some open-ended questions were replaced by checklists, etc.). The greatest alteration was suggested by T2. She encouraged me to include a part on autonomy, which, according to the literature, goes hand-in-hand with motivation (Benson, 2007; Christison & Murray, 2014; Csizér & Kormos, 2009; Dörnyei & Ushioda, 2009; M. Lamb, 2011; Little, 2007; Murray, Gao, & Lamb, 2011; Nikolov, 2000; Nunan, 1997; Reinders & Lázaro, 2011; Sade, 2011; Ushioda, 2011), and is an important aspect of translation as a profession or an activity (Baer & Koby, 2003; Benson, 2007, 2008b, 2011; Gile, 2009; Kiraly, 1995; Klaudy, 1997; La Ganza, 2008; Venuti, 2013). It made a lot of sense, so I included questions on autonomy, too.
The process resulted in an eight-page questionnaire with five parts comprising 36 questions: (1) Language competence (5 questions); (2) Translation as a specialization (6 questions); (3) The content of the courses (6 questions); (4) Learner autonomy (10 questions); and (5) Feedback and assessment (9 questions). It became a bit longer than the four to six pages suggested in the literature (Dörnyei, 2010a), which can be attributed to two reasons: (1) the added section on autonomy and (2) the explanatory tables, e.g., a CEFR scale, which the questionnaire includes.
Table 13
Examples for alterations in the questionnaire after piloting
Original question | Reworded question | Type of change
Why did you choose translation as a specialization? | Why did you choose translation as a specialization? List three reasons. | reworded the question
Do you have any experience in translation? If so, please specify it by circling the answers that fit you most. | Did you have any experience in translation before you began studying translation at university? If so, please specify it by circling the answers that fit you most. | reworded the question
What do you find the most difficult when translating a text? Rank the activities from 1 (the easiest) to 10 (the most difficult). | Please, mark on a scale of 1 to 4 how difficult you find the listed activities. Circle the answers that fit you most. (1 – the easiest; 4 – the most difficult) | reworded the question and changed the question type
Is the number of the classes enough to improve your skills? | Is the number of the classes provided in the programme enough to improve your skills? | reworded the question
How often do you get feedback on what you do in classes / out of class? | How often do you get feedback on what you do in classes? | turned a two-item question into two one-item questions
How do you treat your mistakes made in tests? | How do you benefit from the evaluation received for your exam tasks? | reworded the question
3.7.3 Participants
The questionnaire was aimed at BA students studying in the translation specialization. In the fall semester of 2017 there were 21 students in the programme: eleven in the first and ten in the second year of their translation studies. As the return rate was low, the questionnaire was also sent to the sophomore (first-year translation) students in the 2018 fall semester. In the first round, 14 of the 21 students returned fully filled-in questionnaires, whereas in the second round ten of the eleven students sent it back, resulting in 24 filled-in questionnaires altogether.
3.7.4 Procedures
The paper-and-pen questionnaires were administered in translation studies classes by the teacher, and the students who were present filled them in. The first round took place in the last class of the fall semester of 2017. Due to the low return rate, the procedure was repeated once with a different group at the beginning of the fall semester of 2018. This time only one of the eleven students did not return the questionnaire. There was no allocated time limit; the filling-in process took approximately 30 minutes.
3.7.5 Results and discussion
A) Language competence
The first part of the questionnaire addressed the issue of language competence, focusing on how long the respondents had studied English before entering university; what their level of English language proficiency was on the CEFR scales; what other languages they had learnt and for how long; and how they perceived their strengths and weaknesses concerning English.
The CEFR Companion Volume (2018, p. 114) states that “professional translators are usually operating at a level well above C2”. From the curriculum of the program we know that passing the proficiency exam is a basic requirement for taking up translation as a specialization; therefore, it is reasonable to assume that the admitted students are at C1 level.
As the length of the program is four semesters, this, theoretically, should be a solid base, as
two years should be enough to reach the required level. However, it turned out to be a very
optimistic assumption, as quite a high number of students in the programme are well under
C1, as the findings for the relevant part of the questionnaire and the interviews with their
teachers will show.
Answers to the second question revealed that the respondents, before being admitted to university, had studied English for 3 to 13 years (Table 14). Despite the fact that a foreign language is compulsory for all Hungarian students from grade 4 (age 10), very few students indicated that they had started learning English at a young age. The range of years suggests large differences in their knowledge, especially as the students have only one year to reach the desired C1 level at the university, which is the prerequisite for getting into the translation program.
Table 14
Number of years learning EFL before admission to university (N=24)
Number of years: 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13
Number of students: 1 | 8 | 1 | 2 | 1 | 2 | 3 | 1 | 1 | 3 | 1
Concerning their self-perceived English language proficiency level (Figure 2; Table 15), almost half of the students (46%) identified themselves as C1-level users, whereas six (25%) marked B2 and seven (29%) specified C2 level. Interestingly, the only student who had studied English for three years marked C2 as his/her proficiency level, similarly to 50% of those who had learnt English for four years.
Figure 2
The respondents' perceived English language proficiency level (N=24): C1 (11), C2 (7), B2 (6)
Table 15
CEFR global scale descriptors for B2, C1 and C2 levels (Council of Europe, 2001, p. 24)
Level | Characteristics
B2 | Can understand the main ideas of complex text on both concrete and abstract topics, including technical discussions in his/her field of specialisation. Can interact with a degree of fluency and spontaneity that makes regular interaction with native speakers quite possible without strain for either party. Can produce clear, detailed text on a wide range of subjects and explain a viewpoint on a topical issue giving the advantages and disadvantages of various options.
C1 | Can understand a wide range of demanding, longer texts, and recognise implicit meaning. Can express him/herself fluently and spontaneously without much obvious searching for expressions. Can use language flexibly and effectively for social, academic and professional purposes. Can produce clear, well-structured, detailed text on complex subjects, showing controlled use of organisational patterns, connectors and cohesive devices.
In Hungarian public education some children start to learn a foreign language at kindergarten age; however, the number of children whose parents choose this is low, and no official data can be found (Medgyes & Nikolov, 2014). According to the Hungarian National Curriculum (NAT 2020), learning the first foreign language begins in grade 4 of primary school, but, depending on local curricula, it can start in the first grade if it is a special language class and the required conditions are met. That explains the differences in the length of the participants' foreign language studies (Table 14). The number of lessons also varies, depending on the type of class a child attends, which means that two students learning the chosen foreign language for the same number of years can arrive at different levels by the end of their studies, depending on the number of lessons they have had throughout the years. In the case of Hungarian secondary-school leavers this is a minimum of 984 lessons (three lessons a week for nine years, starting the language in the fourth grade of primary school). The average number of lessons is 1,363 at grammar schools and 1,240 at technical schools (Nikolov, 2011, p. 1051). The typical grammar-school student also tends to learn a second foreign language from age 15, usually in two lessons a week.
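As a back-of-the-envelope check of the 984-lesson minimum cited above (added here purely for illustration, and assuming only the figures already given), the number implies roughly 36 teaching weeks per school year, which is in line with the length of a typical Hungarian school year:

$$\frac{984\ \text{lessons}}{3\ \text{lessons/week} \times 9\ \text{years}} \approx 36.4\ \text{weeks per year}$$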
If we look at Table 14 again, it is easy to see that English was the second foreign language for many respondents, more precisely for those who had learnt it for three, four or five years, presumably having taken it up at secondary school. This assumption was reinforced by the answers given to the question about the other foreign languages learnt by the participants: English was the first foreign language for 14 students and the second foreign language for ten participants (Figure 3).
Figure 3
The ratio of English selected as FL1 or FL2 (N=24): FL1 (14), FL2 (10)
Each of the 24 students had learnt at least one other foreign language, typically German (19), followed by Italian (3), French (2), Latin (1) and Japanese (1). Only two of the participants had learnt a third foreign language: Spanish (1) and Portuguese (1) (Figure 4). Ten respondents had studied German as their first and nine as their second foreign language; however, when applying to university, they dropped it in favour of English, the most frequently learnt foreign language in Hungary. According to a recent study, English is chosen by 71% of secondary school students as their first foreign language, followed by German, chosen by 28%, while only 1% learn French (SZIE, 2019, p. 8). The same ratio was 50-50 percent twenty years ago (Csapó, 2001). This trend is supported by the findings of this small-scale study: about half of the participating students gave up German when they majored in a foreign language – typically English – at university.
Figure 4
Other foreign languages learnt by the participants: German (19), Italian (3), French (2), Latin (1), Japanese (1), Spanish (1), Portuguese (1)
As Figure 5 shows, 21 out of 24 participants learnt two foreign languages, including English,
their major; two students learnt three languages, whereas one claimed to have studied four.
Figure 5
Number of foreign languages learnt (N=24): two FLs (21), three FLs (2), four FLs (1)
Table 16 shows data on the number of years the 24 students devoted to the languages they chose to learn as a second or third foreign language. It is evident that both the number of years participants spent learning a third foreign language and the number of students doing so are low.
Table 16
The number of years devoted to foreign language learning
Language | Years per learner: 1 | 1.5 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
German | - | - | - | - | 3 | 6 | - | 1 | 3 | - | 1 | 1 | 4
Italian | - | - | - | - | 3 | - | - | - | - | - | - | - | -
French | - | 1 | - | 1 | - | - | - | - | - | - | - | - | -
Japanese | - | 1 | - | - | - | - | - | - | - | - | - | - | -
Latin | - | - | - | - | 1 | - | - | - | - | - | - | - | -
Portuguese | 1 | - | - | - | - | - | - | - | - | - | - | - | -
Spanish | 1 | - | - | - | - | - | - | - | - | - | - | - | -
The fifth question in the first part of the questionnaire addressed the strengths and weaknesses
of the participants’ English language competence (Figure 6). The four skills were listed, plus
a few factors that may have a significant influence on their language education. The responses reflect that the participants perceive themselves as really good at the two skills which are most relevant if they want to become translators: the majority of the respondents marked reading and writing skills as their strengths. The fact that they are less good at listening, or that they are not satisfied with their pronunciation, is not significant in this respect: even a person with hearing or speaking difficulties may be a highly qualified and excellent translator. On the other hand, low proficiency in reading comprehension and writing skills, poor grammar or vocabulary can be problematic: being familiar with the grammar and vocabulary of both languages (the source language and the target language) is a basic requirement. Lexical variation and density are the best predictors of lexical richness (Laufer & Nation, 1995; Nation, 2006), so poor or inadequate vocabulary can result in low-quality translation.
The fact that the very same students, after admitting that they had average grammatical and vocabulary knowledge, considered themselves exceptionally good at translation in both directions seems contradictory. It suggests that either their self-assessed proficiency is higher than their actual level, or they overestimate their written production (in this case translation), relying on their ideal L2 selves (Adolphs et al., 2018; Dörnyei & Ushioda, 2009).
Figure 6
Students' self-perceived strengths and weaknesses (N=23)
Summarizing the findings about the participants' self-assessed language competences, we can state that the picture is diverse; studying in the same course or belonging to the same study group does not mean an equal level of knowledge and skills. Figure 6 shows what the different participants felt to be their strengths or weaknesses. This question was not answered by one student, so the number of respondents dropped from 24 to 23. Twenty-one of the 23 claimed they were good at reading, which is essential, especially when one wants to become a literary translator. Writing well is also important, and 19 respondents felt they were good at this skill. The figure also shows that they thought they were better at translating from English to Hungarian than at translating in the opposite direction. Although most of the students estimated their English language proficiency to be at C1 or C2 level on the CEFR scale, there were a few who admitted having poor grammar (11) or vocabulary (9), areas which have to be mastered if someone wants to become a translator. Research emphasizes that a student with this aim has to be linguistically well prepared (Doró, 2010, 2011), because the success of translation often depends on the translator's confidence in recognising the structures and layers of the foreign language they work with (Baker, 2011). Not being good at speaking or having pronunciation problems does not really matter in this case: these qualities do not really affect one's translation abilities.
B) Translation as a specialization
Table 17
Reasons for choosing translation as a specialization (N=22)
Reasons | Number of students
I would like / plan to be a translator. / I would like to do it in the future. / Future career plans. / I want to translate books (texts). / It offers a good job opportunity. | 14
It is practical / useful. | 3
When giving their answers, the students often repeated themselves, so two or three reasons with the same meaning were counted as one:
S1: I love the English language. / I love working with languages.
S2: It seemed interesting. / It seemed challenging. / I find it interesting.
S3: Fun / challenge.
S4: I am interested in it. / It can be fun.
S5: I was curious. / I was interested.
S6: I am interested in it. / It does not bore me.
S7: Interest in the field / personal hobby factor.
Quite a few (17) participants chose this specialization for practical reasons, either because they had plans for translation in their future careers (14) or because they found it useful in general (3). Some of them described it as a possible hobby (3) or an activity they were simply interested in (20), which corresponds to what the interviewees mentioned in the earlier phase of the research: they had to take up a specialization, so they opted for something they found interesting, hoping it “will not be as boring as the other subjects”.
The next question was aimed at the participants’ experience or, as it turned out, the
lack of experience in translation. The answers they gave clearly indicated that they were not
familiar with the true nature of the activity: the most serious experience they had was doing
translation in English classes, which usually meant translating sentences, paragraphs or short
texts (Figure 7). Four students did some interpreting activities for school or informal (family,
friendly) events, and four students did some “other” activities: one translated web articles as a
hobby, one translated subtitles of TV shows for fun, one occasionally helped out in projects,
and one did some translation at work.
Figure 7
Students' previous experience in translation (N=21): translation in English classes (18), translating / interpreting for events (4), other activities (4)
Next, they were asked to mark on a scale of 1 to 4 how easy they found translation (1 – very
easy; 4 – very difficult) and also to explain their choice. The answers revealed that the
majority (18) found translating from English into Hungarian (EN – HU) easier than
translating in the other direction (Figure 8).
Figure 8
Perceived difficulty of the EN – HU direction vs. the HU – EN direction (N=23): EN – HU easier (18), HU – EN easier (5)
When giving the reasons, the majority (15) of respondents referred to Hungarian being their
mother tongue, e. g., “I use my mother tongue more confidently.” “It is easier to translate a
text into your native language.” “It is easier for me to express myself in Hungarian”. Others
mentioned the richness of vocabulary knowledge (4), and two respondents explained their
choice by having “more experience in it” (Figure 9).
The five respondents who found translation from English to Hungarian more difficult
explained their choices by the “nuances and differences which make the proper translation
beyond reach”. Others typically attributed their difficulties to their inadequate vocabulary and
grammar in the target (Hungarian) language: “My Hungarian grammar is not that good.”
“Hungarian [grammatical] structures are more difficult for me.” “I can express myself better
in English than in Hungarian.”
Figure 9
Reasons for perceiving the EN – HU direction as easier (N=23): mother tongue / native language (15), richer vocabulary (4), experience (2), no reason given (2)
As for how difficult the respondents found translation from English to Hungarian (EN – HU), the majority (78%) marked it easy or relatively easy, and only four students found it difficult and one very difficult (Figure 10). These findings correspond with what they claimed earlier (see Figure 6): they perceived translation from English into Hungarian as their strength.
Figure 10
Perceived degree of difficulty of the EN – HU direction (N=23); key: 1 = very easy, 4 = very difficult. Ratings: 1 (2 students), 2 (16), 3 (4), 4 (1)
Translation from Hungarian to English (HU – EN) (Figure 11) was, as over half of the students (14, i.e., 60%) claimed, much more difficult than translation into their mother tongue. Only five of the 23 said that translating from Hungarian to English was the easier task, and four stated that there was no difference in the degree of difficulty between the two directions; both were easy for them.
Figure 11
Perceived difficulty of the HU – EN direction vs. the EN – HU direction (N=23): HU – EN more difficult (14), HU – EN easier (5), no difference (4)
Their explanations were similar in nature to those they gave to the previous question. Out of the 14 who claimed that the HU – EN direction was more difficult, six pointed out that the target language, English, is their second language and thus more difficult to use. Four students attributed their problems to their lack of experience, another four to insufficient knowledge of both grammar and vocabulary, and seven respondents mentioned equivalence and culture-specific problems. One student complained about the low number of Hungarian – English translation classes, and one pointed out not being able to use the necessary strategies, producing word-for-word translations instead (Figure 12).
Figure 12
Reasons why the HU – EN direction is more difficult (N=22)
Only two students found translation in this direction easy and three marked it relatively easy, giving vague, non-committal explanations: “I have been doing it for years.” “I got used to it.” “I cannot explain why, but I find it easy.” “If you know the correct word order and the suitable collocations all you have to do is to put them in the sentence.” Thirteen of the respondents rated this direction difficult, and one found it very difficult (Figure 13). The four students who said there was no difference in the degree of difficulty between the two directions gave reasons which did not tell us a lot: “I have been doing it for years” or “I got used to it” (for both directions), “It is not hard as Hungarian is my mother tongue” (for the E – H direction) and “It is not hard either because we have been reading so much in English” (for the H – E direction). There was one respondent who marked both directions rather difficult, giving them both 3 on a scale of 4, reasoning that the E – H direction was difficult because “my Hungarian grammar is not that good”, whereas the other direction (H – E) was difficult because of having “trouble on focusing the meaning of certain phrases and words”.
Comparing what the participants claimed about their strengths and weaknesses (see Figure 6), there is a mismatch: while 16 students regarded translation from Hungarian into English as their strength, in response to another question, 14 of the same respondents declared it difficult or very difficult. They were either inconsistent in their answers, or they thought they were good at it but realized it was also difficult. As is evident both from the research literature (Baker, 2011; Kenny, 2009; Nida, 2012) and the responses, the degree of perceived
difficulty greatly depends on the language one translates into (the target language): it is easier
to translate into one’s mother tongue than to a second (third, etc.) language. The level of
knowledge (concerning anything that is related to translation – vocabulary, grammar, set
phrases, etc.) was also listed as an important factor by the respondents, as well as the lack of
experience. The ways the students formulated their reasons suggest that they are not familiar
with the problems they have to overcome when translating, or even if they are familiar with
them, they do not know what the different concepts (e.g., genre characteristics, register,
addressing, equivalence at different levels, etc.) mean.
Figure 13
Degree of difficulty of translating from Hungarian into English (N=23); key: 1 = the easiest, 4 = the most difficult. Ratings: 1 (2 students), 2 (7), 3 (13), 4 (1)
Question 9 aimed to explore how difficult different activities and problems were perceived to be when translating a text (Figure 14). The respondents were asked to mark the difficulty level on a scale of 1 to 4 again. Based on the findings of the previous research phases, nine activities were listed; those activities, tasks and problems proved to be the most difficult which could be characterized as “translation-specific”, such as “expressing cultural, social and professional differences”, marked 1 by one, 2 by one, 3 by eleven and 4 by ten respondents. This was closely followed by “preserving genre and register characteristics” (1 by two, 2 by eight, 3 by twelve and 4 by one student) and finding the equivalents of idioms and other set lexical phrases (1 by two, 2 by five, 3 by thirteen and 4 by three students). These belong to the categories the relevant literature defines as translation knowledge and country and cultural knowledge (Risku et al., 2010), or, in the case of idioms, equivalence above word level (Baker, 2011).
Figure 14
Degree of difficulty of translation tasks / problems (N=23); key: 1 = very easy, 8 = very difficult. Items: spelling (1), addressing (2), word order (3), translating specific words (4), expressing formality (4), sentence structure (5), genre characteristics (6), set lexical phrases (7), cultural, social and professional differences (8)
Spelling was marked as the easiest task (1 by twelve, 2 by seven and 3 by four students). Word order (1 by seven, 2 by eleven and 3 by five students) and translating specific words (finding the equivalents at word level; 1 by two, 2 by sixteen and 3 by four students) were marked as rather easy tasks, presumably because they had been frequent in the participants' FL classes throughout their language learning experience. There were surprising findings as well. Addressing, which can be quite tricky in literary translations, was marked as the second easiest activity (1 by ten, 2 by seven and 3 by six students), whereas preserving formality was positioned on the border between “easy” and “not so difficult” (1 by three, 2 by sixteen and 3 by five students). Sentence structure, which is fixed in English and thus closely connected with word order, was marked as problematic (1 by two, 2 by thirteen, 3 by five and 4 by three students). This is in line with an earlier finding, where grammar was marked as a weakness by eleven students (Figure 6).
The last two questions in this part of the questionnaire inquired about the future plans
of translation specialization students. First, they were asked if they wanted to continue their
studies at a higher (MA) level. Thirteen of the 23 students had such plans, seven said no, and
three were undecided (Figure 15).
Figure 15
Plans concerning further studies at MA level (N = 23)
(Yes – 13; No – 7; Haven’t decided – 3)
Eight participants wanted to become professional translators. Two students simply enjoyed
the activity, whereas two others wanted to improve their translation skills and knowledge.
Seven of the remaining ten respondents had entirely different plans or did not want to move to
another city, which they would have had to do if they had opted to continue their studies, as the
University of Pécs does not offer an MA in translation studies.
When answering the question about the ways they thought they would be able to use
their translation skills after graduation, over half of the respondents (14) stated that they
wanted to use them as translators, or, at least, to make translation an essential part of their
future work. The others wanted to continue it at a hobby level, mostly for fun (Figure 16).
Figure 16
Plans for using the acquired translation skills after graduation (N = 22)
(Categories: as a translator – 14; as a hobby, for fun; at social events; in further studies)
This section focused on four aspects: the respondents’ reasons for choosing translation studies
as a specialization at BA level, their previous experiences with translation, their attitudes
towards the tasks and activities they were required to do in their different classes, and their
future plans. The answers revealed that the majority (63%) of the participants chose
translation as a specialization because they had future plans for it; they wanted to
become professional translators, although they were inexperienced in the field. They had done
translation tasks at school or pursued translation activities as a hobby, for their own
pleasure.
The participants perceived translating from English into Hungarian as easier than
translating in the other direction, claiming that it was easier for them to express themselves in their
mother tongue. The other direction was more difficult for them, in their view, partly because
of their insufficient knowledge of grammar and vocabulary. As translation is about making
choices and decisions at different levels (Baker, 2011; Klaudy, 1997; Wilss, 1998), there are
other aspects which have to be taken into consideration: some features of the texts are rooted
in the culture and the society of the speakers of the two languages. Dealing with genre
characteristics, equivalence above word level, including set lexical phrases, and texts with
cultural, social and professional references were perceived as difficult by the majority of the
participating students.
As for their future plans, 56% of the respondents want to continue studying
translation at MA level, even if it means that they have to move to another city. The fact that
over half of the students intend to invest in their special field of expertise indicates that they
take their studies seriously.
The third part of the questionnaire with its set of open-ended questions was more demanding
for the participants than the previous two. Here, except for one question, they were required to
give extended answers, which turned out to be quite laconic. The six questions aimed to
elicit information on the content of the courses, including the students’ expectations, the
number of classes provided in the program, and the activities and tasks the respondents found
useful or less useful for developing their translation competence.
Concerning their expectations (Figure 17), 50% of the students thought they would
learn how to translate different types of texts (legal, political, business, literary), or how to
become successful translators by practicing a lot. Also, almost half (45%) expected to learn
useful skills, techniques and strategies, 18% wanted to expand their vocabulary, including
specific words and expressions, 13% hoped to learn theory or do literary studies. Only one
student thought that there would be an IT component in the curriculum, one student expected
that translation studies would improve their style, and one respondent expected to learn a lot
of grammar. Each student could list as many expectations as they wanted, but the typical
answer contained only one or two. As Figure 17 shows, the 22 respondents listed 31
expectations.
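Since the percentages and the total of 31 expectations refer to the same 22 respondents, a quick consistency check is possible. The short sketch below is not part of the original study; it assumes each percentage was computed simply as a count divided by 22, and the category labels are shortened paraphrases of the answers listed above.

# Consistency check (illustrative sketch): the counts implied by the reported
# percentages, plus the three single mentions, should add up to the 31
# expectations shown in Figure 17.
n_respondents = 22

implied_counts = {
    "translating different text types / becoming a translator": round(0.50 * n_respondents),  # 11
    "useful skills, techniques and strategies": round(0.45 * n_respondents),                   # 10
    "expanding vocabulary": round(0.18 * n_respondents),                                       # 4
    "theory / literary studies": round(0.13 * n_respondents),                                  # 3
    "IT component": 1,
    "improving style": 1,
    "a lot of grammar": 1,
}

print(sum(implied_counts.values()))  # 31, matching the total reported in Figure 17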
Figure 17
Students’ expectations concerning course content (N=22)
When answering the question on how the curriculum met their expectations, half of the
respondents said that it was “just as expected”, or that they “(almost) got what they wanted”. Six
students had mixed feelings about what they learnt, meaning that their expectations were only
partly met, because they hoped to get more feedback, more meaningful tasks and also to have
more classes. Five students, although not totally satisfied, stated without elaboration that
they “learnt a lot of useful / practical things”.
Twenty-two of the 23 respondents found the number of classes provided in the
program enough to improve their translation skills; however, they would have liked to see
them structured differently. They would include more practice, more time for discussion,
more classes on strategies, techniques and cross-cultural topics. To get better chances to
develop, they would like to have more frequent, better and more detailed feedback, and they
would welcome more focus on their individual development. Only two students said that the
program “is good as it is”, and one opted for more tests.
In response to questions 15 and 16, the participants named the activities and tasks they
found the most and least useful for developing their translation skills and they added their
explanations (Table 18). Discussing translations, translation problems and mistakes, and
doing translation tasks turned out to be the most useful activities, both stated by nine students.
Concerning the translation tasks, the respondents referred to the texts as well. Translating
different genres, contemporary topics, special texts and longer pieces was regarded as useful
by five students, watching and translating TV shows by two, and home assignments by another
two students. The other useful activities mentioned included reading in English, vocabulary
tasks, learning about software, history of language and cultural studies. One student did not
name any task or activity as ‘most useful’.
Table 18
Activities/tasks marked as ‘most useful’ for developing translation skills and the explanations (N=22)
Activity / task – Number of students – Explanations
The twelve activities / tasks the respondents found the least useful included: meaningless
tasks (they did not specify them), texts with boring or irrelevant topics (sports, gastronomy
and history), doing too easy translations, translating in class (instead of discussing home
assignments), reading, comparing articles, and reading aloud one’s own translation.
Translating with no or insufficient feedback was also mentioned as a negative component,
although we know from the literature that “even negative feedback can be useful, provided it is
offered with support” (Ryan & Deci, 2017, p. 148). Seven students said there were no
activities which would not be useful; they could learn from everything they did in their
classes. Three respondents did not answer this question, hopefully because they, too, were
satisfied with everything (Table 19).
Table 19
Activities/tasks marked as ‘least useful’ for developing translation skills and the explanations (N=20)
Activity / task – Number of students – Explanations
all activities / tasks are useful – 7 – “we can learn from everything”
meaningless tasks – 2 – demotivating
boring / irrelevant texts – 3 – demotivating
reading out one’s translation – 1 – “the others might think it’s my best”
The findings of the third part of the questionnaire are in accordance with the literature
(Alderson, 2000; Alderson, Clapham, & Wall, 1995; Alderson, Figueras, & Kuijper, 2006;
Ryan & Deci, 2017) on giving students meaningful tasks and assignments, texts which are
interesting for them, frequent and sufficient feedback and discussions which help solve
potential translation problems and avoid the mistakes translation students are inclined to
make. If we examine the listed activities and tasks in terms of motives, we can find examples
of intrinsic motivation, when something – an interesting text or a reading task – was found
valuable in itself; and of mastery motivation, in the case of tasks students found enjoyment in
and felt in control of (watching TV shows in English and translating “in head”). However, the
motivation behind the listed tasks was mostly instrumental: “practice makes us better”, or “the
more we practice, the better translators we will become”, suggesting that anything they did
should prepare them for the job.
D) Feedback and assessment
The last part of the questionnaire addressed feedback and assessment in translation studies
classes. As has been shown, both play crucial roles in student development and in student
motivation. Earlier findings and the literature (Heitzmann, 2014; Wilkinson & Birmingham,
2003) implied that meaningful and frequent feedback can be conducive to students’
development, as can a good grade. Insufficient feedback or a lack of feedback, on the other
hand, can result in demotivation. Although feedback and assessment are traditionally regarded
as teachers’ responsibilities, peer assessment, though used rarely, has also become part of
the evaluation process.
The first question in this section was aimed at the frequency of different types of
evaluation in TS classes. The students had to mark their answers on a five-point (0-4) Likert
scale (0 – never; 4 – the most frequent). As it turned out, most of the evaluation was still done
by the teacher, but participants were also invited to express their opinion regarding the work
of their fellow students (Table 20).
Table 20
Frequency of different types of evaluation in TS classes (N=23)
Evaluation – Frequency (0, 1, 2, 3, 4) – Mean
Comparing the means, it is not difficult to see that teacher evaluation happens
most often (21 students marked it so): nearly three times more often than student (peer)
evaluation, and translations are only very rarely assessed by someone else; although asked,
the students who marked this option did not clarify who this person was.
As a next step, the students were asked how useful they found the different types of
evaluation concerning their development. Their answers were given on a five-point Likert
scale (0 – not useful at all; 4 – the most useful) (Table 21).
The mean scores concerning this question show that teacher evaluation, which is also
the most frequently applied type, is perceived as the most useful. The relatively high mean
of the usefulness of peer evaluation indicates that this kind of practice should get a more
prominent role in the evaluation of student translations. Involving experienced translators,
“who have proved their translation skills and reliability over time have all the knowledge and
skills necessary for correcting and evaluating the work of others” (Robin, 2016, p. 46), even if
only occasionally, might be a helpful technique. This would give new dimensions to the
students’ mastery motivation, as well.
Table 21
Usefulness of the different types of evaluation (N = 23)
Evaluation – 0 – 1 – 2 – 3 – 4 – Mean
by the teacher – 0 – 0 – 0 – 3 – 20 – 3.86
by peers – 2 – 0 – 8 – 7 – 6 – 2.65
by someone else – 16 – 0 – 4 – 3 – 0 – 0.73
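To make the comparison easier to follow, the sketch below (not part of the original study) shows how the mean scores in Table 21 can be reproduced as weighted averages of the 0–4 Likert ratings, using the frequency counts reported above; the small differences from the printed values suggest that the reported means were truncated rather than rounded to two decimals.

# Illustrative sketch: mean usefulness scores as weighted averages of the
# 0-4 Likert ratings, using the frequency counts from Table 21 (N = 23).
ratings = [0, 1, 2, 3, 4]

frequency_counts = {
    "by the teacher": [0, 0, 0, 3, 20],
    "by peers": [2, 0, 8, 7, 6],
    "by someone else": [16, 0, 4, 3, 0],
}

for evaluator, counts in frequency_counts.items():
    mean = sum(r * c for r, c in zip(ratings, counts)) / sum(counts)
    # Prints 3.87, 2.65 and 0.74; truncating instead of rounding gives the
    # values reported in Table 21 (3.86, 2.65 and 0.73).
    print(f"{evaluator}: {mean:.2f}")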
The third question concerned the feedback students got on their progress (Figures 18-19). As
the figures show, the participants claimed to get regular feedback on their in-class tasks
(Figure 18); only one student claimed they got feedback rarely. The situation is a bit different
with the feedback on home assignments (Figure 19), which is not that frequent. Although
nearly half of the respondents (10) claimed they got feedback every time, the responses of the
other students ranged widely between hardly ever and frequently. Either they do not
get home assignments in every class, or they get feedback at different frequencies, or they do
not perceive what is meant by feedback in the same way.
Figures 18-19
Frequency of feedback on work in class (N=23) and home assignments (N=22)
(Response categories included: every class; almost every class; weekly; monthly; frequently; rarely; hardly ever)
When answering the question how the feedback they got helped their development, the 23
participants gave a wide range of responses, a total of 14, most of them by multiple students:
points out my weaknesses and mistakes so I can learn from them (6)
helps me understand my mistakes, work on them and avoid them in the future (6)
offers solutions and answers (4)
shows me what to improve (3)
shows me ways to understand and correct my mistakes (2)
helps a lot (in general) (2)
I can improve my style (2)
I learn how to do my translations better (2)
other: boosts my experience (1), strengthens my self-consciousness (1), provides
guiding points and comparison (1), makes me experienced enough to realize what I did wrong
(1), helps my development (1) and answers my questions (1).
As we can see, feedback, especially when it is frequent and meaningful, can help learners in
multiple ways and at various levels, and learners tend to be aware of the role feedback can play.
Besides offering them practical guidance to become better at what they do, motivating
feedback also helps them overcome difficulties they face as individuals, which most often
cannot be expressed in the number and the types of their mistakes.
Motivation, as the literature review showed, is the primary impetus to initiate L2
learning; as the driving force to sustain the learning process, it can make up for low language
learning aptitude and learning conditions (Dörnyei & Ryan, 2015). The last few questions
aimed to find out how all this worked in TS classes: what students found the most motivating
and the most demotivating in their studies.
As Figure 20 shows, they listed several (nine) factors which they defined as
motivating, but nothing boosted their motivation as much as meaningful, helpful, and frequent
feedback. The other motivating factors were connected to course content (interesting,
meaningful texts, in-class activities, assignments), to the learning environment, including the
teacher and the other students (peers). The participants also liked to be appreciated, which
shows the importance of the objective, fair assessment of their work, and they, although
very subtly, referred to the motivating force of autonomy (being able to make good choices,
finding good solutions for translation problems and working on their mistakes). All this shows
that different students are motivated in different ways. Some of them are driven by extrinsic
motives, including other persons’ attitudes (a motivated teacher) or value judgements (good
feedback, good grades), others by intrinsic ones (interest in the texts they translate, finding
interesting solutions). Mastery motivation appears in the form of home assignments and also in
seeing and appreciating the progress they make.
Figure 20
Motivating factors in TS classes (N=23)
The students’ actual answers to the question of what they found the most motivating in their
classes, grouped according to types of motivation, included:
Intrinsic:
o “Interesting materials.”
o “Translating texts that I am interested in.”
o “Texts.”
o “Interesting texts with unique problems.”
Extrinsic:
o “Regular feedback.” “Frequent feedback.” “Getting feedback.” “Meaningful
feedback.” “Helpful feedback.”
o “Sharing thoughts with each other.”
o “When the teacher is motivated.”
o “To be appreciated.”
o “A grade 5.” “Good grades.”
Mastery:
o “The peculiar solutions I can come up with.”
o “To see my own progress.”
o “The home assignments.”
o “When my translation is close to perfect.”
Instrumental:
o “That I am practising for something I want to do in the future for a living.”
The demotivating factors (Figure 21) were almost as many as the motivating ones: eight were listed,
with boring and irrelevant texts topping the list (intrinsic motives), followed by things which
earlier were defined as student responsibilities, including the quality of their translation
assignments and the mistakes they made (mastery motives). Other demotivating factors were
connected to the teacher (boring or demotivated person who could humiliate and judge
students who gave wrong answers and translations without helpful explanation; extrinsic
motives), to fellow (irritating, annoying) students and the lack of feedback (extrinsic
motives). There were only three participants who found the classes motivating in every
respect. One respondent disliked classes starting early in the morning, but did not specify
what early meant.
Figure 21
Demotivating factors in TS classes (N=23)
The students’ actual answers to the question of what they found demotivating in their
classes, grouped according to types of motivation, included:
Intrinsic:
o “Translating texts I am not so much interested in / that are not relevant.”
o “Boring and long texts.”
o “When I have no interest in the topic.”
o “When the teacher is boring.”
Extrinsic:
o “When there is no feedback at all.”
o “When the feedback is not useful at all.”
o “Tests.”
o “Humiliation because of my mistakes.”
o “When the tutor does not warn you that you do something completely wrong.”
o “If you are told to be wrong, but there is no explanation.”
o “Too many classmates.”
o “Working with fellow students who have not passed the proficiency exam yet.”
o “8 a.m. is too early for a class. It is very demotivating!”
Mastery:
o “When I cannot grasp what I should change to be better.”
o “The stupid mistakes I make.”
o “When I completely mistranslate something.”
In the next question, the students were asked if they had ever lost interest in translating. It was
answered by seventeen students: twelve said ‘Yes’ and five responded ‘No’. The
reasons given for ‘Yes’ could be grouped into four categories:
quality of tasks and assignments (too long, too difficult, too frequent, boring,
irrelevant) – five students;
inappropriate feedback (infrequent, no opportunity to discuss problems and mistakes)
– three students;
translator as a profession (low prestige of the activity, low payment, difficult aspects
of the job – translating 20 pages a day) – four students;
students’ own performance and attitudes towards the activity (“I did not think it was
worth it”) – five studens.
All these points reinforce how important it is to emphasize and build on those things which
the respondents defined as ‘motivating’, and to include them in the curriculum, the syllabi and the
assessment system; otherwise students can become demotivated, as was documented, at least
once, in about half of the participants’ answers. Although they could overcome the
occasional loss of their interest, when demotivation turns into a tendency, it may result in
amotivation and complete refusal in the end (Deci & Ryan, 1985; Dörnyei & Ushioda, 2011).
Twenty-one of the 23 respondents would welcome changes in their TS classes,
concerning:
the organization and management of classes, including their structure (should start
later, should be more time for evaluation) – five students;
the number of practical classes (there should be more) – six students;
the quality of texts (more genre-specific, more technical, more interesting, definitely
shorter) – six students;
the number of tests and assignments (there should be fewer home assignments and
more in-class assignments) – one student;
the feedback (should be more frequent, more detailed) – two students;
the assessment (getting grades based on in-class work and home assignments) – one
student;
the methodology of instruction (teachers should give handouts, offer more visuals) –
two students;
the terms of application and acceptance to the program (“Only those people should be
accepted who want to learn translation”) – one student;
the learning environment (more motivating, more challenging, more interesting) –
one student;
translation as a profession (more information about the job itself and about career
opportunities) – two students.
3.7.6 Summary
The 5-part questionnaire included 26 questions on motivation and some interrelated aspects,
including language background, translation skills, evaluation, feedback, materials, procedures,
etc., its parts aimed at different aspects of learning English and of translation as a specialization
program. The first part examined the respondents’ language competence. The answers
showed that the participants, although they belonged to the same study groups, claimed to be
at different levels (between B2 and C2) concerning their English proficiency. The reason for
it seems to be twofold: although they were offered the CEFR scale to define their English
language proficiency level, based on their language use, some of them were overconfident and
put themselves well above their actual level. The other reason can be attributed to the fact that
there was a considerable difference in the number of years the individual respondents had
devoted to studying English before entering university (ranging from 3 to 13). It must be
noted that quite a few of them learnt English as a second foreign language at secondary
school, which could partly explain why the picture the questionnaire revealed is so diverse.
The second part of the questionnaire included six questions on the motives for choosing
translation as a specialization, the experience respondents had in the field, and how difficult
they found translation. The answers clearly show that most of them chose this program
because they wanted to become professional translators, although they had hardly any
knowledge of the profession and only very little experience concerning the activity.
The questions of the third part aimed to elicit participants’ evaluation of the course and
program content. The students could express their opinion on how their expectations were
met, what they found motivating and demotivating, useful or not very helpful in their classes.
The answers revealed mixed views and experiences; on average, 50% of the participants were
satisfied in every respect with what they learnt. Their list of what they missed or found
demotivating can be useful information for improving the curriculum of the translation
program, the content of the syllabi, and the way courses are delivered and the program is
implemented.
The fourth part of the questionnaire included questions on learner autonomy, to which
the next part of the dissertation is devoted. The fifth and last part addressed feedback and
assessment in translation studies classes, with a special focus on their effect on student
performance. As expected, both were crucial concerning motivation: if they were positive or
critical, but helpful, they were found to be overwhelmingly motivating, whereas negative
feedback impacted learning in unfavorable ways and its demotivating effect was also
documented.
The questionnaire was a good choice to elicit data on motivation: the questions
fulfilled their goals, although the simplicity and superficiality of the answers were
frustrating in some cases. There were examples of the so-called social desirability bias: some
respondents had a fairly good guess about what the desirable answer was and gave that,
even if it was not true (Dörnyei, 2010a). Overall, the findings, despite the low number of
respondents (nearly everyone in the programme) and the identified caveats, can contribute to
making translation training better.
3.8 Conclusions
instrumental motivation. Among the relatively many practical courses, there is none that
would teach students the competences described in the EMT model (EMT, 2009) of
translator competences (except linguistic and technological competences), which was offered for
adaptation by translator training institutions in Europe (Eszenyi, 2016; I. Horváth, 2016).
Part II
Chapter 4
Autonomy in language teaching and learning
4.1 Introduction
Interest in the role of autonomy in language teaching and learning has increased considerably
since the turn of the millennium. In terms of quantity, the literature published since 2000
exceeds the literature published in the 25 – 30 years prior to that date (Benson, 2006; 2008;
Borg & Al-Busaidi, 2012; Esch, 2009; Everhard, 2016; Gao & Lamb, 2011; Holec, 2008;
Ramos, 2006). The idea of autonomy in language learning is often framed as a learner-
centered focus. As a result, most theorists of autonomy in learning are concerned with
learners’ active participation in the day-to-day process of their learning, focusing on their
ability and willingness to seek out and to make choices independently. This participation is
seen as being both essential to the development of personal autonomy and beneficial to the
learning process itself (Benson, 2007; 2008; Littlewood, 1996).
The theory and practice of autonomy in language learning was first developed
systematically in the 1970s in the context of the Council of Europe’s Modern Languages Project,
and since the early 1980s, autonomy has become an increasingly important concept in foreign
language education (Benson, 2007). Since its introduction into the field of language
education, it has been moving across time and space (Lamb & Murray, 2018, p. 1).
“To be autonomous means acting in accord with one’s reflective considerations…
autonomous actions are those that can be self-endorsed and for which one takes responsibility”
(Ryan & Deci, 2017, p. 51). The first section on autonomy (4.2) aims to give an overview of
learner autonomy, which in simple words means the sense of volition and choice in one’s
foreign language studies.
4.2 Learner autonomy
The history of autonomy research in language education started with Holec’s report in 1981,
where he gave a robust definition of learner autonomy. Most of the later definitions have been
based on or referred to what he stated then (See Table 22).
Table 22
Definitions of learner autonomy
Author (year, page) – Definition – Keywords
Holec (1981, p. 3) – “the ability to take charge of one’s own learning” – ability; taking charge of language learning
Candy (1991, p. 6) – “self-directed learning” – self-directed
Littlewood (1996, p. 428) – “an independent capacity to make and carry out choices which govern our actions; this capacity depends on two main components: ability and willingness” – capacity; making choices; carrying out choices; ability; willingness
The definitions in Table 22 show a remarkable degree of consensus with Holec’s (1981) idea
that learner autonomy involves learners taking control over their own learning. In recent
works this definition is often linked to the philosophical idea of personal autonomy (Benson,
2011; Breen & Mann, 1997), referring to people struggling for greater control over their lives.
If we look at the keywords in Table 22, we can see that the different definitions really
revolve around the same ideas, even if labeled differently. Most researchers call it either
ability or capacity to make choices and decisions, to take responsibility for determining
something, e.g., one’s objectives, progress, method and techniques of learning, the content of
learning, the pace and rhythm of learning and the evaluation of the learning process
(Everhard, 2016; Macaro, 1997). Pemberton, Li, Or & Pierson (1996) list various terms which
are mostly used synonymously with autonomy in the literature (self-instruction, distance learning,
individualized instruction, flexible learning, self-access learning, self-direction). The
definition has been constantly refined; authors who have returned to it from time to time have
added some extra features to it (Benson, 2007; 2009; 2011; Little, 1991; 2000).
Littlewood (1996), when examining the word capacity, breaks it down into two main
components of ability and willingness: an ability to make independent choices and the
willingness to exercise these choices. We can speak about autonomy only when
both components are present. Ability and willingness can themselves be divided into two
components: in the case of ability, possessing the knowledge which helps one make choices and
the skills to carry out those choices. Willingness depends on having both the motivation and the
confidence to take responsibility for the choices to be made. Littlewood calls this the “anatomy”
of autonomy (Littlewood, 1996, p. 427), and, quite evidently, the listed components echo
the keywords of the definitions by the different scholars (see Table 22).
In formal educational contexts, learner autonomy means reflective participation in
planning, implementing, monitoring and evaluating learning throughout the whole learning process.
However, its scope is always restricted by what learners can do in the language they study,
which means that it develops together with the learners’ knowledge as a target language user.
Concerning this development, Little (2009) distinguishes three guiding principles: (1) learner
involvement, (2) learner reflection, and (3) appropriate target language use. Benson (2011)
also concentrates on the underlying principles of autonomous language learning and the
different approaches to fostering learner autonomy, emphasizing that language learners
naturally tend to take control of their learning. Learners who lack autonomy are capable of
developing it, and autonomous language learning is more effective than non-autonomous
language learning.
As Littlewood (1996) analyzed the meaning of autonomy, he distinguished different
levels, putting low-level choices that control specific operations at the bottom of the hierarchy,
and high-level choices, which are responsible for controlling the overall activity at the top.
According to this model, autonomous language learning could be divided into phases,
depending on what learners are able and willing to do. Their progression can be manipulated
so that learners gradually increase the scope of their independent choices and become totally
autonomous in the end.
Some researchers emphasize that language learning is not limited to the formal
conditions we usually call classroom environment; it can take place at any time and in any
place, implying that learners’ willingness to use their language skills outside the classroom can be
crucial in terms of their second language development (Hyland, 2004). Out-of-class
learning, according to Benson (2001), is mostly self-instruction, where learners themselves
plan to improve their target language knowledge and search out the resources they need,
developing a learning process which is not only autonomous, but, at the same time, highly
purposeful and enjoyable. It can be assumed that a heightened level of self-determination will
increase motivation to learn both in and out of classroom, often resulting in learning activities
that happen beyond any formal course requirements or beyond directions given by an
instructor (M. Lamb, 2004). A “self-managing learner” is an autonomous learner, the “one
who is self-aware, capable of exercising choice in relation to needs, of taking an active self-
directing role in furthering his or her own learning and development” (Harrison, 2000, p.
315). The ultimate manifestation of self-directed learning requires no teacher at all; it
describes a situation in which the learner is responsible for everything: “in full autonomy
there is no need of a teacher or an institution” (L. Dickinson, 1987). This means that the
“teacher” has to refocus his or her teaching, directing it at supporting the development of learner
autonomy (T. Lamb, 2008).
Several attempts have been made to define learner autonomy, so we can only agree with
Nunan (1997): “autonomy is not an absolute concept” (p. 193). It has different levels, as
“most learners do not come into the learning situation with the knowledge and skills to
determine content and learning processes which will enable them to reach their objectives in
learning another language” (p. 201).
Nunan describes learner training and autonomy in a nine-step program, as a continuum
“from total dependence on the teacher to autonomy” (Nunan, 2003, p. 196), which can be
done by including a series of steps into the educational process. These steps are:
Some of the steps overlap; this is particularly true for steps 4 – 9, which focus on learning
processes and can be introduced alongside steps 1 – 3, which are more content-oriented.
Concerning autonomous learner behavior, Nunan reduced the nine steps to five
degrees or levels: (1) awareness, (2) involvement, (3) intervention, (4) creation and (5)
transcendence (See Table 23).
Table 23
Nunan’s degrees of autonomous learner behavior (Nunan, 1997, p. 200)
Degree / level – Learner action – Content – Process
As Table 23 shows, there is a long path to cover between “awareness” and “transcendence”,
and there are phases, especially at the beginning, where learners need the guidance of their
teachers, and an encouraging, autonomy-supporting classroom environment. The greater the
degree achieved, the more autonomous the students become.
Littlewood (1999) proposed a distinction between two levels of autonomy, which he
referred to as “self-regulation” (p. 75). The first level, proactive autonomy, regulates both the
direction of the activity and the activity itself. Here, the keywords are action words, implying
that learners are “able to take care of their own learning, determine their objectives, select
methods and techniques and evaluate what they have acquired” (Holec, 1981, p. 3). The
second level, reactive autonomy, regulates the activity once its direction has been set. It may
be a preliminary phase before the first level or a goal in its own right: a kind of autonomy
which, once a direction has been stated, helps learners to organize their resources in order to
reach their goal. This form of autonomy works similarly to or interacts with motivation: it
stimulates learners to learn, practice, collect information, or prepare for papers and tests on
their own initiative, without being pushed. This model is mirrored in Flannery’s (1994)
distinction between cooperative learning strategies (group-oriented form of reactive
autonomy) and collaborative working strategies (group-oriented form of proactive autonomy)
(p. 76).
Tassinari’s dynamic model of autonomy (Tassinari, 2012, pp. 24-28; Figure 22) is a
tool designed to support self-assessment and evaluation of learning competences, entailing
various dimensions and components, allowing learners to focus on their own needs and goals:
(1) cognitive and metacognitive component: cognitive and metacognitive knowledge,
awareness, learners’ beliefs;
(2) action-oriented component: skills, learning behaviors, decisions;
(3) affective and motivational component: feelings, emotions, willingness, motivation;
(4) social component: learning and negotiating with partners, advisors, tutors.
The model itself was developed on the basis of Holec (1981), Dickinson (1987), Little
(1991), Littlewood (1996; 1999) and Benson’s (2001) definitions of learner autonomy,
described in Table 22, referring to learner autonomy as a complex construct.
The components of the dynamic model are spheres of competences, skills and actions
expressed by verbs, which emphasize its action- and process-oriented character: structuring
knowledge, dealing with one’s feelings, motivating oneself, planning, choosing
materials and methods, completing tasks, monitoring, evaluating, co-operating and managing
one’s own learning.
Figure 22
Tassinari’s dynamic model of learner autonomy (Tassinari, 2012, p. 29)
Tassinari’s model is both structurally and functionally dynamic: structurally, because all
components are related to each other, and functionally, because learners can decide to enter
the model from any component and move freely from one component to another. Each
component of the model entails a set of descriptors, formulated as “can-do” statements, which
serve for the orientation of learners’ self-evaluation process, which is an important element of
autonomous behavior.
The model has been tested and validated with experts in different workshops,
including at the Université Nancy 2, France, and the Freie Universität Berlin, Germany, and is
currently used at the Centre for Independent Language Learning at the Freie Universität
Berlin for language advising (Tassinari, 2012, p. 24).
While curricula in language teaching are concerned with making general statements about
language learning, learning purpose and experience, and contain banks of learning items and
suggestions about how these might be used in class, syllabi are more localized. They deal
with what actually happens at classroom level, as teachers and learners apply a given
curriculum to their own situation, focusing on selection and grading of content (Candlin,
1984; Nunan, 1988).
Learner-centeredness and learner autonomy can be represented in the curriculum and
in the syllabus, as well. The learner-centered curriculum “is a collaborative effort between
teachers and learners, since learners are closely involved in the decision-making process
regarding the content of the curriculum and how it is taught” (Nunan, 1988, p. 2). This kind of
involvement increases the learners’ interest and motivation. It is also an effective way of
developing the learners’ learning skills by fostering a reflective attitude toward the learning
process (Candlin, 1984).
Flutter researched the ‘student perspective’ for more than two decades, and spoke
about students as “stakeholders whose opinions and ideas are sought and often acted upon”
(Flutter, 2006, p. 187). She also emphasized the link between student participation and
effective learning.
Fielding offered a four-level typology of learner involvement in designing classroom
processes: (1) students as data source; (2) students as active respondents; (3) students as co-
researchers; and (4) students as researchers. If student voice is taken into consideration, it
fosters not only students’ awareness of and interest in what they learn, but their autonomous
behavior as well.
“Capturing student voice” (Ahmadi & Hasani, 2018, p. 1) can be projected into
syllabus development in advantageous ways, where students appear as decision makers,
bringing about critical changes to the syllabus. In the realm of student voice initiatives, the
question of how to share power with students is the most challenging one. However,
negotiating with students and finding the right balance may result in more interesting classes,
greater student involvement, and can help students take responsibility for and control over their
own learning. Little (1995, p. 175) suggested that true negotiation in pedagogical dialogues
demands a symmetrical autonomy, that is “the development of autonomy in learners
presupposes the development of autonomy in teachers”.
Motivation has traditionally been described as an individual difference variable which plays a
crucial role in the success of learning. However, in recent research it has been acknowledged
that motivation is a basic characteristic of autonomous learning (Benson, 2007; Gao & Lamb,
2011), and several researchers argue that motivation, autonomy and identity are interrelated
(Dörnyei & Ushioda, 2009; M. Lamb, 2011; Murray, 2011; Murray, Gao, & Lamb, 2011;
Reinders & Lázaro, 2011; Ryan & Deci, 2017; Sade, 2011).
In language classrooms promoting autonomous learning processes, students are
encouraged to develop and express their identities through the language they are learning.
This identity perspective on motivation suggests close connections between motivation and
autonomy, particularly what Little (2007) refers to as language learner autonomy. By enabling
students to speak as themselves in the target language, to negotiate, to struggle, to participate,
to share ideas and to evaluate them, we create an environment which promotes autonomy
(Ushioda, 2011a). Ehrman (1996) and Sade (2011) also argue that self-efficacy leads to
motivation, and, ultimately, to autonomous learning. Those who investigate the complexity
of SLA systems based on Larsen-Freeman’s (1997) theory also emphasize that motivation,
identity and autonomy are interconnected.
A three-year study by Murray (2011) investigated the experiences of students enrolled
in a self-directed language course. According to the findings, this course provided students
with autonomy, as it needed “willingness, freedom, energy, and time to move around, try new
identities and explore new relations” (p. 84). It also supported students’ exploration of
learning opportunities by offering instruction in strategies and activities, and by encouraging
them to experiment. From the viewpoint of motivation, the most important thing was that the
self-directed learning processes facilitated the learners’ L2 selves, which is discussed as a new
motivational construct in recent research (see Adolphs et al., 2018; Dörnyei & Ushioda,
2009).
Based on these trends and findings in the literature we can conclude that classroom
practices that promote language learner autonomy are likely to contribute to learners’
identities and motivation by enabling them to speak as themselves in the target language.
Those learners who employ their capacity for autonomy by making conscious decisions and
choices regarding their learning enhance their motivation as well.
not forget that “it is largely the teachers’ responsibility to motivate students” (Csizér &
Kormos, 2009). It is their task to try to generate positive student attitudes toward L2 learning.
Dörnyei (2001) defined five groups of facets which help teachers to achieve this goal:
All these will result in what researchers call a student-centered classroom environment, which
can be motivating, and may direct students to become autonomous in their in-class and out-
of-class language activities, as well.
So far only learner autonomy has been discussed; in the next section the focus is on teachers
and teacher autonomy.
The concept of teacher autonomy is of fairly recent interest but it has been around for as long
as learner autonomy has been studied (Ramos, 2006; Smith & Erdoğan, 2008). Although the
early developments in the field began in self-access learning, this quickly shifted to research
and practice in classroom contexts, introducing a new focus on the teacher and the construct
of teacher autonomy (Hoyle & John, 1995; T. Lamb, 2017).
It is important to state that learner autonomy, especially when the learning process takes
place in the classroom, is bound to the teachers’ own learning, their teaching experiences and
also their beliefs about autonomy. The development of learner autonomy depends on the
development of teacher autonomy, according to Little (2000). To be able to arrive at this
finding, he examined the learning process as a “dialogue” between the learner and the teacher
in formal (classroom) contexts, emphasizing the importance of the shift in the role of the
teacher (Little, 1995). In such contexts, teachers and learners become co-producers of
language lessons, where the teacher’s task is to bring learners to the point where they start to
exercise equal responsibility for the choices to be made. Learner autonomy and teacher
autonomy become interdependent: the promotion of learner autonomy depends on the
promotion of teacher autonomy. Finch (2001) also claimed that teachers can develop learner
autonomy only in the case they themselves are autonomous and act as facilitators of learner
autonomy.
Being a “product” of the 1990s, the idea of teacher autonomy has had a relatively short
history (T. Lamb, 2008). However, it remained a problematic, even opaque concept for a long
time, mainly because it is difficult to define independently of learner autonomy and the
classroom context, and also because it is hard to examine from an empirical perspective
(Benson, 2006; Usma Wilches, 2007).
Table 24
Definitions of teacher autonomy
Author (year, page) – Definition – Keywords
Raya, Lamb and Vieira (2007, p. 1) – “the competence to develop as a self-determined, socially responsible and critically aware participant in (and beyond) educational environments” – competence; self-determined; responsible; critically aware
Smith and Erdoğan (2008, p. 97) – “the ability to develop appropriate skills, knowledge and attitudes for oneself as a teacher, in cooperation with others” – ability to develop; cooperation
As research findings show, teacher autonomy has several dimensions (Table 24) and is
influenced by ideas relating to learner autonomy. It can be described as: (1) a capacity of self-
directed professional action or development; (2) freedom from control by others with a
general aim to promote learner autonomy (Smith & Erdoğan, 2008). The first one refers to the
extent to which teachers have the capacity to improve their own teaching through their own
efforts, whereas the second one concerns the freedom to be able to teach in the way one wants
to teach as the manifestation of autonomy from another angle. In another conceptualization,
the autonomous teacher is “willing to pass control over the learning process to those engaged
in, so that learning becomes a collaborative effort, rather than the imposition of knowledge
from above” (Lawson, 2004, p. 3). Other authors, when debating teacher autonomy, speak
about work autonomy (maintaining control over activities and theoretical knowledge);
professional autonomy (grounded in ideals we reject or claim as our own); engaged autonomy
(autonomy does not equal isolation); responsible autonomy (facilitating workplace
independence in accordance with state requirements); regulated autonomy (teachers’
autonomy exists in a vacuum of limited scope); occupational autonomy (one’s own
determination with the destination set in stone) (Parker, 2015, pp. 92-93); all referring to the
degree of autonomy exercised by or granted to the teacher.
Speaking about multiple voices, Sinclair (2008, p. 245) distinguished the aspects of
teacher autonomy from a slightly different angle, describing them as (1) control of teaching
and the teaching context, with the teacher appearing as a manager, and (2) control of one’s own
professional development, with the teacher as a reflective practitioner. La Ganza (2008, p. 71)
coined an interrelational construct with four dimensions: (1) autonomy in relation to the
teacher’s own internal dialectics with teachers, mentors, or significant others; (2) autonomy in
relation to learners; (3) autonomy in relation to those who could make decisions influencing
the teacher’s freedom in the institution where he or she is teaching; (4) autonomy in relation
to those who could make decisions influencing the teacher’s freedom in the institutions and
the society at large. This model suggests that the teacher’s perceptions of autonomy are
affected by interrelational dynamics.
The broadening interest in teacher autonomy involved new areas of practice,
especially in pre-service teacher education and in in-service teacher development (Benson,
2011). What really matters in this respect is the teacher’s ability to help learners make
decisions about their learning without making those decisions for them. There is a clear link
between the two types of autonomy (La Ganza, 2008; T. Lamb, 2008; Reinders & Lewis,
2008), which prompted Raya, Lamb & Vieira (2007) to come up with a common definition
for the two concepts (see Table 24), emphasizing the self-determined, socially responsible and
critically aware nature of both participants: learners and teachers.
Teacher autonomy is important for more than one reason. It plays a crucial role in the
development of teacher professionalism, it has a motivating effect, and, according to research,
it results in improving standards (Parker, 2015). MacBeath (2012, p. 91) wrote about creating
“opportunities for children to learn for themselves, in contexts other than school” and “freeing
teachers to teach in new ways and to learn together with their students” in and beyond their
own classrooms. They must be given professional autonomy, the chance to “take
responsibility for the knowledge they organize, produce, mediate and translate into practice”
(MacBeath, 2012, p. 104). On the one hand, this is important for their development as
professionals; on the other hand, it can mark the starting point for solving current school
problems.
In addition to learner and teacher autonomy, a third expression has appeared in autonomy research:
classroom autonomy, which refers to a place where the “seeds of autonomous learning
already exist” (Finch, 2001, p. 8) and presumes a degree of autonomy of both participants.
Classroom autonomy has been the prime focus of research in language education for many
years (T. Lamb & Murray, 2018), where the teacher’s main role is to encourage their
students’ autonomous classroom behavior. Nunan (1997) also regarded the language
classroom as the place for encouraging learners to move towards autonomy, which is a slow
process, as learners in formal contexts do not easily accept responsibility for what they do. It
is the teacher’s task to provide them with appropriate tools and opportunities to practice
(Little, 1995). The role of teachers, as Dickinson (1992, p. 2) pointed out, is enormous
concerning the ways they can promote greater learner independence. The most important ones
are: (1) encouraging learners to become more independent; (2) convincing learners that they
are capable of greater independence in learning; (3) giving learners opportunities to practice
their independence, (4) helping learners to develop learning techniques that equip them with
more independence; (5) helping learners to become more aware of language as a system; (6)
sharing with learners what teachers know so that they have a greater awareness of what to
expect from the language learning process.
The keyword of classroom autonomy concerns the changing role of the participants,
which, in the case of learners, manifests itself in acquiring the ability to take charge of one’s
own learning (Finch, 2001; Holec, 1981). The success of empowering learners to become
actively involved in their learning largely depends on the teachers’ ability to redefine roles, to
become aware of their task, which has two functions in this new context: (1) management
function and (2) instructional function (Finch, 2001, p. 13). In this view, the teacher becomes
a skilled manager of human beings (the learners); a helper, who is warm and loving; accepts
and cares about the learner and about his problems, described by Dickinson as a person “who
is willing to spend time helping, who is approving, supportive, encouraging and friendly, and
regards the learner as equal. As a result, the learner feels free to approach him and can talk
freely and easily with him in a warm and relaxed atmosphere” (Dickinson, 1987, p. 122).
The attempt to bring the real world into the classroom is the common feature of those new
pedagogical initiatives in which teachers function as facilitators, guiding learners in the
completion of real-world tasks (Baer & Koby, 2003, viii). Influenced by these trends in
foreign language instruction, researchers called for a more process-oriented, learner-centered
approach in translator training (Gile, 2009; Kiraly, 1995).
Throughout the 1990s, the greatest problem was what Kiraly (1995, p. 5) called a
“pedagogical gap” in translation skill instruction, reflected in a “lack of clear objectives,
curricular materials and teaching methods”, creating a classroom environment which did not
promote autonomous learning. This situation could be the result of what was discussed in a
previous chapter: the majority of translator teachers did not have any pedagogical training;
they often came from a range of different sectors in the world of industry (Gile, 2009;
Gouadec, 2007) and they had to pick up pedagogical training while doing the job
(Baer & Koby, 2003, viii). It had not been recognized until the 2010s that formal training was
the most effective way to teach skills and test abilities to train reliable professionals (Gile,
2009; 2010; Koskinen, 2010). The most intriguing question was how the existing programs
could help students learn to translate. The answer to this question has been provided by the last
25 years, during which there was a shift in language education and the instructors in translation study
programs were offered a variety of new teaching methodologies. This period saw the birth of
the learner-centered, autonomous learning environment, in which teachers, giving up the “old
school” methodology, started to act as facilitators (Baer & Koby, 2003).
As the definitions listed by the experts of the field show, we cannot speak about
autonomy without the learners’ active participation in the day-to-day process of their learning,
in which the most important element is their ability and willingness to make choices
independently (Benson, 2007; 2008b; 2011; Littlewood, 1996). What translation involves is
making choices and decisions all the time. Klaudy (1997, p. 21), examining the character of
the activity, emphasized the immense scale of choices translators face.
The result of this activity – the corpus (text) created in the target language – is the result
of numberless choices and decisions... When comparing the different translations of the
same text, we always find identical and different solutions, suggesting that the subjective
decisions of the translator have an objective base.
Baker (2006), emphasizing that translation is a highly complex activity, also discussed a
myriad of different potential translation problems students of translation may be confronted
with. In this sense, text production can be seen as a problem-solving activity. Baker (2011)
and Kenny (2009) discussed translation problems arising from the lack of equivalence at
different levels (word level, above word level). Baker (2011) also examined grammatical
equivalence, textual equivalence, pragmatic equivalence, and issues beyond equivalence:
ethics and morality, which all represent difficulties, or at least challenges a translator has to
face. Venuti, similarly to Klaudy (1997), wrote about personal preferences and the choices a
translator has to make, which can result in words even the translator “hates” or ones “that
sound wrong” to him (Venuti, 2013, p. 33).
Wilss (1998, p. 57) argued that decision-making processes, which are particularly
complex in translation, are “inextricably connected with problem-solving activities. To solve
a problem, any human being must possess declarative (knowing what) and procedural
knowledge (knowing how)”. This kind of knowledge can be acquired best in an autonomous
learning environment, where the role of the teacher is to encourage learners to learn how to
apply strategies which help them arrive at the best solution in every individual case (Baer &
Koby, 2003).
Ushioda (1996) and Heitzmann (2014) argued that autonomous language
learners are by definition motivated learners, and the motivating role of the autonomous
teacher is also indisputable (Dörnyei, 2001; Gardner, 2001; Nikolov, 1999; 2000). We cannot
neglect the importance of the classroom atmosphere, either, as it definitely helps the learners
achieve their goals (Dörnyei, 1994; 2007; Nikolov, 2000). A successful learning environment
is “student centered and promotes learner autonomy… supports collaborative learning and
meets the affective needs of students” (Christison & Murray, 2014, p. 42). The teaching
process itself is always based on a curriculum, which, similarly to the teaching environment,
should focus on students’ needs. According to Christison and Murray (2014, p. 189), the
negotiated curriculum is “the most all-embracing manifestation of a learner-centered
curriculum”, letting the teacher negotiate with students on materials, content, methodology,
even evaluation. However, it expects the teacher to have curriculum design skills and
negotiating skills and it expects learners to be aware of their own learning needs and desires.
Promoting autonomy, however, is not simply a matter of teaching strategies; it can take
place both inside and outside the classroom (Szőcs, 2016). Translation itself is an
activity most often confined to solitary rooms, often without the physical presence of the
teacher. Once the task is given and the translator faces the first problem, a sensible choice has
to be made, followed by a decision, often with the help of a tool (dictionary, encyclopaedia,
corpus, etc.). Without the necessary skills and strategies (learned in class or
autonomously), the translator could not make these choices and decisions free of institutional
constraints (Benson, 2007).
As has been discussed, learner autonomy depends on teacher autonomy (La Ganza,
2008; Raya et al., 2007); therefore, the role of the teacher is crucial in this respect. Dörnyei
lists the “ingredients” of an autonomy-supporting teaching practice (Dörnyei, 2001a; Dörnyei,
2001b; Dörnyei & Csizér, 1998), including increased learner involvement in organizing the
learning process, based on the curriculum.
4.6 Study 2: Learner autonomy in Translation Studies BA classes
Learner autonomy involves learners taking control over their own learning (Holec, 1981).
Translation is definitely an autonomous activity, as translators, when working, have to make
their own choices and solve their translation problems alone (Baker, 2011; Wilss, 1998). This is a reality that translation students also have to learn about and adjust to. The fourth part
of the student questionnaire (see Appendix B) investigated how the BA specialization
program supported the autonomous learning environment and how autonomous the students
were.
In order to examine the autonomous behavior of the target group, I focused on four research
questions:
1) How autonomous are BA students specializing in translation?
2) How does the BA specialization program support learner autonomy?
3) How do syllabi integrate and support autonomy and motivation in TS classes?
4) How does teacher and learner autonomy affect student motivation?
4.6.2 Participants
The participants of this part of the research project were the same 23 second- and third-year
BA students who filled in the other four parts of the student questionnaire discussed in section
YY. They all majored in English and studied in the translation specialization program. As their answers in the questionnaire revealed, nearly 50% of the respondents, eleven of the 23, wanted to become professional translators. This means that they would work in a field where autonomy is both essential and motivational.
To find the answers to my research questions, I included ten questions on autonomy in the student questionnaire: six were open-ended, and the participants had to give reasons for their answers. Three of the other four questions used sets of Likert-scale items, and one was a ranking task. As with the other parts of the questionnaire, these were completed in class. In order to highlight the relationships between what students said in their responses and what the courses involved, I also examined what references to autonomy the course syllabi included.
4.6.3 Results and discussion of questionnaire data
To find out how the participants preferred working in their translation studies classes, they
were offered four choices: (1) on my own, (2) in pairs, (3) in groups, (4) directed by the
teacher (Figure 23). In addition to making their choice, they were asked to explain their
answers. Ten respondents said that they liked and needed their teachers’ guidance, mainly
because of the immediate help and feedback they could get this way:
It is easier this way. (2)
He/she can correct me and give feedback immediately. (2)
They can give me professional opinion about my translation mistakes. (1)
They help me identify particular problems. (1)
They know more about translating than I do. (1)
Students felt that, by pointing out their mistakes and drawing their attention to particular problems, the teachers helped them cope with their translation tasks better.
They can guide me when I am lost. (1)
We can learn the most from them. (1)
They encourage us to share our thoughts. (1)
Figure 23
Students’ preferences of work in translation classes (N=23): directed by the teacher (10), on my own (9), in groups (3), in pairs (1)
The answers indicate that these eleven students were not ready to make autonomous decisions concerning translation tasks.
Nine students liked doing translation tasks on their own for a range of different
reasons:
It is a one-man job / I cannot work with others. (2)
I cannot concentrate/work with others around me. (2)
I can make my own decisions. (1)
I am not a team-player. (1)
I can allocate as much time as I need. (1)
I am like that. (1)
It is easier to translate on my own. (1)
Although only one student explicitly claimed to be able to make his own decisions, eight other students preferred working alone, a finding which suggests that they were able to solve problems and make choices concerning methods and tools on their own, without teacher guidance.
Three participants liked working in a group because they could get peer feedback on the spot, learn from each other’s errors and share the task, thus making their work more effective. Only one participant preferred pair work, because “two people guaranteed two times more good ideas.”
To reveal patterns in their problem-solving strategies, the participants were offered
eight different options to choose from (Table 25). They were asked to give their answers on a
5-point Likert scale (0 = never; 4 = generally).
Table 25
Translation students’ problem-solving strategies (N=23)
Problem-solving strategy | Distribution of scores (0, 1, 2, 3, 4) | Mean
According to the data in Table 25, the most frequently applied strategy to solve a translation problem was using online tools, including Google searches: 18 of the 23 respondents circled 4, which marked the highest frequency. This is fully in accordance with what the students had said in the interviews discussed in section 3.1: “Google is our best friend.”
What respondents claimed about consulting a corpus contradicts what their fellow
students had said in the interviews: they had heard about corpora, but they hardly ever used
any. However, there is another possibility: those students who claimed they used corpora are
the exceptions rather than the rule.
The most frequently used strategies – using online tools (3.78) and dictionaries (2.43) – assume a certain degree of autonomy. In contrast, asking a teacher (1.65) or a fellow student (1.48) means seeking help through social strategies, which indicates a lack of autonomous behavior.
The next question concerned the translation tools students used when they did home assignments (Table 26). To mark the frequency of using a given tool, they were asked to rate the options on a Likert scale from 0 (never) to 4 (generally). The mean scores show that online tools and Google were used most often. The fact that the 23 participating students gave only eleven examples of the tools they allegedly used in homework tasks suggests that their use is occasional at best (eight students gave no examples at all; five gave only one, Linguee).
Table 26
Translation tools students use when they do home assignments (N=23)
As a next step, students were asked about the usefulness of typical classroom activities and
home assignments. They could give their answers by circling the options that fitted them most
(0 – not useful at all; 4 – the most useful). Table 27 shows the distribution of their answers.
Overall, shorter assignments were found to be much more useful than longer ones. The latter, as typical home assignments, require not only more time, but also a higher degree of autonomy, including decision making concerning every aspect of the work.
Table 27
Usefulness of classroom and home assignments (N=23)
Activity / assignment | Distribution of answers (0, 1, 2, 3, 4) | Mean
1. A longer home assignment | 2, 7, 6, 4, 4 | 2.04
Respondents who found the shorter, in-class tasks the most useful gave the following reasons:
students could immediately see their strengths and weaknesses; (2)
they could discuss their choices and correct their mistakes in class, guided by the
teacher; (2)
they could get frequent feedback; (1)
it was fast to do. (1)
Home assignments involving longer texts, typically to be submitted at the end of the term, were deemed the least useful tasks, because they were time-consuming (9), there was no opportunity to discuss the choices in class (2), and although the students’ mistakes were marked, they typically did not get detailed feedback on their work (2). This task was marked very useful by only four students, mainly because they “could focus better when working alone”.
The five participants who opted for analyzing peer translations as the most useful
activity said that this task offered them great opportunities to see not only their own, but also
others’ mistakes, and they could learn from them. Being engaged in this task they could see
how others managed to solve a problem they were not able to cope with.
Comparing translation strategies and techniques was found to be most useful by five
respondents, who said seeing and discussing different ways of problem solving helped them a
lot with their assignments. When explaining why they marked analyzing a translation with a professional translator as “most useful”, the general idea was that, by doing this, students “could learn from the best”.
When asked how they used the translation skills they had learnt outside the classroom, the participants’ typical answers involved:
translating short texts for family and friends (6);
translating for pleasure (3);
helping siblings or friends with their homework (3);
checking the translator’s work while reading a book in English (1).
Four of the respondents did not give any answer, presumably because they did not practice their skills outside the classroom. Three wrote something, but it did not answer the question.
Summarizing students’ choices, we can state that the majority preferred working with other people, including teachers, fellow students and, if possible, professionals. Outside the classroom, most of them did not use the skills they had learnt in class; those who did typically helped siblings with their homework or translated short texts for friends or family. These answers imply that a safe working environment was more important for them than autonomy.
Students preferred preparing for exams on their own: 22 of the 23 respondents chose
this option as fitting them the most. None of them claimed to rely on a tutor, and only one
participant liked learning for tests together with fellow students.
The respondents seemed to benefit from the evaluation they received for their exam tasks in a variety of ways (Figure 24). In eleven cases, this final feedback pushed them towards autonomous decisions: it helped them identify their own strengths and weaknesses and motivated them to work on them. Six students used the assessment they got to improve their skills. Two others claimed they learned to correct their mistakes and, seeing their results, could track their personal achievements. Two students said the evaluation helped them see their progress, whereas two other respondents claimed that they did not benefit from it, because the evaluation was typically expressed in grades only, without any comment or explanation; according to one of them, he did not even check the mistakes he had made in the test.
Figure 24
Perceived benefits of assessment of exam tests (N=23): helps to identify strengths / weaknesses (11), improves their skills (6), they can correct their mistakes (2), they can see their progress (2), they do not benefit from it (2)
The aim of the next question was to find out what helped students become more autonomous
in their translation studies. Autonomy cannot be discussed without the learners’ active
participation in the day-to-day process of their learning (Benson, 2007; 2008; 2011;
Littlewood, 1996), and the fact that they could define what would help them indicates they are ready to take steps to become more autonomous in what they do. The question was answered
by 19 respondents; five of them claimed that doing more translation tasks would definitely
help their autonomous behavior. The other answers included being more experienced (2);
guided correction of their mistakes (2); learning (2); more reading (2); more home
assignments (1); more background knowledge (1); richer vocabulary (1); working together
with professional translators (1); knowing what degree of “free translation” was acceptable
(1); doing an MA program in translation (1); and one respondent did not know. The diversity of the answers implies that the respondents were aware of the complex nature of autonomous behavior, and the fact that they could identify their needs already assumes a degree of autonomy. The teacher plays a key role in creating a learner-centered environment and is “ready to meet the affective needs of students” (Christison & Murray, 2014, p. 142).
Previous research on English majors also found that learners’ autonomous beliefs did
not always result in autonomous behavior (Édes, 2008; Szőcs, 2016). Assuming that there would be mismatches between what students thought and the extent to which they manifested various aspects of autonomous behavior, nine closed items on a four-point Likert scale asked them how responsible they thought they should be for doing translation activities on their own (1: not responsible, 2: a little responsible, 3: responsible to some extent, 4: mainly responsible) and how often they actually acted autonomously (1: never, 2: sometimes, 3: often, 4: in general).
Respondents felt the most responsible for identifying their own strengths and
weaknesses (3.73) and the least responsible for evaluating their own learning process (2.43)
and offering opinions about what to learn in the classroom (2.56) (Table 28). As data show,
when asked to what extent they practiced autonomy concerning the related statements,
students answered that they were the most autonomous in becoming more self-directed in doing translations (3.26), which contrasts with what they believed about their responsibility (3.08): they claimed to be more autonomous in this respect than they thought they should be.
Identifying their own strengths and weaknesses, which topped the list of their perceived
responsibilities (3.73), came only third when they defined the extent to which they claimed to
practice this (3.04). The table also shows considerable differences between the means of the
examined behaviors (how responsible they thought they were and to what extent they really
were responsible; 3.19 and 2.58).
Table 28
Students’ beliefs and claims about their responsibilities concerning TS classes (N=23)
Deciding what to learn outside the classroom – perceived responsibility: mean 3.22 (SD = .795); claimed behavior: mean 2.96 (SD = .878)
Comparing the means of the extent to which students believed it was their responsibility to act
autonomously (3.13) and the extent to which they claimed to act autonomously (2.58)
revealed a mismatch in favor of beliefs. The same can be said about the spread of the answers:
the SD values for the same statements show considerable differences in most cases. This
result implies that students’ autonomous behavior lagged behind their perceived
responsibilities.
As research has shown, autonomy in the classroom presumes a degree of autonomy of both
parties involved: the teachers and the learners (L. Dickinson, 1992; Finch, 2001; Holec, 1981,
2008; T. Lamb & Murray, 2018; MacBeath, 2012; Nunan, 1997; Parker, 2015). If teachers are
not autonomous, they will not be able to create autonomous learning environments and, more
importantly, will not be able to foster learner autonomy. To explore this area in my research, I
studied twelve course syllabi designed and applied by four teachers during the academic years between 2016 and 2018 (Table 29). The syllabi appear here in the sequence in which students are expected to take the courses.
Table 29
Course syllabi for Translation Studies, 2016–2018
Name of the course | Type | Term | Credit | Number of tutors
The structure of the syllabi was very similar, although they contained differences, too. Each
had a heading with the name of the course, the name of the tutor with contact information and
the semester, followed by the main parts:
Course requirement
Aim / Goal of the course
Course plan / Course calendar
Assessment / Grading policy
Readings / Literature
Recommended readings / Suggested sources / Additional literature
(Learning outcomes)
Syllabi 1 and 3 were written by the same tutor (T1), so they practically contained the same
elements with different contents. After declaring the aim of the course they followed the same
steps in the “Course plan” section, giving the dates and topics of the weekly lectures. While
syllabus 1 focused on theory and concepts that make up the dynamic interdisciplinary field of
translation studies, syllabus 3 offered an overview of different aspects of style, with a reading task on a different style for each class, suggesting that the lectures contained practical elements.
The students in this class also had home assignments (translation, analyses), had to complete
reading quizzes, do research on selected topics and were expected to document everything in
a folder to be handed in for evaluation at the end of the course. The course closed with a final
test based on selected readings and a glossary. The syllabus did not say anything about assessment, so the students did not know what to expect in this respect (unless they were told by the tutor). Syllabus 1 offered a similar overview of classes; however, it did not expect the students to do
practical tasks. It also offered a detailed assessment policy, where final grades were calculated
in the written examination as follows: 0-49% = 1; 50-59% = 2; 60-75% = 3; 76-89% = 4; 90-
100% = 5. Syllabus 2, written by another tutor (T2), was not as detailed as syllabi 1 and 3; it
just contained a list of topics and another list of suggested readings. Concerning assessment
and grading, it contained only one sentence: “The course concludes with a written test.”
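The percentage-to-grade conversion stated in syllabus 1 can be written out as a simple lookup. The following minimal Python sketch is purely illustrative; the function name and the example value are my own and are not part of the syllabus:

    def grade_from_percentage(score):
        # Grade bands for the written examination as given in syllabus 1.
        if score <= 49:
            return 1   # fail
        elif score <= 59:
            return 2
        elif score <= 75:
            return 3
        elif score <= 89:
            return 4
        return 5       # 90-100%

    # Example: a paper scored at 78% falls into the 76-89% band, so the grade is 4.
    print(grade_from_percentage(78))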
Syllabi 4, 5, 6, 7, 8, 9, 10, 11 and 12 were designed to offer the students ample practice to improve their translation skills by translating and proofreading advanced texts taken from major English and American daily and weekly papers and magazines. Syllabi 4, 5, 6, 7 and 8 were written by the same tutor (T3), so they followed the same structure again. The “Course calendar” part of these seminar syllabi defined the length of the texts students had to translate every week, which was roughly 2000 characters in the En-Hu and Hu-En translation skills seminars, as well as in the legal and political texts seminar. The course ended with a final translation assignment of about 2000 characters. The requirements and grading, expressed in percentages, were also uniform in these five syllabi:
- the weekly translations accounted for 25%;
- participation in classroom discussions accounted for 25%;
- the end-of-term translation assignment weighed 50%.
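Under this uniform weighting, a course score can be thought of as a simple weighted sum. The sketch below only illustrates that arithmetic; the component names and example values are hypothetical and do not come from the syllabi:

    def final_course_score(weekly_avg, participation, end_of_term):
        # 25% weekly translations, 25% classroom participation, 50% end-of-term
        # assignment, as described in the five T3 seminar syllabi; inputs are
        # percentages (0-100).
        return 0.25 * weekly_avg + 0.25 * participation + 0.50 * end_of_term

    # Example: 85% for weekly work, 80% for participation and 70% on the final
    # translation give a weighted score of 76.25%.
    print(final_course_score(85, 80, 70))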
The syllabus for “Translating economic and financial texts” (10) was designed by T1. The weekly translation assignment in this case was 200-300. As these were home assignments, the students could choose the translation tools themselves; however, they could not negotiate the text or the topic. The students were also required to compile a glossary of financial and economic terms. Compared to the two syllabi on theoretical classes (1, 3) by T1, this one was less detailed and did not say anything about assessment and grading.
The syllabi on texts in the social sciences, IT and literary texts (syllabi 9, 11, 12) were designed by a fourth tutor (T4), who also taught the classes, so they followed a uniform pattern again, but they contained tasks which were not part of any other syllabus. In each class the students had weekly assignments and were required to present their version of the weekly translation task in groups, focusing on problematic issues revealed by the group, and they also had to proofread each other’s translations. These syllabi did not contain a section on weekly topics or readings; they promised to send the materials via Neptun (the electronic administration system at Hungarian universities), and the students were required to send back their assignments in the same way. The assessment and grading policy contained a detailed description of the format requirements and strict instructions concerning deadlines. The grading was different in the three classes. Syllabus 9, on texts in the social sciences, stated that “Marking and correction of the translations will be based on the requirements and rules of correction of the complex translation exam.” In the case of IT translations (11), the syllabus said that two assignments had to be handed in in hard copy (these tasks were designated during the semester). The third assignment was the correction of one of these translations. The grade was based on these three assignments (75 per cent) and the presentation (25 per cent). Syllabus 12, on literary translations, offered yet another grading policy, in which the grade was based on the translation tasks, an individually assigned final translation and a presentation of a translation task. Table 30 shows how similar and how different the syllabi designed by the four tutors for the 12 courses were.
Beyond meeting the basic formal requirements, all syllabi gave the students the opportunity to make their voices heard concerning the translations they produced: in the classroom, they either took turns projecting their translations each week, or the teachers chose a translation to discuss. The students were invited to comment on the given translation, compare their own choices to what was discussed, and suggest alternative solutions and improvements. In literary translation classes the routine included interpreting the text, analyzing the problems the translators and proofreaders had faced during their work, discussing terminology, the concept and context of the text, style, intention, etc., and reworking the translation. This practice helped develop the students’ critical thinking and fostered their autonomous behavior.
Table 30
Similarities and differences between the examined syllabi
Course number | Tutor | Features: Heading, Aim, Course plan, Course calendar, Class activity, Assessment/grading, Readings, Learning outcomes
1 T1 √ √ √ √ √ √
2 T2 √ √ √ √
3 T1 √ √ √ √ √
4 T3 √ √ √ √
5 T3 √ √ √ √ √
6 T3 √ √ √ √ √
7 T3 √ √ √ √ √
8 T3 √ √ √ √ √
9 T4 √ √ √ √ √
10 T1 √ √ √ √
11 T4 √ √ √ √ √
12 T4 √ √ √ √ √
By doing weekly translations, the students were offered enough practice to learn to work on their own and to make autonomous decisions concerning the amount of time they wanted to devote to their tasks, the tools and background materials they wanted to use, and the choices they had to make when translation problems arose. The practice described in the syllabi, together with the in-class discussion of the end products of the students’ efforts, theoretically offered what the literature calls a motivating, student-centered learning environment, which fosters student and translator autonomy (Benson, 2006; Little, Ridley, & Ushioda, 2003; Nunan, 1997).
It is also obvious that the syllabi offered a fair evaluation of student work; with the exception of syllabi 3 and 7, the students knew in advance that the weekly assignments and classroom participation would also be assessed, which, if they took them seriously, could represent a significant motivating force to show their best competence in the end-of-term translation assignment.
The last part in each practice-oriented syllabus included a list of suggested sources to
use. The word suggested implies that the students could make their own choices concerning
sources and background material, thus fostering autonomy.
The course syllabi on texts in the social sciences (9), IT (11) and literary translations (12), in parallel with individual work, offered an option to work in groups; the students could decide in which form they preferred to prepare their assignments. In the case of teamwork, one team member translated the text and sent it to the other members for proofreading; then, during the class, the students interpreted the source text and analyzed the problems the translators and proofreaders had faced during their work. This method promoted not only autonomy but also cooperation by putting emphasis on important elements (time management, keeping deadlines, using different channels of communication, etc.). This was also a good example of teacher autonomy: while the teachers worked along mutually agreed guidelines, they could adopt new forms or activities if they wanted to; breaking the typical pattern could result in more motivating activities.
The syllabi for the theoretical courses (1, 2, 3), where the students acquired new knowledge in the form of lectures, were designed along stricter guidelines. Here the instructor strictly defined what students had to read for each class, but also offered a shorter list of optional readings.
Only syllabus 1 (Introduction to translation studies) defined the learning outcomes, promising that on completion of the course students would be able to
- discuss essential themes in translation theory;
- apply basic translation theory to explain aspects of translation practice; and
- critically observe the relationship between translation and culture.
This is an important feature because, knowing what they are expected to achieve, students can create the ideal L2 selves they want to become by the end of the course. While learning new things, they are given the opportunity to adapt to them, which, according to research, enhances their L2 motivation (Adolphs et al., 2018; Dörnyei, 2020; Dörnyei & Ushioda, 2009).
The negative side of the syllabi written by the same tutors (except T4) is the complete uniformity of their design. Although they meet the basic expectation that a syllabus is a statement of content used as the basis for planning courses, and that the basic task of the syllabus designer is to select and grade this content (Nunan, 1988), knowing in advance that students will follow the same routine in every class of a given tutor can result in a boring, monotonous system, even if the texts are different. It is clear that designing a syllabus is easier this way, but it is also dangerous, as it may suggest that the teacher is not motivated enough to offer new ideas and new methods; demotivated teachers do not set good examples and do not promote a motivating classroom environment (Little et al., 2003).
4.7 Summary
As has been discussed in previous chapters, the BA students at the University of Pécs can choose to take up the translation specialization program in the second year of their studies, after passing the proficiency exam. This means that up to this point they are preoccupied with studying a range of courses and developing their language skills (reading, listening, writing, grammar and vocabulary) with the help and guidance of their teachers.
The respondents to the questionnaire were at the very beginning of studying their
chosen special area – translation – which, if one wants to do it successfully, demands a great
deal of autonomy, starting with identifying one’s strengths and weaknesses, goal setting,
choice making concerning tools, words, ways of expression etc. At the beginning of their
studies, they were quite dependent on their teachers’ guidance. However, their answers
suggest that they were on the path of becoming autonomous both in their beliefs and in their
practices. Although their responses revealed a mismatch between their beliefs and their
autonomous behavior, a comparison of their answers (4.6.3, Table 28) to Nunan’s five-level
model of autonomy (Nunan, 1997; see 4.2.2, Table 23) indicates that many of them have
reached the first two levels (awareness and involvement): eight of the 23 respondents felt able to identify their strengths and weaknesses, face difficulties rather than wait for solutions from teachers, and set their own learning goals. Seven students said they could decide what to learn out of class. There are also examples of students who are already close to the third (intervention) and fourth (creation) levels. Three of the 23 participants felt they could evaluate their own learning process, and three that they were able to stimulate their own interest in TS. These rare examples are not only able to choose their goals from a list of alternatives, but they are able to set their own definite goals and work independently in order to achieve them. However, as their number shows, these students are in the minority; most of them, 15 out of 23, are still teacher-dependent and have only just started to learn how to become autonomous.
The syllabi suggest that the respondents learn in student-centered learning environments, where they are offered freedom of choice and, in the form of class discussions, have ample opportunities to make their voices heard. How autonomous they will become by the end of the course, and which level of Nunan’s model they will reach… “It depends on the students themselves” (M. Lamb, 2004, p. 1).
Part III
Chapter 5
Assessment in Translation Studies BA classes
5.1 Introduction
When designing an instrument, one of the crucial steps is the definition of the construct, which has to state clearly what a test designer intends to include in a given skill or ability, what the construct is called, and what knowledge and behavior are operationalized, that is, what the elements that make up the given construct are, and how they can be captured and measured by a rubric (Fulcher, 2014).
Based on Angelelli’s (2009, p. 31) definition of a measurable construct, translation
competence comprises:
(1) linguistic competence in its narrowest sense, including grammatical competence,
control of vocabulary, morphology and syntax;
(2) textual competence, the ability to string ideas together, including cohesive
competence, the ability to use linguistic devices to connect ideas, and the ability to
organize the text appropriately;
(3) pragmatic competence, which can be divided into illocutionary competence which
is used to perform functions (e. g., addressing, apologizing, complaining), and
sociolinguistic competence (knowledge of linguistic variations, and cultural
references, figures of speech and registers);
(4) strategic competence, including the ways a translator approaches a translation task
and instrumental-professional competence.
All these reflect what was discussed in the previous chapters: translation is a multi-dimensional and complex phenomenon, which may explain why there have been few attempts to validly and reliably measure translation competence (Angelelli, 2009; Eyckmans, Anckaert, & Segers, 2009; Williams, 2009). It involves discourse and grammatical competence in two languages, the source language and the target language, as well as knowledge, a variety of skills, both analytical and strategic, and attitudes; therefore, developing reliable tests raises a whole series of questions (Angelelli, 2009; Cohen, 1994). What aspects of
translation competence should be assessed? What is the purpose of assessment? How will the
assessment instruments be developed and validated? Should the test be a norm-referenced or a
criterion-referenced one? How will the candidates’ translations be scored? How will the
results be used? How will the results impact test takers and score users?
Some of the most important interacting factors a researcher needs to consider when
looking at a translation are listed by House (2015, p. 2) as follows:
the structural characteristics, the expressive potential and the constraints of the two
languages involved;
the extra-linguistic world which is interpreted in different ways by the speakers of
source language (SL) and target language (TL);
the source text with its linguistic, stylistic and aesthetic features belonging to the
lingua-cultural community;
the linguistic, stylistic and aesthetic features of the TL community;
the TL norms internalized by the translator;
intertextuality of the text in TL culture;
traditions, principles, ideologies and history of TL community,
the translational “brief” given to the translator by the person or institution
commissioning the translation;
the translator’s workplace conditions;
the translators’ knowledge, expertise, attitudes;
the translation receptors’ knowledge, expertise and attitudes.
Although translation quality, which is hugely affected by the factors listed above, has long been the focus not only of academic but also of industrial attention, and the relevance of translation quality assessment (TQA) is stronger than ever, there are still no “generally accepted objective criteria for evaluating the quality of translations” (Williams, 2009, p. 3). Most researchers, even if they take different approaches to assessment, agree that reliability and validity are essential when it comes to testing quality (Eyckmans, Anckaert, & Segers, 2009; Eyckmans & Anckaert, 2017).
Reliability, in practical terms, means the consistency or reproducibility of test scores
(Bachman, 1990; Bachman & Palmer, 1996). However, it is not just about test scores but also
about different factors which impact reliability: (1) variation in test administration settings,
(2) variations in test rubrics, (3) variations in test input, (4) variation in expected response and
(5) variation in the relationship between input and response types (Bachman & Palmer, 1996).
From the viewpoint of the present paper, one of the most important areas that may affect
reliability is the manner in which the test-takers’ responses are scored. Ideally, assessors give
their scores based on fixed and objective criteria; each instance of scoring by a grader should
be similar to other instances of scoring by the same grader, a quality known as intra-rater
reliability. Inter-rater reliability, on the other hand, means that the same test, graded by
different graders using the same scoring criteria should yield similar results (Bachman, 1990;
Bachman & Palmer, 1996). Any consultation among graders threatens inter-rater reliability.
Graders can pull each other one way or another, arriving at a consensus in the end. The means
of all the scores are often used as the final score. For the sake of reliability, however, each
scoring should be done independently of the others, ensuring the integrity of grading criteria,
intra-rater and inter-rater reliability (Angelelli, 2009; Eyckmans et al., 2009).
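To make the notion of rater consistency more concrete, the following minimal Python sketch computes a Pearson correlation between two raters’ independently awarded scores for the same set of translations. The scores are invented for illustration, and the correlation coefficient is only one of several possible reliability indices; the passage above does not prescribe a particular one.

    from math import sqrt

    def pearson(x, y):
        # Pearson correlation between two equally long lists of scores.
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        var_x = sum((a - mean_x) ** 2 for a in x)
        var_y = sum((b - mean_y) ** 2 for b in y)
        return cov / sqrt(var_x * var_y)

    # Hypothetical scores awarded independently by two raters to the same six translations.
    rater_1 = [14, 11, 17, 9, 15, 12]
    rater_2 = [13, 12, 16, 10, 14, 13]

    # A value close to 1 suggests high inter-rater reliability; intra-rater reliability
    # could be estimated the same way from one rater's scores in two separate sessions.
    print(round(pearson(rater_1, rater_2), 2))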
Validity in general refers to the appropriateness of a test or any of its components. A
test is said to be valid to the extent that it measures what it is supposed to measure (Alderson,
Clapham, & Wall, 1995; Henning, 1987); however, “it is not possible for a test to be valid without first being reliable” (Henning, 1987, p. 90).
There are several kinds of validity. The first distinction should be made between
empirical and non-empirical validity (Henning, 1987, p. 93), or internal and external validity
(Alderson et al., 1995, p. 172). Empirical or internal validity involves criterion-related validities; non-empirical or external validity involves face or content validity and response
validity. Content or face validity is concerned with whether or not the content of the test is
representative and comprehensive for the test to be a valid measure of what it is supposed to
measure (Henning, 1987, p. 94). Alderson et al. (1995, pp. 172-173) make a distinction
between face validity and content validity: the first one refers to the “surface credibility or
public acceptability of the test” (p. 172), whereas the second one to the “representativeness or
sampling adequacy of the content” (p. 173). Response validity describes the extent to which
test-takers responded as expected by the test developers (Alderson et al., 1995; Henning,
1987).
Empirical, criterion-related validity, referred to by Alderson et al. (1995) as external validity, includes concurrent validity and predictive validity. Concurrent validity measures
how well a new test compares to a well-established test. It also can refer to testing two groups
at the same time (concurrently). Concurrent validity is closely related to predictive validity,
which requires comparing test scores to a subsequent targeted behavior (e.g. entrance exam
scores to successive annual grades), meaning that the external measures will be gathered some
time after the test has been given (J. C. Alderson et al., 1995). Predictive validation is
common with proficiency tests, which predict how well the test-taker will perform in the
future.
Construct validity is empirical in nature because it involves data gathering; however, it does not have any particular validity coefficient associated with it. The aim of construct validation is to prove that the constructs being measured are valid (Henning, 1987), or, according to the explanation provided by Alderson et al. (1995, p. 183), it “is measuring how well test performance can be interpreted as a meaningful measure of some characteristics or quality”.
Most researchers and translation tutors would agree that translation evaluation lacks a general framework (Colina, 2008; Garant, 2009), although in recent years it has become “an up and coming topic within the field of translation studies” (Garant, 2009, p. 5). Translation products are most often assessed holistically or analytically, based on rating scales. A rating scale, also referred to as a scoring rubric, is defined as
…a scale for the description of language proficiency consisting of series of
constructed levels against which a learner’s performance is judged… Typically
such scales range from zero mastery to an end-point representing the well-
educated native speaker. The levels or bands are characterized in terms of what
subjects can do with the language and their mastery of linguistic features (such as
vocabulary, fluency, syntax, cohesion)… Raters and judges are normally trained
in the use of scales so as to ensure the measure’s reliability (Davies et al., 1999,
pp. 153-154).
There are different types of rating scales for scoring tests. A traditional distinction is between
holistic and analytic scales. The classic definition of holistic assessment in the context of
writing assessment is
any procedure which stops short of enumerating linguistic, rhetorical, or
informational features of a piece of writing. Some holistic procedures may specify
a number of particular features and even require that each feature be scored
separately, but the reader is never required to stop and count or tally incidents of
feature (Cooper, 1977, p. 4).
When using a holistic approach, the assessor reads the translated text, scores it on the basis of a global impression, and decides how it reads in the target language and how true the content is to the original. Optimally, the assessors are provided with detailed instructions:
As you grade, you will underline anything in the translation that ‘does not sound
right’, in line with holistic method, without giving specific information about the
nature of the error or applying any kind of scoring parameter. At the end, you will
supply a grade between N1–N2 (two end points of scoring), which you feel
corresponds to the impression you obtained from the translation as a whole (Eyckmans et al., 2009, p. 91).
This is highly subjective in nature: personal opinions, feelings or tastes influence the
interpretation, so the assessment of the same translation may result in diverging scores from
different assessors (Eyckmans, Anckaert, & Segers, 2016a; Garant, 2009). In Garant’s (2009,
p. 10) understanding, the term ‘holistic’ refers “to a systematic way in which the teacher
arrives at an overall impression of the text as opposed to relying on a discrete points-based
scale”. Instructors approaching assessment at discourse level break down the text to paragraph
level for better results. The emphasis is on the content, not on the mistakes, assuming that
focusing on errors can be counterproductive, whereas rewarding good performance and focusing on what test-takers can do may result in a translated text which feels, reads and makes sense like a text written in the target language, and is true to the original.
Table 21
Waddington’s scale for holistic assessment (Waddington, 2001, p. 315)
Level 4 (mark 7/8) – Accuracy of transfer of ST* content: almost complete transfer; there may be one or two insignificant inaccuracies; requires a certain amount of revision to reach professional standard. Quality of expression in TL**: large sections read like a piece originally written in TL; there are a number of lexical, grammatical or spelling errors. Degree of task completion: almost completely successful.
Level 3 (mark 5/6) – Accuracy of transfer: transfer of the general idea(s) but with a number of lapses in accuracy; needs considerable revision to reach professional standard. Quality of expression: certain parts read like a piece originally written in TL, but others read like a translation; there are a considerable number of lexical, grammatical or spelling errors. Degree of task completion: adequate.
Level 2 (mark 3/4) – Accuracy of transfer: transfer undermined by serious inaccuracies; thorough revision required to reach professional standard. Quality of expression: almost the entire text reads like a translation; there are continual lexical, grammatical or spelling errors. Degree of task completion: inadequate.
Level 1 (mark 1/2) – Accuracy of transfer: totally inadequate transfer of ST content; the translation is not worth revising. Quality of expression: the candidate reveals a total lack of ability to express himself adequately in TL. Degree of task completion: totally inadequate.
*ST: source text; **TL: target language
Waddington (2001) designed a five-level assessment scale which conceptualizes translation competence as a whole while requiring the assessor to consider three aspects of the translator’s performance. For each level, there are two possible marks, offering the assessor the freedom to distinguish between candidates within the five levels (Table 21). This scale treats the translated text holistically, and even though it uses descriptors for the different levels, the possibility of considering three different aspects of performance still lets the assessor rely on his or her personal interpretation, thus making the assessment rather subjective.
Analytical evaluation instruments, on the other hand, are based on the number of
errors (sometimes good decisions), which are often categorized according to importance and
nature (Conde, 2013).
Researchers, who are aware of the fact that measuring translation quality is a
subjective process, as it relies on human judgment, propose to base the assessment on
analytical grids which, in their opinion, represent objective evaluation criteria (Orlando,
2011). Their proposal is supported by scholars who believe that there is no universally accepted evaluation model in the world of translation (see, e.g., Pym, 2014). The grids
traditionally consist of a detailed taxonomy of different kinds of inaccuracies in grammar, text
cohesion, word choice, etc.
However, the analytical approach does not adequately reduce the subjectivity of evaluation, mainly because of disagreements between raters on the weighting of translation mistakes (Eyckmans et al., 2009; Eyckmans, Anckaert, & Segers, 2016), and because there is a negative bias: raters look for errors rather than strengths in the text. The instructions for analytical graders are also different:
The analytical method entails that the translation be marked according to the
evaluation grid provided. This method implies that the corrector underlines every
error and provides information in the margin (or in the word in the proofing area)
as to the nature of the error (e.g. ‘CT’ for content errors or misinterpretations,
‘GR’ for grammatical errors, etc.). Finally, a number of points will be deducted
from a total of X points for each error found, e. g. –2/CT error; –0,5/GR mistake,
etc. (Eyckmans et al., 2009, p. 91).
When discussing analytical scales of assessment, we first have to look at the model of communicative competence devised by Bachman and Palmer (1982, p. 451), shown in Table 32, which provided the basis for their classic analytical scale and for most of the other analytical scales devised later.
Table 32
The model of communicative competence by Bachman and Palmer (1982, p. 451)
Communicative competence
- Grammatical competence: morphology, syntax
- Pragmatic competence: vocabulary, cohesion, organization
- Sociolinguistic competence: register, nativeness, non-literal language
Their operationalized assessment instrument (Tables 33, 34, 35) consisted of three separate scales, one for each of the main traits of linguistic, pragmatic and sociolinguistic competence, clearly highlighting what to measure. Analytic scales of this type require the rater to pay attention to specific features of the trait; their basic aim is to find out what the candidates know and how they can perform when demonstrating the tested competence.
The usefulness of a scale is always based on the careful and detailed definition, in both linguistic and functional terms, of the points included (Fulcher, 2014). Bachman and Palmer’s scales seem to meet this requirement in almost every respect; however, what counts as “extensive”, “large”, “small” or “limited” vocabulary, or what the difference between “extensive” and “large”, or between “small” and “limited”, means, is left undefined.
Table 33
The Bachman and Palmer scale of sociolinguistic competence (1982, pp. 456-457)
Table 34
The Bachman and Palmer scale of linguistic competence (1982, pp. 456-457)
Table 35
The Bachman and Palmer scale of pragmatic competence (1982, pp. 456-457)
The use of analytical grids is nowadays widespread in the field of translation assessment (Martínez, 2014), where scoring is based on fixed criteria (see Table 36). As has been pointed out, the main aim of using such grids is to move away from the potential subjectivity of holistic assessment to a replicable system based on the identification of errors. However, decisions on weighting the errors also involve subjective judgement, and, as research has shown, pointing out wrong solutions and focusing on errors is not necessarily the best way to assess a translation product (Garant, 2009; Phelan, 2017).
Table 36
An analytical grid used by the NAATI* for product-oriented evaluation (Orlando, 2011, p. 302)
Terminology / Word choices (affecting more or less localized meaning) – scored 0, 2, 4, 6 or 8:
0: incorrect choices made, with very significant impact on meaning;
2: inadequate choices made, with significant impact on meaning;
4: inadequate choices made, with some impact on meaning;
6: good choices made; minor amendments required;
8: very good choices; no changes required.
Grammatical choices / Syntactic choices (producing more or less distortion of the meaning) – scored 0, 2, 4, 6 or 8:
0: incorrect choices made, with very significant impact on meaning;
2: inadequate choices made, with significant impact on meaning;
4: inadequate choices made, with some impact on meaning;
6: good choices made; minor amendments required;
8: very good choices; no changes required.
This assessment tool is similar to Bachman and Palmer’s scale as it uses detailed descriptors,
although the error counting feature of analytical grids is much more pronounced in it. The
candidate’s final grade is calculated by adding up the awarded points: the higher the sum, the
better the grade.
The analytic scale proposed by Eyckmans et al. (2009, p. 92) is quite different (Table 37). It is based on very strict error counting: each error is “punished” with a negative score, and these scores add up and are, in the end, deducted from the total of X points.
Table 37
Eyckmans, Anckaert and Segers’ analytical grid (Eyckmans et al., 2009, p. 92)
Meaning or Sense (−1): any deterioration of the denotative sense: erroneous information, nonsense, important omissions
Misinterpretation (−2): the student misinterprets what the source text says: information is presented in a positive light whereas it is negative in the source text, confusion between the person who acts and the one who undergoes the action
Calque (−1): cases of a literal translation of structures, rendering the text “un-TL”
Register (−0.5): translation that is too (in)formal or simplistic and not corresponding to the nature of the text or extract
Grammar (−0.5): grammatical errors in TT (e.g., wrong agreement of the past participle, gender confusion, wrong agreement of adjective and noun) + faulty comprehension of the grammar of the original text (e.g., a past event rendered by a present tense), provided that these errors do not modify the in-depth meaning of the text
Addition (−1): addition of information that is absent from the source text (stylistic additions are excluded from this category)
Spelling (−0.5): spelling errors, provided they do not modify the meaning of the text
Punctuation (−0.5): omission or faulty use of punctuation; caution: the omission of a comma leading to an interpretation that is different from the source text is regarded as an error of meaning or sense
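As an illustration of how this kind of error-deduction scoring works in practice, the short Python sketch below applies the penalty weights from Table 37 to a set of hypothetical error counts; the starting total of X points and the example errors are invented:

    # Penalty weights per error type, as listed in Table 37 (Eyckmans et al., 2009).
    PENALTIES = {
        "meaning": 1.0,
        "misinterpretation": 2.0,
        "calque": 1.0,
        "register": 0.5,
        "grammar": 0.5,
        "addition": 1.0,
        "spelling": 0.5,
        "punctuation": 0.5,
    }

    def analytic_score(error_counts, total=20):
        # Deduct the weighted penalties from a (hypothetical) starting total of X points.
        deduction = sum(PENALTIES[kind] * n for kind, n in error_counts.items())
        return total - deduction

    # Example: two grammar errors, one misinterpretation and one spelling error
    # cost 0.5*2 + 2 + 0.5 = 3.5 points, leaving 16.5 out of 20.
    print(analytic_score({"grammar": 2, "misinterpretation": 1, "spelling": 1}))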
The need for more empirical research on translation assessment has repeatedly been expressed
during the past decades. Attempting to “free evaluation of construct-irrelevant variables”
(Eyckmans & Anckaert, 2017, p. 43) that characterize both holistic and analytical scoring,
some scholars have taken up new directions, and instead of criterion-referenced translation
tests started to develop norm-referenced ones (Eyckmans et al., 2009). The norm-referenced
approach in translation assessment consists mainly of transferring the item concept of
standard language testing practice to the field of translation assessment. The first attempt to
assess translation competence by sample-based (norm-referenced) methodology was the
Calibration of Dichotomous Items (CDI) method developed by Eyckmans et al. (2009). A few
years later the Preselected Items Evaluation (PIE) was introduced (Kockaert & Segers, 2017;
Van Egdom et al., 2019). Both methods were developed to reduce the problem of subjectivity
in assessments. The PIE method (Kockaert & Segers, 2017) is an adapted, practical, pragmatic version of the CDI method (Eyckmans et al., 2009; 2016; Eyckmans & Anckaert, 2017). They are both calibration methods, since they are based on the practice of calibrating segments of translation, which allows the construction of standardized tests of translation. They are also both dichotomous, as they make a distinction between correct and incorrect solutions; however, they do not distinguish between levels of error (Dastyar, 2019; Kockaert & Segers, 2017).
With the CDI method, the translations are scored on the basis of the test-takers’
performance on a pre-selected set of translated segments called ‘calibrated items’ or items for
short. Every element of the text that contributes to the measurement of differences in
translation ability between test-takers acquires the status of an item (Eyckmans & Anckaert,
2017). In contrast to the criterion-referenced approach, the CDI method uses a pre-test
procedure to decide which text segments demonstrate discriminating power. In this procedure,
the segments are determined on the basis of the performance of a representative group of translation trainees.
In the PIE method translations are also scored on the basis of test-takers’ performance
on a set of translated segments, but these segments are preselected by the translation grader by
calculating item difficulty values (p value) and item discrimination indices (d index)
(Eyckmans & Anckaert, 2017; Van Egdom et al., 2019). Item difficulty is the percentage of
test takers who answer the item correctly. To get the item difficulty, the number of candidates
answering the item correctly is divided by the total number of candidates answering the item.
To measure the discrimination value of the preselected items, the number of candidates with high overall scores who answered a particular item correctly is compared with the number of candidates with low overall scores who answered the same item correctly.
The d index is the number of candidates in the top group who answered the item correctly minus the number of candidates in the bottom group who answered the same item correctly. Items which have too high or too low p values or weak discriminating power may be removed from the list of pre-selected items and replaced by other items (Eyckmans et al., 2009; 2016; Eyckmans & Anckaert, 2017; Kockaert & Segers, 2017). As in the CDI method, “the correct and erroneous solutions are determined” (Kockaert & Segers, 2017, p. 151). The pre-selected items can relate to different error types: vocabulary, grammar, spelling, style, etc. To establish the test score, only items with good discriminating power (> .3; Eyckmans & Anckaert, 2017, p. 44) are considered. Although both CDI and PIE seem to be
effective and objective methods, they have a drawback: they are labor-intensive, especially CDI, which causes a growing concern over using this method in translation training contexts (Dastyar, 2019). PIE, which, according to its developers, “ensures objectivity, cross-candidate transparency and equality in scoring” (Kockaert & Segers, 2017, p. 152), seems to be more feasible.
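The item statistics behind PIE can be illustrated with a short Python sketch. The response data, group size and ranking procedure below are invented for illustration; only the definitions of the p value and d index, and the > .3 cut-off cited above, come from the sources discussed here.

    def item_difficulty(responses):
        # p value: the proportion of test-takers who answered the item correctly.
        # `responses` is a list of 1 (correct) / 0 (incorrect) for one item.
        return sum(responses) / len(responses)

    def discrimination_index(responses, totals, group_size):
        # d index: correct answers in the top group minus correct answers in the
        # bottom group (expressed here as a proportion of the group size).
        # `totals` are the candidates' overall test scores, used to rank them.
        ranked = sorted(zip(totals, responses), key=lambda pair: pair[0], reverse=True)
        top = [r for _, r in ranked[:group_size]]
        bottom = [r for _, r in ranked[-group_size:]]
        return (sum(top) - sum(bottom)) / group_size

    # Hypothetical data: one translated segment answered by eight candidates.
    item_responses = [1, 1, 1, 0, 1, 0, 0, 0]
    overall_scores = [34, 30, 29, 25, 24, 20, 18, 15]

    p = item_difficulty(item_responses)                      # 0.5
    d = discrimination_index(item_responses, overall_scores, group_size=3)
    # Keep the item only if it discriminates well enough (> .3, as cited above).
    print(p, d, d > 0.3)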
However, its implementation raises an important question: what do assessors do when a test-taker proposes an incorrect solution for an item which was not preselected? In such a case, the authors suggest checking the performance of all the candidates who took the given test. If there is only one candidate who proposes a wrong solution for the non-preselected item, the item should not be included in the translation test; if the item has a good item difficulty and discrimination index, it may be included.
Segers and Kockaert compared their PIE method to the other three most frequently
used methods: the holistic method, the analytical method and the CDI method (Table 38). As
data in the table shows, the holistic evaluator considers the translation as a whole and bases
the judgment on an overall impression. This method is fast but subjective and the value
judgments of different evaluators on the same translation can vary to a large extent. What one
evaluator considers a good and creative translation can be seen as unacceptable by another
(Eyckmans et al., 2009; 2016), indicating that the inter-rater reliability in holistic assessment
tends to be low.
Table 38
Evaluation methods: overall comparison (Kockaert & Segers, 2017, p. 153)
Feature | Holistic | Analytical | CDI | PIE
Dichotomous | - | - | √ | √
Calibration | - | - | √ | √
Inter-rater reliability | - | - | √ | √
Criterion referenced | +
Norm referenced | +
The analytical method, as it includes descriptors that make scoring easier, is regarded as more reliable and valid than holistic methods: the evaluator uses a matrix which consists of a number of error types and a number of error levels (Kockaert & Segers, 2017). Preparing an analytical scale to assess a test of translation competence requires more time than preparing an assessment scale using the holistic method, but the translator will have a better understanding of what is correct and what is wrong in the translation. However, this method is no guarantee of objectivity, as different evaluators do not always agree with each other: the same error can be a minor one for one evaluator and a serious one for another (Eyckmans et al., 2009; 2016).
As far as issues of reliability and validity related to these methods are concerned,
empirical data suggests that CDI is more reliable than PIE in assessing translation competence
(Eyckmans & Anckaert, 2017, p. 50). However, according to other research, it is the PIE
method which serves reliability better in the context of translation evaluation (Kockaert &
Segers, 2017, p. 160; Dastyar, 2019, p. 45). A comparison of the two methods shows their
basic similarities and the difference in their item selection processes (Table 39).
Table 39
CDI versus PIE (Kockaert & Segers, 2017, p. 155)
CDI: same value judgment among evaluators; reinforces its potential as an assessment method for a more reliable and valid certification of translation competence; items selected on the basis of docimological criteria.
PIE: same value judgment among evaluators; reinforces its potential as an assessment method for a more reliable and valid certification of translation competence; items selected on the basis of translation brief criteria (option: translation-brief-relevant items reselected on the basis of docimological criteria).
According to the developers’ beliefs (Kockaert & Segers, 2017), the PIE method is more
practical than CDI, and it offers the ultimate advantage of reliability in the context of
translation assessment: each test-taker is assessed on the same items, which have been pre-
selected based on their p-values and d-indices. It is committed to a binary logic: an answer is
either correct or it is not; there is no weighting of errors in the evaluation process.
Theoretically, it guarantees inter-rater and intra-rater reliability, and it is time efficient.
However, it also has weaknesses: it allows subjective influences in both the selection and the assessment phases, and it does not account for the text as a whole (Van Egdom et al., 2019).
When assessing translations, several factors have to be taken into consideration, according to
Dróth (2011), who listed the most important ones. The first one is the assessment situation,
including the frames of assessment (MA level in higher education, language school / agency,
translator exam, etc.), the translator and the assessor. The most integrated system of criteria is
used in the evaluation system of language exams. Another important factor concerns the aim
of assessment, which usually is the assessment of performance and competence. Concerning translations, Hönig (1998) distinguishes therapeutic assessment (focusing on the student and student competence in the training process) and diagnostic assessment (with a focus on the hypothetical response of the translation user at the end of the training process or at the workplace, and also on finding strengths and weaknesses to tailor teaching needs) (see also
Klaudy, 2005). The third element is the subject of assessment, which, from the viewpoint of
translator training, is the translation competence, including the five sub-competences
identified by Kockaert and Segers (2017) as (1) translation competence, (2) linguistic
competence (source and target language), (3) cultural competence (source and target
language), (4) research competence, and (5) technical competence.
Translation assessment aims at those elements of translation competence which are
relevant in the given assessment situation and correspond with the aim of assessment (Dróth,
2011). The emphasis is always on the quality of the translation: adequacy with target
language norms, and usability of the translated text in the target language. Knowledge of
idioms and culture-related expressions, adequate knowledge and use of terminology and good
management of traditional and IT tools also have an accentuated role in evaluation.
When writing about the assessment methods in Hungarian translator training
institutions, Dróth (2011; 2017), in line with international research, also mentioned holistic
and analytic methods and discussed their advantages and drawbacks, most of all their
subjective nature. To overcome the problem of subjectivity, she suggested mixing the two
methods, emphasizing that there was no guarantee that it would strengthen the validity and the
reliability of assessment. Another solution could be using descriptors, which is common
practice in language examinations (see CEFR, 2001), although the first descriptors for
mediation appeared only in the most recent CEFR Companion Volume (2018), offering ideas
to assess translations at institutions training translators.
In her study Dróth (2011) compared the assessment criteria of nine Hungarian
translator training institutions, all of them at universities. At the Training Center for
Translators and Interpreters at the Faculty of Humanities, ELTE University, Budapest, the
certificate can be earned by preparing a “print ready” translation from the source language
into Hungarian. Grade 5 (excellent) is awarded when the translation can be published without
editing and grade 1 (fail) when the translation is not fit for editing. On a scale (Dróth, 2011, p.
19; Klaudy, 2005), the grade is
(5) excellent, if the translation gives back the full content of the source text, the
target language text contains no mistakes, so the translation can be published
without editing
(4) good, if the translation gives back the full content of the source text, but the
target language text contains minor (word level) editing in the target language
(3) satisfactory, if the translation in one or two cases is different from the content
of the source language text, and the target language text needs editing at word
and sentence level
(2) pass, if the translation in more than three cases is different from the content of
the source language text, and the target language text needs substantial
correction at word and sentence level and in Hungarian language use,
punctuation and spelling, but it is still worth editing
(1) fail, if the translation in more than three cases is different from the content of
the source language text, and the target language text contains so many
mistakes at word and sentence level and in Hungarian language use,
punctuation and spelling that it is not worth editing – it is simpler to
re-translate the source text.
The above descriptors are characteristic of holistic assessment (can be published without or with editing / is not worth editing), although they contain at least three analytic elements, related to the content of the source text, the quality of the target language text and the level of error. Its
reliability is ensured by a detailed translation and assessment guide (Klaudy, 2005).
The Faculty of Economics and Social Sciences, Szent István University, Gödöllő,
similarly to the assessment system used at ELTE, uses a four-level analytic scale targeted at (1) content, (2) style, (3) grammar and (4) marketability and special skills. The levels are weighted differently. The assessment includes a holistic element: the reviewer's short
evaluation. The reliability of the assessment is ensured by hiring internal and external
reviewers and using a detailed evaluation guide.
The Centre for Agricultural Sciences at the University of Debrecen applies a simple
assessment system, which evaluates the following components: (1) mediating information, (2)
dictionary use, (3) special language use, (4) clarity, style, (5) emphasis shift. The averages of
the scores given for each component are used as the final score. This structure is very permissive, which increases the subjectivity of the evaluation.
The Faculty of Social Sciences at the University of Debrecen also offers a translator
training program. Their assessment criteria include two main ones: (1) special requirements
(terminology, word choice, register, genre expectations); (2) general features (the aim of the
translation and its readers, the type of the text and its genre characteristics, Hungarian style,
Hungarian spelling, formal criteria, etc.).
At the Foreign Language Institute, University of Szeged, the assessment is aimed at (1) content / text equivalence, (2) cohesion / coherence, (3) register / style, (4) word choice / terminology, (5) formal expectations, (6) accuracy and (7) genre characteristics.
The Corvinus University of Budapest borrowed the assessment sheet of the University of
Westminster, London (Table 40).
Table 40
Assessment sheet used at the University of Westminster and the Corvinus University (Dróth, 2011, p. 24)
Translation component | Aspects of assessment | Maximum score
Overall impression | Coherence and cohesion; text structure | 2
Total points | 20
The Faculty of Medicine at the University of Szeged also trains professional translators. The exam translations are evaluated with a seven-component scale: (1) general impressions (0,1,2,3); (2) information transfer (0,1,2,3,4); (3) style (0,1,2,3); (4) terminology / genre / text type (0,1,2,3,4); (5) language accuracy / syntax / morphology (0,1,2,3,4); (6) spelling, punctuation and formal requirements (0,1,2,3); (7) translation diary (0,1,2,3). Each component is weighted differently, as shown by the scores in brackets. The markers keep a so-called
evaluation diary (the name of the marker, the name of the test-taker, the target language title
of the translation, the number of grammar and spelling mistakes, style/word choice, number
and type of omissions, negative and positive elements, questions / remarks / suggestions),
which might include holistic elements.
The evaluation of translations at the Budapest University of Technology and
Economics is also based on assessment scales with descriptors, which could not be interpreted
without the error list they use. This list includes the fields of assessment, the errors together
with their weighting and scoring (Table 41).
Table 41
The error list of the Budapest University of Technology and Economics (simplified; based on Dróth, 2011, p. 25)
The final grades are calculated with the help of descriptors, according to which a translation is
(5) excellent, when the number of the collected points is not higher than 4,
regardless of the types of mistakes they were given for;
(4) good, when the number of the collected points is between 5 and 7;
(3) satisfactory, when the number of the collected points is between 8 and 10;
(2) pass, when the number of the collected points is between 11 and 13;
(1) fail in case of several major mistranslations or when the number of the
collected points is higher than 14. (Dróth, 2011, p. 26)
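Read as an algorithm, the descriptors above amount to a simple thresholding of the collected error points. The following minimal sketch (in Python, with names of my own choosing) makes that mapping explicit; note that the quoted descriptors leave the case of exactly 14 points unspecified, so its placement below is an assumption.

def bme_grade(points, has_major_mistranslations=False):
    """Map collected error points to a grade following the descriptors above
    (Dróth, 2011, p. 26). Treating exactly 14 points as a fail is an assumption:
    the quoted text only says 'higher than 14'."""
    if has_major_mistranslations or points >= 14:
        return 1   # fail
    if points <= 4:
        return 5   # excellent
    if points <= 7:
        return 4   # good
    if points <= 10:
        return 3   # satisfactory
    return 2       # pass (11-13 points)

print([bme_grade(p) for p in (3, 6, 9, 12, 15)])   # -> [5, 4, 3, 2, 1]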
This detailed and twofold assessment system (the analytic grade with the error list and the use of descriptors) should guarantee the reliability of the evaluation; however, it contains
redundant elements (overlaps), and does not evaluate the information content of the text.
These features reduce the reliability of the assessment, as it does not include the most
important segment of translations: the message the translated texts carry. The other problem
with this scale is its pronounced error-centeredness. The emphasis is on the committed errors;
the good choices remain unobserved and unmarked.
At the Kodolányi János University of Applied Sciences the assessment involves the
language of the translations (style, register); text accuracy, coherence, terminology and
language use (Dróth, 2011, p. 27).
Dróth (2011) compared the most frequently used assessment criteria at Hungarian
universities. As is shown in Table 42, there are important differences between the listed
translation training institutions in this respect, indicating the absence of a common and
sensible assessment tool, which would help to achieve the objectivity of the evaluation,
resulting in a stronger intra- and inter-rater reliability.
Table 42
The ten most frequently used assessment criteria at the examined Hungarian translator training institutions
(based on Dróth, 2011, p. 27)
1. Language use (spelling, punctuation) 8
2. Terminology 7
3. Equivalency at word level 7
4. Style / register 7
5. Coherence 6
6. Equivalency at sentence level 5
7. Genre characteristics 5
8. Paragraphing, formal criteria 5
9. Cohesion 4
10. Nominal/verbal, linear/non-linear, word-for-word / free translation 3
If we compare the criteria listed in Table 42 to the descriptors listed under the title Translating a written text in writing in the new CEFR Companion Volume (2018, p. 114) (see Table 43), which states that “professional translators are usually operating at a level well above C2”, we will see that this scale does not address translation competence or any typical translation activities. (C2 in this case is the middle level of a five-level scale for literary translation produced in the PETRA project, a network for the education and training of literary
translators.) It specifies the languages involved, providing a functional description of the
language ability necessary to reproduce a source text in another language. The key concepts
of the scale include: (1) comprehensibility of translation; (2) the extent to which the original
formulations and structure influence the translation; and (3) capturing nuances in the original
text (CEFR, 2018, p. 113).
Table 43
CEFR descriptors for B2, C1 and C2 levels for task “Translating a written text in writing” (CEFR, 2018, p. 114)
Level Descriptor
C2 Can translate into (Language B) technical material outside his/her field of specialisation
written in (Language A), provided subject matter accuracy is checked by a specialist in the
field concerned.
C1 Can translate into (Language B) abstract texts on social, academic and professional
subjects in his/her field written in (Language A), successfully conveying evaluative aspects
and arguments, including many of the implications associated with them, though some
expression may be over-influenced by the original.
B2 Can produce clearly organised translations from (Language A) into (Language B) that
reflect normal language usage but may be over-influenced by the order, paragraphing,
punctuation and particular formulations of the original.
After the comparison, it becomes evident that the ten most frequent assessment criteria applied at the above discussed Hungarian translator training institutions (albeit in different wording) are in line with the CEFR descriptors. However, there are also differences; the most significant one is that the CEFR scale aims to assess what a test-taker CAN do, whereas the others focus on errors. Also, those criteria do not form a common system, so the evaluation of the same translation by different assessors can show considerable differences depending on the institution where the assessment was done.
(2) Major grammar error: conjugational, tense, word order, syntax errors, use of
improper prepositions in phrasal verbs;
(3) Inappropriate terminology: the apparent absence of special vocabulary, the misuse of
terminology;
(4) Major punctuation and spelling errors, e.g. in geographical names or proper nouns,
the improper use of uppercase and lowercase; ignoring the error messages and the
spell check function of MS Word;
(5) Uneducated language use, the apparent lack of practice in written communication.
Minor error:
(1) An error that does not impair the overall meaning of the text and can be corrected at
word level;
(2) A minor grammar error which does not influence understanding;
(3) A minor, usually spelling or punctuation mistake which does not change the meaning
at sentence level.
The type of the error (H / h) is marked by the evaluator in the proofing section. The final
grades are calculated by counting and weighting the errors. If the number of minor errors is
higher than six, that counts as a major error.
Both tasks are assessed by two teacher-evaluators, who mark the translations following the
guidelines. The mean of the two grades is the final grade. In case of one grade difference, the
test taker gets the higher one. In case any of the two tasks is 1 (fail) by both evaluators, the
final grade is 1 (fail), but only the unsuccessful part of the test has to be taken again.
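The counting and combination rules described in this and the preceding paragraph can be summarized in a short sketch (Python; all names are mine). The conversion of surplus minor errors into a major error follows one possible reading of the rule above, and the step from error counts to a component grade is deliberately left out, since the corresponding part of the guideline is not reproduced here.

def adjusted_error_counts(major, minor, minor_limit=6):
    """One reading of the rule above: if more than six minor errors are found,
    the surplus is registered as one additional major error (assumption)."""
    if minor > minor_limit:
        return major + 1, minor_limit
    return major, minor

def component_grade(grade_a, grade_b):
    """Combine the two raters' grades for one component (EN-HU or HU-EN):
    the final grade is the mean of the two; a one-grade difference is resolved
    in the test-taker's favour; a fail from both raters fails the component."""
    if grade_a == 1 and grade_b == 1:
        return 1
    if abs(grade_a - grade_b) == 1:
        return max(grade_a, grade_b)
    return round((grade_a + grade_b) / 2)

print(adjusted_error_counts(2, 9))                    # -> (3, 6)
print(component_grade(3, 4), component_grade(2, 2))   # -> 4 2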
As is apparent, the assessment here is based on counting errors, and, although the types of errors are defined, the scale leaves room for subjective judgement in the assessment. What is judged as a major error by one marker may be judged as a minor one by the other, or it may be overlooked. Descriptors such as “uneducated language use” or the “apparent lack of practice in written communication” are opaque and difficult to grasp. All this suggests that
even the most detailed assessment scales can fail in fulfilling their aims if the descriptors or
components included reduce not only inter-rater, but also intra-rater reliability.
Assuming that the scale described above (“the old UP scale”) did not meet important
reliability requirements, an empirical study was conducted with a focus on the reliability issues
of translation assessment.
5.4 Study 3: An inquiry into how the ‘old’ UP scale of assessment worked
This part of the dissertation discusses the preliminary study that was conducted in order to
find out how the assessment scale described in the previous section (5.3) was applied in
authentic situations, when it was used to assess exam translations. The other aim of this study
was to examine how consistently the raters used this evaluation instrument and how the final
grades were calculated.
In this study I focused on two important aspects of assessment: inter-rater reliability and rater
consistency. In addition to these, I also wanted to inquire into what the raters thought about
the instrument they used. Accordingly, four research questions were formulated:
(1) How does the rating scale perform in terms of inter-rater reliability?
(2) How consistent are the raters in their assessment?
(3) How do the raters evaluate the assessment system they apply to assess translation
students’ work?
(4) What modifications would they recommend for making it more appropriate for
assessing the quality of translation and students’ translation competence?
5.4.2 Participants
The participants of this study were 16 BA translation specialization students and their four
teachers who assessed their translations. In addition to them, the head of the translation
program was also involved. As both students and teachers were promised anonymity, I will
refer to the students as test takers: TT1, TT2, etc., to the teacher-raters as Rater 1, Rater 2, etc.
All students in the graduating group took the qualifying exam; thus, the rate of participation
was 100% and the 16 test-takers formed a full sample of students specializing in translation
studies.
In order to answer the first two research questions, data was collected in the form of test
scores given to 16 students for their two translations in their final exam by four raters using
the evaluation sheet described in section 5.3. The database comprised a total of 32
translations: 16 translations from English to Hungarian and 16 translations from Hungarian to
English collected in electronic format in the fall semester of 2017. The source text in the first
case was a 355-word-long (2,105 characters) text from the field of social sciences about a
well-known literary reviewer of the late 19th century; in the second case it was a 271-word
(2,050 characters) business text about a country assessment by the International Monetary
Fund.
To examine the reliability of the scale in use I compared the assessment of the 16
exam translations from English into Hungarian, and also from Hungarian into English. The
translations in both cases were assessed by two raters, who worked independently from one another. All the errors marked by the raters were counted manually. First, I compared the two raters' evaluations of each translation; then, I looked at how consistently the same rater treated the errors in the sixteen translations. The differences in their assessment, including the errors and the grades, are organized and presented in Tables 45 to 55. The inter-rater reliability of the raters' judgments was also calculated. To do this, I used SPSS, concentrating on three important coefficients of internal consistency and reliability: Cronbach's alpha, the Intra-class Correlation Coefficient (ICC) and Krippendorff's alpha (Table 44).
Table 44
Three basic reliability coefficients and their minimum acceptable values
Scale reliability: Cronbach's alpha, α ≥ 0.75 (Crocker & Algina, 2006, p. 142)
Internal consistency: Intra-class Correlation Coefficient (ICC), α ≥ 0.75 (Shrout & Fleiss, 1979, p. 426)
Inter-rater reliability: Krippendorff's alpha, α ≥ 0.80 (Krippendorff, 2004, p. 241)
In order to answer the third and fourth research questions, in December 2019 and January
2020 I conducted semi-structured interviews with the four raters and the head of the program.
Although I had a set of questions prepared beforehand (see Appendix D), I let the interviewees digress with the ideas they brought up during the interviews. Three of the interviews were conducted face to face and were recorded and then transcribed; in one case the answers were written up by the respondent and sent in an email. The length of the oral interviews was between 20 and 65 minutes, depending on the verbosity of the respondents.
The interview with the head of the program, also in January 2020, was less structured.
He had never acted as an assessor of students' translations; however, as a teacher and practising translator he offered to give his overall opinion of the program, the assessment
scale and also some ideas on how to make it better. As he did not like the idea of recording
him, I took hand-written notes of what he said.
As the 32 translations included the raters' grades and the number of the identified errors, it was not difficult to construct tables with the raw scores and the grades given by two raters for the 16 TTs' two translations, and to include the final grades – the mean of the two grades given by the two raters individually (Tables 45, 55).
The assessment tool was the scale – or rather a list of errors – described in the previous section (5.3), comprising major (H) and minor (h) errors. Table 45 shows how many of each error type the individual test-takers (TT) made and, after counting and adding up these errors, what grade they got for their translations. Those students who failed because both raters gave them a grade 1 (fail) for the same component (HU-EN or EN-HU) could re-sit the exam. Re-sit results are shown in brackets in Tables 45 and 55.
To answer RQ1, ‘How does the rating scale perform in terms of inter-rater reliability?’, first I examined the 16 EN-HU, then the 16 HU-EN translations. I compared the numbers of the identified errors, both major and minor, and also the grades given by the two raters. To provide
further evidence, I identified examples where the raters, following the same scale, judged the
mistakes differently.
A) English – Hungarian translations: inter-rater reliability
Table 45
Raw scores (H; h) and grades given by two raters for 16 test-takers’ EN-HU translation tests
EN-HU
Test-taker (TT) Rater 1 Rater 2
H h Grade H h Grade
TT1 2 11 3 1 9 4
TT2 4 11 1, (3) 7 16 1, (3)
TT3 3 2 3 11 8 1
TT4 2 2 4 2 6 4
TT5 1 4 5 2 5 4
TT6 0 8 5 0 9 5
TT7 0 5 5 6 8 1
TT8 4 6 2 3 7 2
TT9 8 7 1, (2) 6 14 1, (1)
TT10 1 6 5 4 6 2
TT11 7 8 1, (2) 6 9 1, (2)
TT12 4 2 2 4 8 1
TT13 3 6 3 2 11 3
TT14 3 5 3 3 12 2
TT15 3 3 3 2 8 3
TT16 3 4 3 6 6 1
Total 49 90 65 142
Mean 3.00 5.63 3.06 4.06 8.88 2.25
SD 2.191 2.872 1.436 2.792 3.030 1.390
To analyze the data, and to compute the most important coefficients of measurement (see
Table 44) I used SPSS. As the grades were calculated by the raters by counting and adding up
the numbers of errors at two levels (major/minor errors; H/h), I wanted to see the level of
agreement between the two raters concerning errors. Although the huge difference between
the total numbers of identified errors can be seen immediately (49 vs. 65 in the case of major errors, H; 90 vs. 142 in the case of minor errors, h), I looked for statistical proof. First, I identified the
frequencies of the different scores (number of identified errors) by the two raters.
Table 46 shows the frequency of major errors (H) identified by each rater in the English – Hungarian translations assessed.
Table 46
The SPSS frequency statistics for inter-rater reliability in the judgment of major errors in EN-HU translations
(N=16)
Rater 1 Rater 2
The highest number of major errors identified by Rater 1 in one text (TT9) was eight, whereas the highest identified by Rater 2, also in one text (TT3), was eleven. Standard deviation (SD) expresses by how
much the members of a group differ from the mean value for the group. In the case of Rater 1
the SD was 2.191, whereas in the case of Rater 2 it was 2.792, both are much higher than 1.0,
indicating very high variation. The difference between the two means (3.0; 4.06) was also
large, which, concerning major errors indicated considerable disagreement between the two
raters. The Cronbach’s alpha, the measure of internal consistency of scale reliability (also
calculated in SPSS) was 0.541, much lower than the acceptable α ≥ 0.75 (Crocker & Algina,
2006, p.142). The Intra-class Correlation Coefficient (ICC), another important measure of
internal consistency of scale reliability and the degree of agreement between two (or more)
raters with the minimum acceptable value of 0.75 (Shrout & Fleiss, 1979, p. 426) was 0.522,
indicating poor reliability. Krippendorff’s alpha, which is regarded to be the most general
155
agreement measure was 0.4536, compared to the required α ≥ 0.80 or the lowest conceivable
α ≥ 667 limit (Hayes & Krippendorff, 2007; Krippendorff, 2004, p. 241). These numbers
indicated that there was slight agreement between the two raters’ judgements concerning
major errors in the 16 texts they scored.
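The coefficients reported above were obtained with SPSS; as an illustration of what they measure, the short sketch below recomputes two of them directly from the two raters' major-error counts in Table 45, using NumPy. Cronbach's alpha follows the standard variance formula, and the ICC is computed in the two-way random, absolute-agreement, average-measures form of Shrout and Fleiss (1979), which appears to be the variant that matches the reported 0.522; Krippendorff's alpha is usually obtained from a dedicated package and is not re-implemented here. The function names are mine.

import numpy as np

def cronbach_alpha(x):
    """x: (test-takers x raters) matrix of scores."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def icc_avg(x):
    """Two-way random, absolute-agreement, average-measures ICC (Shrout & Fleiss, 1979)."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # test-takers
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    ss_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Major-error counts (H) of Rater 1 and Rater 2 for TT1-TT16, taken from Table 45
h_counts = np.array([
    [2, 1], [4, 7], [3, 11], [2, 2], [1, 2], [0, 0], [0, 6], [4, 3],
    [8, 6], [1, 4], [7, 6], [4, 4], [3, 2], [3, 3], [3, 2], [3, 6],
])
print(round(cronbach_alpha(h_counts), 3), round(icc_avg(h_counts), 3))  # approx. 0.541, 0.522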
Table 47
The SPSS frequency statistics for inter-rater reliability in the judgment of minor errors (h) in 16 EN-HU
translations (N=16)
Rater 1 Rater 2
The disagreement in measuring the errors was even more obvious in the case of minor errors
(Table 47). The statistics showed that the range of the number of minor errors per translation
was between 2 and 11 by Rater 1, whereas between 5 and 16 by Rater 2. The high SD indices
(2.872 and 3.030) indicated a large spread of identified errors by both raters. The reliability
statistics of 0.724 Cronbach’s alpha was very close to the unacceptable value (< 0.70). The
ICC value was 0.529, the Krippendorff’s alpha 0.1718, indicating a low agreement between
the two raters’ scores.
It is also interesting to look at a few examples where the raters judged the mistakes differently (Table 48). Although they followed the same scale, there were no translations in which the two types of errors were assessed in the same way: an error coded as ‘H’ by one rater was ‘h’ or no error for the other. For example, in the case of TT7, Rater 1 identified zero
major (H) and five minor (h) errors resulting in a grade 5 (excellent), whereas Rater 2 marked
six major (H) and eight minor (h) errors, resulting in a grade 1 (fail).
Table 48
Translations from English into Hungarian (EN – HU): Examples for differences in R1’s and R2’s coding of
errors
The final analysis in this part compared the grades given by the two raters (see Table 45). In
case of translations from English into Hungarian, only eight of the sixteen students (TTs 2, 4,
6, 8, 9, 11, 13, 15; 50%) were given the same grade by the two raters. In the most extreme
case (TT7) Rater 1 assessed the translation a grade 5 (excellent), whereas Rater 2 failed the
same translation.
Table 49 shows the frequency of the grades given by the raters for the same
translations (EN-HU) and their most important statistical characteristics. The means differ
largely: 3.06 vs. 2.25; the difference between them is 0.81, nearly one grade, and if we look at
the individual grades, we can also see considerable disagreements between the two raters.
Standard deviation (SD) in case of Rater 1 was 1.43, whereas in case of Rater 2 1.39; both
indicating a relatively high variation.
Table 49
The SPSS frequency statistics for inter-rater reliability of the grades given by the two raters for EN-HU
translations (N=16)
Grade Rater 1: frequency, % Rater 2: frequency, %
1 3 18.8 7 43.8
2 2 12.5 3 18.8
3 6 37.5 2 12.5
4 1 6.3 3 18.8
5 4 25 1 6.3
Total 16 100 16 100
Mean 3.06 2.25
SD 1.436 1.390
The Cronbach's alpha was 0.717: barely higher than the acceptable 0.70. The Intra-class Correlation Coefficient was 0.659, definitely higher than in the case of error identification; however, as it was still below 0.75, it indicated only moderate reliability. The 0.4605 Krippendorff's alpha also remained well below the “still conceivable limit” of 0.667 (Krippendorff, 2004, p. 241). All this is expressed in the final grades as well: Rater 1 failed three students (18.8%), Rater 2 seven (43.8%), indicating the highest disagreement between the grades.
According to the data presented in Part I, Section 3.7.5, translating from Hungarian into English is more difficult for the students than translating into their mother tongue. This finding was partly confirmed by the means of the grades of the exam translations: 3.06 by Rater 1 and 2.25 by Rater 2 in the case of EN-HU translations, and 2.56 by Rater 3 and 2.75 by Rater 4 in the case of HU-EN translations (Table 50).
Table 50
Raw scores and grades given by two raters for 16 test-takers’ HU-EN translation test (N=16)
HU-EN
Test-taker (TT) Rater 3 Rater 4
H h Grade H h Grade
TT1 4 5 2 4 21 1
TT2 4 5 2 1 14 3
TT3 1 8 4 1 6 5
TT4 4 8 2 2 13 2
TT5 2 6 4 1 5 5
TT6 1 2 5 1 7 4
TT7 0 6 5 0 8 5
TT8 6 13 1 1 24 2
TT9 11 7 1 3 21 1
TT10 5 5 2 3 11 3
TT11 8 7 1 8 18 1
TT12 5 6 2 2 16 2
TT13 13 8 1(1) 9 19 1(1)
TT14 3 11 2 0 19 3
TT15 0 3 5 0 12 4
TT16 5 3 2 3 16 2
To examine the degree of agreement between the two raters concerning the translations from
Hungarian into English, I repeated the steps I had followed with the translations from English
into Hungarian. The frequency statistics (Table 51) here confirmed that the differences in
identifying the major errors, compared to the other direction, were not as large between the
two assessors.
Although there was a considerable difference between the means of the identified major errors (4.50; 2.44), the 0.858 Cronbach's alpha, being higher than 0.70, and the 0.774 ICC indicated much better reliability in this respect. However, the Krippendorff's alpha, with its low value of 0.5686, much lower than the acceptable 0.80, still indicated poor inter-rater reliability.
Table 51
The SPSS frequency statistics for inter-rater reliability in the judgment of major errors (H) in HU–EN
translations (N=16)
Number of major errors (H) Rater 3: frequency, % Rater 4: frequency, %
0 2 12.5 3 18.8
1 2 12.5 5 31.3
2 1 6.3 2 12.5
3 1 6.3 3 18.8
4 3 18.8 1 6.3
5 3 18.8 0 0
6 1 6.3 0 0
8 1 6.3 1 6.3
9 0 0 1 6.3
11 1 6.3 0 0
13 1 6.3 0 0
Mean 4.50 2.44
SD 3.688 2.658
Table 52 presents the judgment of minor errors by the two raters, showing a huge difference in the means (6.44 vs. 14.38) and also in the SD values (2.851 vs. 5.852), which predicted poor reliability indices again. The prediction was confirmed by the statistics: the 0.546 Cronbach's alpha (< 0.70: unacceptable reliability), a similarly very low 0.266 ICC and an extremely low –0.2170 Krippendorff's alpha indicated a very low level of agreement between the two raters. How the same instrument could work so differently concerning error identification and error coding could be the topic of further research. A think-aloud protocol could offer evidence on how the raters made their decisions, but such qualitative data were not collected. However, this level of disagreement raises questions about the raters' responsible use of the rating scale.
Table 52
The frequency statistics for inter-rater reliability concerning minor errors (h) in HU–EN translations (N=16)
Number of minor errors (h) Rater 3: frequency, % Rater 4: frequency, %
2 1 6.3 0 0
3 2 12.5 0 0
4 0 0 0 0
5 3 18.8 1 6.3
6 3 18.8 1 6.3
7 2 12.5 1 6.3
8 3 18.8 1 6.3
11 1 6.3 1 6.3
12 0 0 1 6.3
13 1 6.3 1 6.3
14 0 0 1 6.3
16 0 0 2 12.5
18 0 0 1 6.3
19 0 0 2 12.5
21 0 0 2 12.5
24 0 0 1 6.3
Mean 6.44 14.38
SD 2.851 5.852
The contradictory coding is confirmed by the list included in Table 53, which contains examples of differences in R3's and R4's coding of errors in the 16 translations. Among the twenty items in the list there are seven cases in which the same translation was judged as a major error by one rater and as a correct translation by the other.
Table 53
Translations from Hungarian into English (HU-EN): Examples of differences in R3's and R4's coding of errors
Source item (HU) Translation (EN) R3 R4
elemzés készítői conductors of the analysis H h
hitelezése the lending of correct H
ország nation correct h
tovább zsugorodott shrunk further H (tense) correct
bekövetkezett occurred H correct
intézkedések arrangements H h
erőteljesen heartily H h
állami szektor national sector h H
arány ratio correct H
elemzés készítői evaluators correct h
horgonyozottnak tűnnek seem fixed h correct
legfrissebb newest correct h
pozitívumként jelenik meg lists as positive H h
hitelezése supply of credit correct h
mérséklődhet can moderate H correct
The agreement in the final grades (Table 54), despite the huge differences in identifying the different types (levels) of errors, turned out to be good. There is only a slight difference between the means (2.56 vs. 2.75), showing that R4 was the slightly more permissive rater. The SD values also show a similar spread.
Table 54
The SPSS frequency statistics for inter-rater reliability of the grades given by the two raters for HU-EN
translations (N=16)
Grade Rater 3: frequency, % Rater 4: frequency, %
1 4 25.0 4 25.0
2 7 43.8 4 25.0
3 0 0 3 18.8
4 2 12.5 2 12.5
5 3 18.8 3 18.8
Total 16 100 16 100
Mean 2.56 2.75
SD 1.504 1.483
Both the 0.933 Cronbach's alpha, which, being close to 1, expressed high similarity of the scores, and the 0.933 ICC indicated good reliability concerning the grades. As the frequencies in Table 54 show, the two raters differed only on grades 2 and 3: Rater 3 was stricter than Rater 4. The Krippendorff's alpha of 0.8377 indicated a much better inter-rater reliability than in the case of the major and minor errors.
The final grades were calculated by rounding the means of the four grades given by
the two pairs of raters for the two (EN-HU; HU-EN) translations. Those students who were failed by both raters in either of the two components failed the exam, as did those who could not meet the requirements of grade 2 in the re-sit test (its grades are shown in brackets in Table 55). Although each scoring had to be done independently of the others, when calculating the final grades the raters can pull each other one way or another, arriving at a consensus, and in the case of extreme disagreement between the raters a third assessor can be included (Angelelli, 2009; Eyckmans et al., 2009).
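A minimal sketch of the final-grade rule just described (the rounded mean of the four grades, with an automatic fail whenever both raters fail the same component) is given below in Python; the names are mine. As the paragraph above notes, raters may also negotiate a consensus or call in a third assessor, so the grades in Table 55 need not follow this mechanical rule in every case.

def final_exam_grade(en_hu, hu_en):
    """en_hu, hu_en: (rater_a_grade, rater_b_grade) pairs for the two components.
    Implements only the stated rule; negotiated consensus grades are not modelled."""
    if en_hu == (1, 1) or hu_en == (1, 1):
        return 1                        # both raters failed the same component
    grades = [*en_hu, *hu_en]
    return round(sum(grades) / len(grades))

print(final_exam_grade((3, 4), (2, 2)))   # hypothetical grades -> 3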
Table 55
Grades given by the raters for the two components and the final grades (re-sit grades in brackets)
Test-taker (TT) EN-HU: Rater 1, Rater 2 HU-EN: Rater 3, Rater 4 Final grade
TT1 3 4 2 1 2
TT2 1 (3) 1 (3) 2 3 1 (3)
TT3 3 1 4 5 4
TT4 4 4 2 2 4
TT5 5 4 4 5 5
TT6 5 5 5 4 5
TT7 5 1 5 5 5
TT8 2 2 1 2 2
TT9 1 (2) 1 (1) 1 1 1
TT10 5 2 2 3 3
TT11 1(2) 1(2) 1 1 1
TT12 2 1 2 2 2
TT13 3 3 1(1) 1(1) 1
TT14 3 2 2 3 3
TT15 3 3 5 4 4
TT16 3 1 2 2 2
Mean 3.06 2.25 2.56 2.75 2.81
SD 1.436 1.390 1.504 1.483
The findings of the analysis, illustrated in Tables 45 – 55, indicate extremely low inter-rater reliability, which can be attributed to the poorly designed assessment tool, to the inconsistent work of the evaluators, or to a combination of both. As was detailed in Section 5.3, the scale in use is an analytical scale, which is based on identifying and listing the errors at two levels, categorized here as major errors (H) and minor errors (h). The grade the students get at the end of the exam is the rounded mean of four grades, given by two pairs of raters for the two components (EN-HU and HU-EN translations).
There is another factor which should be mentioned: the nature of error-based assessment, which, from the students' perspective, is extremely demotivating; from the raters' point of view, concentrating on and counting errors reveals only what candidates cannot do, while what they can do may remain unnoticed. Also, when focusing on counting and categorizing errors, that is, distinguishing the levels of errors, the raters can arrive at completely different, often contradictory, even conflicting categorizations. The reasons must lie in the nature of the scale, which, as has already been established, leaves room for subjective judgments in assessment (Angelelli, 2009; Eyckmans et al., 2009).
Table 56
Translations from English into Hungarian (EN – HU): Examples for inconsistent error coding by the same rater
Table 57
Translations from Hungarian into English (HU – EN): Examples for inconsistent error coding by the same rater
The listed examples and results clearly show that validity and reliability are essential features of an assessment tool. If they are low, the tool will not fulfill its aim, and, even if the raters manage to reach some kind of agreement, it will hinder fair grading.
This section aims to answer research questions 3 and 4. RQ3 focused on how the raters
evaluated the assessment system they applied to assess translation students’ work. How did
they explain the low inter-rater reliability of the tool they had been using for years? What was
behind their own inconsistency in the coding of errors? RQ4 aimed to find out what
modifications the raters would recommend for making the tool more appropriate for assessing
the quality of translations and students’ translation competence. I hoped to find the answers in
the interviews I conducted with 4 raters and the head of the Translation Studies BA
programme, as well as for
certificate at ELTE University, otherwise he would not have been able to work for companies
as a translator. As a practicing translator, he relied on what he had learnt by himself. Concerning translation assessment, he emphasized that he most often acted “as a judge in case of controversial grading”. He agreed to answer my questions about the assessment policy applied at the translation specialization program at the University of Pécs.
Q1 What does a teacher have to take into consideration when assessing students’ translation
performance?
Although he gave a long, elaborate answer, it was relatively easy to identify the factors that surfaced:
the students’ preliminary studies and their English proficiency level;
the teachers, including their qualifications;
the course and the course requirements;
the assignments;
the different forms of assessment applied by the teachers.
Q2 How would you describe the translation students’ target language (English) proficiency
level?
Speaking from experience, he said that students entered the translation courses linguistically unprepared: they had very poor and very simple vocabulary, serious grammatical problems and practically zero cultural knowledge of the target language. There was no way they could cope with equivalency problems when translating from one language into the other. His negative perceptions were confirmed in the interviews with Rater 2 and Rater 3; they stand in contrast, however, with what the students claimed about their English language proficiency level, as the majority placed themselves at C1 or C2 level. They are clearly not at that level, which makes meeting the course requirements (listed in the syllabi section) difficult, sometimes even impossible, and results in poor performance.
Q3 How would you explain that teachers, using the same assessment scale, see and code
errors differently and inconsistently, arriving at extremely low inter-rater reliability?
The program director identified a few reasons which can explain the differences in raters’
judgement. Using his own words,
the teachers are not qualified translators, although all of them have considerable
experience as translators;
their formal qualifications are in linguistics and literature, as a result, they put
emphasis on different aspects;
even using the same assessment sheet to score translations, they might judge texts
differently, and as a result, they rarely arrive at the same grade.
Q4 Do you think the courses prepare the students for their final translation exam?
Concerning the course and the course requirements, the program director confirmed what we
may know from the syllabi (and from the teachers, too), that the number one activity the
students do is translating different texts. They start working on a text in class, finish the
translation at home and in the next class they discuss it; this is the typical practice followed by
the four teachers I interviewed. The director described it as a procedure teaching the students
what qualifies as a major or a minor error, and as a result, they can develop strategies to
avoid them. However, this practice does not improve the lexical richness of their translations
and does not teach them to move consciously along the so-called equivalency level, i.e.
finding the most appropriate word or expression in the target language for each word or
expression in the source text. He thought this strong focus on errors can even turn out to be
counter-productive or de-motivating in the end.
Q5 So you think the students are not really prepared to translate the texts they are given in
classes?
The assignments, as stated in the syllabi, are most often authentic texts: articles from
major British and American daily papers or professional (legal, economic, etc.) journals. The
director described them as extremely difficult texts for students with poor vocabulary, with no
proper knowledge of the social, cultural, historic, legal or economic aspects of the target
language country. Although they can use any resources when they translate, including in the exam situation, even the allocated six-hour time span is too short for them to look up everything
they do not know. “The bar is too high for them”, said the director, using an expression from
sporting life.
Q6 What you have just said means that your colleagues have to assess translations of poor
quality. Do you think the assessment sheet they use meets the requirements of a fair
assessment tool?
Answering this question, the director emphasized the scale's shortcomings. Although it defined the nature of the different errors and mistakes, the definitions in his opinion were too vague or permissive: expressions like uncultured language use, visible lack of practice in written communication, or arbitrary change in the logical order of the source language leave a lot of room for subjective decisions. He mentioned examples of errors which stem from taking the scale criteria too literally, when even a good solution can be judged as a mistake. He gave an example from an occasion when he acted as a “judge”. The source text said that “According to Nature, this finding is…”, which, in his opinion, can easily appear in Hungarian translation as
“A Nature folyóirat szerint ez a megállapítás…” However, the word “folyóirat” is not part of
the original text, so it can be treated as an unnecessary addition, which, according to the scale,
is a major error. However, “it does not impair the message, just the opposite – it makes it
more precise for those who do not know what Nature is. It could be even treated as an
excellent solution, but such option is not part of the scale.”
B) Interviews with the raters on the assessment system used in the translation programme
The interviews were conducted in December 2019 and January 2020. I planned to ask eleven questions about the evaluation scale, but the interviewees, one female and three males, turned out to be quite talkative, so we sometimes wandered off, well beyond the narrow topic, and the planned semi-structured interviews turned out to be rather unstructured in two cases (Raters 2 and 3). Raters 1, 2 and 4 also acted as assessors of the translations discussed in section 5.4.4. Although Rater 3 did not take part in the exam in focus, he offered his opinion on the assessment system in use.
In this section I analyse the four raters' answers question by question, not rater by rater. To show the similarities and differences between the four raters' responses, I identified the keywords, which are presented in Table 58.
Q1 How often and how do you assess your students during the term in your translation
studies classes?
The first question aimed to elicit information on the frequency and the methods of assessment
during the term. Rater 1 (R1) and Rater 2 (R2) said they assessed their students’ work once a
week, as they had weekly assignments. Rater 3 (R3) assessed them “every time the students
had assignments”, which is good, because students got feedback on all of their translations;
Rater 4 (R4) assessed translations on 7-8 occasions a term. In case of R1, R2 and R4 the
source text was sent to the students via NEPTUN, the university’s electronic administrative
system, and the translations into the target language were expected to be sent back the same
way. R3 asked for the assignments in printed form, so he could mark the mistakes “in crying red”.
After receiving the texts from students, teachers applied different techniques. R1
identified the errors in the translations, prepared a list for the next class, and projected them in
class with the aim of discussing error types and correct solutions, thus providing feedback to
the whole group at the same time. R2 used a random integer generator to decide whose
translation to project in-class. The other students looked at their own translations, while
someone – usually the teacher – read the source text aloud, and the students compared the
upcoming solutions: they identified the errors or mistranslations, good turns, etc. This meant
that they discussed one translation. The students checked their own errors, made the necessary
corrections, so they implemented assessment and practice in an integrated fashion: the
translations were checked, but not graded, except the projected one. If it met the expectations,
its translator could choose to be graded.
In R4’s practice, in each class a student presented their translation, the group discussed
the problems they encountered, and they collectively came up with better solutions. The
presentation was assessed on a 0-20 scale, based on the quality of the presentation, the
translation and the effort. The students’ grades were based on two major translations and a
revision of a chosen translation. Working in pairs, they had to revise each other’s translations,
as well.
R3 used an entirely different method of assessment: the rather old-fashioned but, he argued, very effective way of using red ink to correct double-spaced, printed translation assignments.
assignments. “I am convinced this is the most straightforward and effective way to draw their
attention to their mistakes. I need double spacing, so I could make my notes right next to the
identified error.” In this way each student got immediate written feedback. The students could
see their own mistakes immediately in this assessment based on errors, but in the discussion
the errors were handled as the group’s errors. “I do not mention names, so nobody gets hurt.
They can see their errors anyway, so they can learn from them.” They also discussed the
solutions for the critical elements of the text and agreed on the acceptable ones.
these strategies to allow their students to get used to the evaluation system along which their
exam translations would be assessed. The most typical answers included:
“The main aim is to avoid errors.” (R1)
“I want to make my students familiar with the level they have to achieve as translators
and give them feedback about their work.” (R2)
“My aim was to give individual feedback in written form, pointing out the individual
errors. It was always personal. In case of oral feedback in class I addressed the errors
in general, so the students did not have to be ashamed.” (R3)
“My primary aim is to give grades, and show if the students have difficulties with
certain skills.” (R4)
Q3 What criteria do you follow when you evaluate the written assignments?
Except for R3, they all used the criteria included in the evaluation sheet used for the
assessment of exam translations. “In this way the students could also learn about these
criteria, based on identifying major and minor errors in their translations”, said R1. R2 added
that although he used it, the focus was on different elements in the different classes he taught:
in case of IT specific texts he focused on the appropriate use of special terminology, whereas
in case of social science texts, he emphasized conceptual accuracy. R3, the oldest one of the
four respondents, a native speaker of English, relied on his knowledge, instinct and
experience and the transmission of the original message into the target language. R4 used the
evaluation sheet throughout the year, but did not give any specific reasons for doing so.
Q5 Do you use any evaluation grids or rating scales? If yes, what is it like?
Three of the raters (R1, R2 and R4) used the exam sheet to assess assignments during the
term, that is for diagnostic purposes. They used it the same way they were expected to apply it
to assess exam translations. Only R2 adjusted it to the specific nature of the subjects he
taught, putting emphasis on the subject specific elements. He also found it permissive, as in
his view, it allowed too many major errors, especially compared to the short length (ca. 2,000
characters) of the texts. R3 never used grids or scales for assessments during the semester; he
relied on the method of collective discussion.
Q6 How do you grade your students’ work?
They graded their students as set down in the course syllabi; in most cases they used three criteria: course participation (25%), the grades for the home assignments (25%) and an end-of-term test (50%). R2 also included the results of a mid-term test.
Q7 Although I understand that the final translation exam was cancelled a few semesters ago,
I am interested in your opinion on the assessment scale that was used for exam translations.
All four raters found it important to state that the translation exam was cancelled because it
put the students at a disadvantage compared to those BA students who decided to choose a
minor and not a specialization. The TS exams were very difficult, and those who failed were
prevented from getting their BA degree. The change, however, did not impact the usual
pattern: the students are evaluated based on their weekly assignments and the other criteria
which are set down in the course syllabi. For assessing students’ assignments R1 continued to
use the common assessment scale, which she described as fair, “because it makes difference
between the different levels of error”. She also identified a shortcoming: “it does not offer
extra points for outstanding solutions, for example, in word choice, which, I think, could be a
motivating aspect in an assessment scale.”
R2 also uses the exam assessment scale, which he described as the slightly modified
version of the assessment sheet used at ELTE University. The criteria are not the same for the
two directions, there is a slight difference: in case of translation from Hungarian into English
it allows more major errors. The problem in R2’s opinion is that “it is not subject-specific,
although the different text types – IT specific texts, social science texts and business texts –
should be assessed according to different criteria”. The assessment method offered by the
scale is good for assessing grammar, but it lacks the criteria which could assess conceptual or
subject-specific elements. It also has subjective aspects and, as he kept emphasizing, allows
too many errors to pass. R3 has never used this scale but is acquainted with it, and identified its main shortcoming as not stipulating how repeated errors should be treated, a gap which can result in subjective decisions on the part of the evaluators. R4 emphasized its demotivating nature, as it concentrates
on mistakes, and does not reward creative or otherwise brilliant solutions.
Q8 How is the final test assessment different from the progress test evaluations?
The four raters agreed that there was a prominent difference: while progress testing always
involves feedback (and might involve grades, but not necessarily), the only aim of the
exam assessment is to give the students final grades. Otherwise, the evaluation process is the
same in both cases.
Q9 How does the exam assessment sheet meet your expectations? Is it appropriate in every
respect in your opinion for assessing BA students’ translations?
According to R1, it is a good aid in identifying major and minor errors, which are defined
fairly well, but the definitions still allow subjective assessment. “Which is a major error for
me, easily can be treated as a minor error by another rater.” To counterbalance the sheet’s
demotivating effect, which roots in its error directed nature, R1 would welcome a component
which allows rewarding excellent solutions. “Because there are brilliant solutions, which
remain unrewarded”, she explained.
R2 agreed that the definitions of major and minor errors are not always clear, so they
can be treated differently by different markers. In seminars, it is a useful tool to show what
students lack in knowledge, what they have to pay special attention to, including typical
errors, so it is good for diagnostic purposes. As for the final test, R2 found it permissive: the
number of errors it allows is intolerable in his opinion. “We should not give a 5 to someone
who makes a major mistake. If the translation is not flawless, it is not 5.” R3 also emphasized
the fact that the scale allowed the raters to judge major and minor errors differently, whereas
R4 stated that it was good at BA level, an appropriate tool as it was detailed enough to find
out if a translation is at the required level or not.
errors should be scored: if markers do not agree about treating them beforehand, they might
count them differently”. He, similarly to the others, mentioned that the scale did not allow
markers to give extra points for brilliant solutions, although he did not see it as a major
problem: “one or two brilliant choices would not erase the serious errors”. R3, in full
agreement with R4, also saw the greatest difficulty in the fact that it did not define how to
treat recurring errors and how to reward good solutions or creativity, which is necessary, for
example, in translating idiomatic expressions.
Q11 What modifications would you recommend and why to make the scale more appropriate
for your purposes?
The modifications the raters would welcome are all connected to the options or elements they
miss from the scale: an option to give extra points for creativity and excellent solutions; to
include text-specific or genre-specific criteria with emphasis on terminology, especially
emphasized by R2.
R1 would welcome a scale, which, unlike the list of errors they use, concentrates on
what the students know. “There are only errors, errors and errors in it. I do not say it does not
work, but not rewarding the students’ knowledge is demotivating.”
R3 thought that in case of differences in final grades, the two raters should sit down
for a think aloud session and try to negotiate a grade which satisfies both of them, or if it is
impossible, a third party should be involved.
R4 also disliked the error-centeredness of the scale in use. “Extra points should be
given for outstanding solutions, which could lower the final number of errors. Counting only
the errors demotivates the students in the long run.”
Table 58 was constructed to show the four raters' opinions of the assessment system applied in the translation program side by side, grouped according to the sequence of the interview questions.
Table 58
The raters' opinions of the assessment system applied in the translation program

1a How often?
R1: once a week. R2: from week to week. R3: every time students had an assignment. R4: 7-8 occasions a term.

3 The criteria
R1: the criteria listed in the evaluation sheet. R2: test-specific criteria; major and minor errors, as defined in the evaluation sheet. R3: knowledge, instinct and experience. R4: the criteria set down in the grading sheet.

5 Use of evaluation grids
R1: the exam sheet; basically good, but has some shortcomings. R2: the exam sheet; permissive. R3: does not use any grids or scales. R4: the exam grid.

8 (a) Exam assessment vs. (b) during-the-term evaluation
R1: (a) there is no feedback, just the grade; (b) detailed and immediate feedback. R2: (a) giving feedback; (b) giving grades. R3: (a) always contains feedback; (b) the assessment is based on set criteria. R4: they happen in the same way, no difference.

9 How does the scale meet your expectations?
R1: basically fair; some elements give place to subjective assessment; does not reward good solutions. R2: definitions of major / minor errors are not always clear and can be interpreted differently; error-based; permits too many errors. R3: useful, but allows the raters to judge differently. R4: good at BA level; a sufficient tool; detailed enough to see if the student reaches the required level.

10 Difficulties / challenges
R1: easy to follow; defines the major errors very strictly; treating outstanding solutions. R2: errors are not defined clearly, resulting in differences between the grades given by the two raters; no direction on how to treat repeated errors or brilliant solutions. R3: does not define how to treat repeated errors or how to reward creativity. R4: does not stipulate the case of repeated errors; does not leave place for giving extra points.

11 Suggested modifications
R1: including an option for extra points. R2: including text / genre specific criteria; stipulating the treatment of repeated errors. R3: a thinking-aloud session; involving a third party; stipulating the treatment of repeated errors. R4: extra points for outstanding solutions; stipulating the treatment of repeated errors.
5.4.6 Summary
This preliminary study served to examine the reliability of the assessment scale used for assessing exam translations in the Translation Studies Program at the Institute of English Studies, University of Pécs. As was detailed in Section 5.3, this is an analytical scale, which is based on identifying and listing the errors at two levels, categorized here as major errors (H) and minor errors (h). The grade the students get at the end of the exam is the rounded mean of four grades, given by two pairs of raters for the two components (EN-HU and HU-EN translations).
In order to answer research questions (1), concerning how the rating scale in use performed in terms of inter-rater reliability, and (2), concerning how consistent the raters were in their assessment, I examined 16 exam translations from English to Hungarian and 16 translations from Hungarian to English, each assessed by two raters independently. Although they used the same evaluation sheet and followed the same criteria, there were considerable differences in their judgment concerning individual errors and the grades they gave the students in the end.
The findings of the analysis, presented in Tables 45 – 55, indicate low inter-rater reliability, which can be attributed to the poorly designed assessment tool as well as to the inconsistent use of the scale by the evaluators. There is another factor which should be mentioned: the nature of error-based assessment, which, from the students' perspective, is extremely demotivating; from the raters' perspective, concentrating on and counting errors reveals only what candidates do not know, while what they know might remain unnoticed. Also, when focusing on counting and categorizing errors, which involves differentiating between error levels, the raters can arrive at completely different, often contradictory, even conflicting results. The reasons must be looked for in the nature of the scale, which, as has already been established, leaves room for subjective judgments in assessment (Angelelli, 2009; Eyckmans et al., 2009).
The situation is similar when we look at the intra-rater reliability (a measure of how consistently an evaluator measures a constant phenomenon) of the same assessment tool. Although the major and minor errors are defined in the scale, the definition in some cases is so permissive that even the same evaluator scores identical solutions differently in the texts he or she assesses. This means that two students making the same mistakes can get different grades because of the rater's inconsistent scoring.
What was found in the assessed translations was reinforced in the rater interviews,
which were conducted in order to find answers to research questions 3 and 4. They aimed to
see how raters evaluated the assessment system they applied to assess translation students’
work, and what modifications they would recommend for making it more appropriate for
assessing the quality of translation and students’ translation competence.
The interviews underlined that the assessment scale in use did not meet the requirements of reliability in every respect. It contains elements which are permissive and leave room for subjective judgment and, consequently, for unfair grading. It has other shortcomings: it does not stipulate the treatment of repeated errors and it does not allow giving extra points for creative or otherwise brilliant solutions. It concentrates on errors, which is demotivating. At the same time, its pass level is extremely low, as it allows too many errors.
According to the four raters, in full agreement with the relevant literature, there is no perfect tool to assess translations. However, there are measures to improve validity and reliability. The next part of the chapter will focus on the "what" and "how" questions with the ultimate aim of offering a new and, hopefully, more reliable way to assess translations.
Chapter 6
Working towards a new assessment tool
6.1 Introduction
This part of the dissertation presents the steps of devising and adapting a new tool to assess translations. Different approaches and several instruments have been used in the previous decades, both holistic (Garant, 2009; Williams, 2013) and analytic ones (Eyckmans, Anckaert, & Segers, 2009; Martínez, 2014; Orlando, 2011), as well as criterion-referenced and norm-referenced scales (Eyckmans, Segers, & Anckaert, 2012; Kockaert & Segers, 2017; Van Egdom et al., 2019), but all had their own shortcomings. The difficulties of their use can be explained by the nature of translation and also by the nature of the assessment instruments. The holistic approach looks at the text as a whole, not paying enough attention to the details listed in analytical scales, and vice versa; the so-called error lists usually turn out to be counter-productive or demotivating for translation students (Garant, 2009). The main issue has been the elimination of subjectivity in assessments. The developers of the norm-referenced methods tried to find the "golden route", but their experiments have not resulted in an ultimate solution so far: the tool either turned out to be inadequate for practical use (CDI; Eyckmans et al., 2009; 2016), or it failed to discriminate as expected (PIE; Kockaert & Segers, 2017).
Although it might seem a bold idea, relying on what I learnt from the relevant literature (Sections 5.1 and 5.2) and from the interviews with expert raters (Section 5.4.5), in this chapter I elaborate on the process of developing a new tool for assessing translations at university level. This process included several stages, starting with recruiting translation students to translate a business text from Hungarian into English and expert raters to assess the translations. Once the translations were collected, a preliminary study was conducted (Section 6.2, Study 4) to examine the lexical characteristics and the readability of the translated texts (Sections 6.2.1 – 6.2.5). The findings of this study were important in identifying the items for the assessment with the new tool, an adapted version of Kockaert and Segers' (2017) Preselected Items Evaluation (PIE) method, which I named PIER (PIE Revised), compared to the assessment scale currently used in the Translation Studies BA programme at the University of Pécs (discussed in Section 5.3). Section 6.3, comprising Study 5, presents the steps of developing the new scale, PIER, including the item-preselection and assessment processes, the analysis of the assessment results obtained with the two instruments (UP and PIER), the raters' evaluation of the new scale (PIER) compared to the one in use (UP), and, in the end, PIER's suitability to serve as a reliable tool for assessing translations.
6.2 Study 4: Lexical characteristics and readability of the translated texts chosen
for assessment
The previous chapters outlined the field of translation education in Hungary in general, with a special focus on the University of Pécs, where the research for the present dissertation was conducted. My aim was to cover the most important aspects that help L2 students become translators, including their motivation, their autonomy, and the assessment of their work.
The inquiry into how the heavily error-based assessment scale in use worked at the University of Pécs, reported in Section 5.4, revealed poor applicability: low inter-rater reliability indicated serious validity issues. Studying the assessed translations, the raters' inconsistent identification of errors, and a comparison of their conflicting assessment scores prompted questions for further research. One area concerned the quality of the students' translations as captured by textual features established independently of the raters' judgements.
The small-scale preliminary study conducted for this purpose aimed to examine two important features of the texts translated from Hungarian into English:
(1) What are the most important lexical characteristics of BA students' HU-EN translations?
(2) What level of readability can be established for their texts?
The questions were challenging, because translating a specialized text in the field of economics required not only high-level mastery of both languages but also considerable background knowledge, including precision in the use of specialized terminology.
6.2.2 Participants
As this phase of the research project coincided with the restrictions connected to Covid-19, including lockdown regulations affecting universities and all public institutions, it was difficult to recruit volunteers to participate in the study. Fourteen BA students translated a 271-word-long business text in the spring semester of 2019-2020: four of the 13 third-year students and ten of the 15 second-year students (out of the 28 students representing the whole population in the study). They are coded as TT1, TT2, etc.
The students prepared the translations as an optional assignment during the distance-instruction period: they translated the texts at home and sent them to their teachers via email. After the deadline had expired (25 April 2020), the teachers forwarded the texts to my mailbox for analysis.
A well-written composition and a well-translated text make effective use of vocabulary (Laufer & Nation, 1995), which has a huge effect on the readability of any text and is also a good predictor of translation quality. In order to examine lexical richness (how varied the words in a given text are) and lexical density (the proportion of content words in the text) in the fourteen translations, as a first step I prepared their Lexical Frequency Profile (LFP). To do this I used Tom Cobb's freely available software Compleat Lexical Tutor (Cobb, 2015). The LFP shows the percentage of words a learner uses at different vocabulary frequency levels in writing (Laufer & Nation, 1995), which, in the case of translations, largely depends on the vocabulary of the source texts. It is also seen as a measure of how vocabulary size is reflected in use. This reliable and valid measure of lexical richness in writing is useful for determining the factors that affect judgments of quality in writing (Laufer & Nation, 1995, p. 307), which is highly important in translations and plays a crucial role in the readability of any text.
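Purely as a minimal illustration of how the two indices reported below can be derived, the following sketch computes the type/token ratio and the lexical density of a tokenized English text. The short function-word list and the sample sentences are illustrative stand-ins only; a profiler such as Compleat Lexical Tutor applies a much more elaborate word-list apparatus.

    import re

    FUNCTION_WORDS = {
        "the", "a", "an", "of", "to", "in", "and", "is", "was", "were",
        "has", "have", "been", "by", "for", "on", "it", "that", "as", "with",
    }  # illustrative stop-list only

    def lexical_profile(text):
        tokens = re.findall(r"[a-z']+", text.lower())        # simple word tokenizer
        types = set(tokens)
        content_words = [t for t in tokens if t not in FUNCTION_WORDS]
        type_token_ratio = len(types) / len(tokens)           # lexical diversity
        lexical_density = len(content_words) / len(tokens)    # content words / all words
        return len(tokens), len(types), round(type_token_ratio, 2), round(lexical_density, 2)

    # Example: tokens, types, TTR and density for a two-sentence sample
    print(lexical_profile("The budget deficit turned out better than expected. "
                          "Debt levels have remained high."))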
The comprehension difficulty of many documents, including legal, medical or business texts, employment agreements, etc., is often too high for a large percentage of the population, so analyzing text difficulty and readability with quantitative scales is highly valuable (McNamara et al., 2014). Having an appropriate tool to measure these features provides the required data in a user-friendly way. Coh-Metrix, which I used to acquire these data, is a freely available computational system developed to measure cohesion, coherence and text difficulty at different levels of language and discourse, as an improved means of measuring English text readability for L2 readers (Crossley, Greenfield, & McNamara, 2008). Readability is a feature of basic importance in the case of translated texts, especially specialized translated texts, which must be exact, precise and comprehensible. In order to convey the message clearly, maximizing readability is a basic requirement in the case of institutional translations (Lafaber, 2018).
According to Nation (2006), 98% text coverage is needed to achieve adequate text comprehension. This means that there is only one unknown word for the reader in every 50 words. Other researchers (Carver, 1994) found that even that coverage is not sufficient; only a few learners achieved adequate comprehension with 98% coverage, especially in the case of non-fiction texts. Taking 98% as the ideal coverage, an 8,000 – 9,000 word-family vocabulary is needed for dealing with written general English texts. The greatest variation is most likely to occur in the first 1,000 (K1) words and in the proper nouns, which together typically cover 78-81% of written general English text. The second 1,000 (K2) words cover 8-9%, whereas rare (off-list) words cover 1-3% (Nation, 2006), and the ratio of AWL words amounts to 10% (Coxhead, 2000).
Table 59
The Lexical Frequency Profile of the 14 translated texts

Test-taker  Tokens  K1 words %  K2 words %  AWL words %  Off-list words %  Types  Type/Token ratio  Lexical density
TT1   327  72.12  7.27  11.52   9.09  164  0.52  0.58
TT2   334  69.74  6.58  14.14   9.54  162  0.56  0.62
TT3   302  70.41  7.40  13.31   8.88  180  0.56  0.61
TT4   356  71.91  5.34  14.61   8.15  159  0.56  0.60
TT5   311  70.75  6.92  12.26  10.06  177  0.58  0.60
TT6   370  72.13  7.38  12.84   7.65  175  0.52  0.57
TT7   359  68.54  7.17  15.26   9.03  165  0.50  0.62
TT8   318  72.12  5.09  14.75   8.04  183  0.54  0.59
TT9   343  70.55  6.12  13.99   9.33  173  0.53  0.60
TT10  317  72.41  5.96  12.85   8.78  164  0.54  0.58
TT11  300  69.05  5.44  15.31  10.20  162  0.56  0.65
TT12  311  70.75  6.92  12.26  10.06  177  0.58  0.60
TT13  340  69.08  5.78  14.74  10.40  180  0.54  0.58
TT14  358  73.35  7.97  10.71   7.97  195  0.56  0.60
Mean  331  70.94  6.65  13.46   9.08  172  0.54  0.60
The most important predictors of vocabulary richness, as well as the most telling indicators of writing quality and translation style, are lexical diversity (LD) and lexical density (Lehmann, 2014). Lexical diversity refers to the range of different words used in a text, with a greater range indicating higher diversity. The best-known LD index is the type-token ratio (TTR) (McCarthy & Jarvis, 2010). Lexical density, another predictor of lexical richness, is expected to be between 61 and 62% in general English texts (Castello, 2008, p. 60). The findings of the lexical analysis of the 14 translations are presented in Table 59.
As Table 59 shows, the average length of the translated texts was 331 words, compared to the 271 words of the source (Hungarian) text. The length of the translated texts varied considerably: the shortest translation consisted of 300 tokens, the longest one comprised 370 words. In contrast with Nation's figure of 78-81% for general English texts, K1 words covered only 70.94% of the tokens in the translations, whereas K2 words added a further 6.65%. These values, considerably lower than those reported in Nation's studies (Laufer & Nation, 1995; Nation, 2006), together with the high ratio (13.46%) of academic words, indicate that the text was specialized, predicting comprehension difficulties for readers as well as possible problems in the translators' choices.
The mean lexical variation indicated by the TTR of the examined texts was average for short texts (0.54), with the highest score (0.58) displayed by TT12 and the lowest one (0.50) by TT7. Lexical density (0.60) is also a good predictor of vocabulary richness: the number of content words shows the extent to which writers are making the fullest use of their available vocabulary knowledge (Laufer & Nation, 1995, p. 308). The highest lexical density value (0.65) was found in TT11, whereas the lowest one (0.57) characterized TT6's text. The relatively high mean ratio of content words (60%) may help text comprehension, but 13.46% of the words in the translations belonged to the domain of academic vocabulary, which is not necessarily available either to a novice translator or to an average reader.
The key findings of the analyses concerning text lengths, text coverage, and the means of lexical diversity and lexical density show that even the best K1 coverage lagged well behind the typical value (73.35% vs. Nation's 78-81%). The percentage of academic words (13.46% vs. Coxhead's average of 10%) was high, which might have caused extra difficulties in the translation process. Although the lexical diversity and lexical density values were close to the ideal ones, they did not indicate whether the words would turn out to be correct or incorrect in the evaluation process.
B) Text difficulty and readability by Coh-Metrix
Although vocabulary is an essential element of text comprehension, there are other factors affecting readability. Accurately predicting the difficulty of reading texts in a second language, including translations, is important to educators, writers, and publishers alike (Crossley et al., 2008). Readability is also a useful indicator for those who want to assess written production in an L2, especially in the case of holistic assessment, where one of the most important questions is how the text reads in the target language (Waddington, 2001). To examine these essential features I used Coh-Metrix (Graesser, McNamara, Louwerse, & Cai, 2004), which, in addition to surface components of language, aims to measure deeper and more global attributes, such as textual coherence, permitting detailed and accurate analysis of language (Crossley et al., 2008; McNamara et al., 2014). It also includes more traditional formulas, such as the Flesch Reading Ease (FRE) and the Flesch-Kincaid Grade Level (FKGL).
The FRE test uses two surface-level variables to determine the readability score:
- the average length of sentences (measured by the number of words);
- the average number of syllables per word.
It then provides a score between 0 and 100 (Table 60). The higher the number, the easier the text is to read.
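For reference, the standard Flesch Reading Ease formula combines these two variables as follows:

FRE = 206.835 - 1.015 × (total words / total sentences) - 84.6 × (total syllables / total words)

so a text with long sentences and many polysyllabic words receives a low score.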
Table 60
The Flesch Reading Ease scores (From: https://linguapress.com/teachers/flesch-kincaid.htm)
Score Notes
The Flesch-Kincaid Grade Level test (FKGL) (Table 61) offers a scale developed through the re-calculation of three widespread reading ease formulas, ARI (Automated Readability Index), Fog and Flesch, each operating at surface level. Based on the idea that "reading material should be written at a level of difficulty appropriate to the reading ability of those reading it" (Kincaid, Fishburne Jr., Rogers, & Chissom, 1975, p. 20), this test rates texts according to US school grade level. Scores 0 to 6 assume basic, 7 to 10 average, 12 to 16 skilled, and above 16 academic readers. For most documents, a score of approximately 7 to 8, corresponding to a schooling age of 13 to 14, ensures that the content is comprehensible for approximately 80% of American readers (Linney, 2017).
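Again for reference, the standard Flesch-Kincaid Grade Level formula is

FKGL = 0.39 × (total words / total sentences) + 11.8 × (total syllables / total words) - 15.59

which translates the same two surface variables into an approximate US school grade.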
Table 61
Flesch-Kincaid Grade Level (From: https://linguapress.com/teachers/flesch-kincaid.htm)
Compared to the CEFR scale, which takes B2 level to define a competent language user, the FRE index should be between 60 and 70 for the same level (Linney, 2017), whereas the FKGL value should be between 5 and 11.
Table 62
The value of FRE and FKRL indices compared to CEFR levels
(From https://linguapress.com/teachers/flesch-kincade.htm)
The scores awarded for any written text by any scale are not always perfectly accurate, as they
are the result of computerized analysis that does not take incidental criteria into account;
however, they offer a fairly good overall assessment of how easy or difficult a written text
will be to comprehend.
The Coh-Metrix test (Table 63), according to the developers (Graesser et al., 2004), offers a much deeper and more detailed analysis than FRE or FKGL. Using this test, it was easy to establish how many paragraphs and sentences the translated texts consisted of, and besides the type-token ratio, it added new dimensions to the lexical profile of the texts in focus. The MTLD index is calculated as the mean length of sequential word strings that maintain a criterion level of lexical variation (McCarthy & Jarvis, 2010). Focusing on textual patterns, MTLD analyzes all the words in the text, from the first to the last one, including function words, i.e., sequentially (Koizumi, 2012). The larger the value, the more lexically diverse the text is. Research results also demonstrate that the Coh-Metrix L2 Reading index performs better than traditional readability formulas, as it is founded on cognitively inspired variables, including a word overlap index related to text cohesion, a word frequency index related to decoding, and an index of syntactic similarity related to parsing (Crossley, Allen, & McNamara, 2011).
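To make the sequential logic of MTLD concrete, the following simplified sketch implements a single forward pass of the measure described by McCarthy and Jarvis (2010), using the commonly cited 0.72 criterion; the published index averages a forward and a backward pass, so this is an approximation rather than the full procedure.

    def mtld_forward(tokens, threshold=0.72):
        # Count "factors": stretches of text whose running type/token ratio
        # has fallen to the criterion level (0.72 by default).
        factors = 0.0
        types = set()
        count = 0
        for token in tokens:
            count += 1
            types.add(token.lower())
            if len(types) / count <= threshold:   # the stretch is "used up"
                factors += 1
                types.clear()
                count = 0
        if count:                                  # partial credit for the remaining stretch
            remainder_ttr = len(types) / count
            factors += (1 - remainder_ttr) / (1 - threshold)
        # MTLD = text length divided by the number of factors; longer stretches
        # before the TTR drops mean higher lexical diversity.
        return len(tokens) / factors if factors else float(len(tokens))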
Table 63
The Coh-Metrix profile of the 14 translated texts
Looking at the data in Table 63 we can immediately see that each participant kept the five paragraphs of the source text; however, the number of sentences ranged between 11 and 17, with a mean of 13. As the number of sentences in the source text was only ten, we can assume that each student re-structured the text by turning some of the very long complex sentences of the source text into shorter ones, which, if not done appropriately, may be judged as a major error in the assessment process.
Supplementing the TTR, the Coh-Metrix analysis provided another strong measure of lexical diversity: the MTLD. According to the data in Table 63, the lexical diversity it indicates is different from the lexical diversity expressed by the TTR index: in the case of TT2, TT3, TT4, TT11 and TT14 the TTR was 0.56, whereas the corresponding MTLD index was different for each text. The reason lies in the way the textual patterns are formed, including the cognitive processes underlying them. As was already stated, the higher the value, the more diverse the vocabulary of the text is, meaning that the highest lexical diversity (93.273) was achieved by TT4, while the least diverse vocabulary (54.353) was displayed by TT7. The difference between the two values is considerable.
The readability indices shown in Table 63 indicate very difficult texts. The mean FRE index of 37.920 places the translations in the category of "difficult to read, best understood by college graduates" (Table 60); understanding them would be very difficult, maybe impossible, for an average reader. The mean of the FKGL index (14.991) also indicates "higher education level", which corresponds to the "proficient (C2) user" level on the CEFR scale (Tables 61 and 62).
According to the FRE, the TT5 translation (30.069) is the most difficult to read, and TT11 (45.949) is the easiest one, still requiring "higher education level" for comprehension. If we look at the FKGL indices, we can also see that all scores are at or above the 11th–12th-year value, allowing comprehension for university and college graduates; however, this index defines the TT7 translation (16.761) as the most difficult one to read, and TT11 (11.234) as the easiest one. Perhaps the most interesting finding is that the RDL2 index defines the TT11 translation (8.455) as the most difficult to comprehend, followed by the TT13 (8.621) and TT4 (8.958) texts. The TT7 translation (17.295), which was predicted by FKGL to be the most difficult one, is characterized as the easiest text to read. However, we know from research (Crossley, Allen, & McNamara, 2011; Graesser et al., 2004; McCarthy & Jarvis, 2010) that FRE and FKGL use surface-level variables such as the average length of sentences and the average number of syllables per word to predict readability. The RDL2 index, on the other hand, is a formula based on psycholinguistic and cognitive models of reading, including a word overlap index, a word frequency index and an index of syntactic similarity: variables closely connected to text comprehension processes (Crossley et al., 2011). The findings show that Coh-Metrix, by analyzing texts on multiple measures of language and discourse such as word concreteness, syntactic simplicity, referential cohesion, causal cohesion and narrativity, permits a more accurate and more detailed text analysis than the traditional tests, which analyze descriptive surface features; therefore, it discriminates better and can predict text difficulty more precisely (Graesser, McNamara, & Kulikowich, 2011).
6.2.5 Summary
Knowing the difficulty of a translated text is important in translation pedagogy, including its final element, the assessment of the translated target language text. To measure the difficulty and readability of the translated texts, two important features of translations, I relied on two tools: the Lexical Frequency Profile and Coh-Metrix.
As the source text was a business text – the summary of an annual monetary report – the relatively high percentage (13.46%) of academic words could be predicted, making the translators' choices difficult, along with comprehension and readability for the average reader. The length of the sentences in the source text also needs to be mentioned: the shortest one consisted of 15 words, the longest one of 49 words, the latter making up about a fifth of the 271-word-long text. The transfer operations the students applied show what difficulty meant for them: in order to simplify the text and make it more feasible to translate, they restructured it by creating more and shorter sentences in their translations. This strategy, however, does not necessarily improve the quality of the translation. The target language texts were longer than the source text: the shortest translation comprised 300 words, the longest one 370 words, nearly 100 more tokens than the total number of words in the source text. All this may have added to the high difficulty level of the business text, as all three measures applied (FRE, FKGL, RDL2) predicted text difficulty to be at or above college / university level.
The indices calculated by the Lexical Frequency Profile show a relatively low
coverage of frequent vocabulary (70.94% K1 words), and a relatively high ratio of academic
and infrequent vocabulary compared to Nation (2006). Based on these findings, I believe that
the vocabulary of the translations is varied and sophisticated, and the complexity of the texts
makes them difficult to comprehend, which might be an indicator of translation quality.
As the research findings of Study 3 (Section 5.4) showed, the assessment scale used in the Translation Studies BA programme at the University of Pécs – referred to as the UP scale – had numerous shortcomings, including very low inter-rater reliability. Thus, it seemed reasonable to devise a new tool for assessing student translations. In order to develop a new assessment scale, I relied on what was discussed in the literature review in Chapter 5 (Section 5.1) and the findings presented in Sections 5.4 and 6.2. I also revisited the teacher interviews (Section 5.4.5) to see if I could implement an idea suggested by R2 to include text-specific criteria with an emphasis on terminology. I wanted to see whether such a step would improve the reliability of Preselected Items Evaluation (PIE), a norm-referenced scale for assessing translations developed by Kockaert and Segers (2017).
The PIE method discussed in Section 5.1 was developed for evaluating the translation product with the aim of giving a rating or making a value judgment. For efficiency and time management reasons Kockaert and Segers limited the number of preselected items, adding that the ideal number is "as much as needed, but not more" (Kockaert & Segers, 2017, p. 150), making the limit expandable. The method makes a distinction between correct and wrong solutions but, unlike the UP scale, which is based on major and minor errors, it does not distinguish between levels of errors. With the PIE method, the items to be evaluated are selected on the basis of the calculation of p values (item difficulty) and d indices (discrimination values) for each item, which can be words or word groups in the source text, and the solutions for the preselected items are also predetermined (a sketch of how these two indices can be computed is given after the list below). In the case of PIER, as I named the assessment tool I set out to develop (PIE Revised), I followed different steps. After a detailed analysis of the translated texts (Section 6.2) I followed the procedures below, inviting expert raters to participate in the study with a threefold aim:
a) to identify elements (items: words, expressions, grammatical features) which they found problematic and of discriminating value in the source text (ST);
b) to offer possible translations for the identified elements in the target text (TT);
c) to assess the fourteen translations analyzed in Section 6.2 using
- PIER, i.e., the revised version of Kockaert and Segers's PIE scale, and
- the UP scale used for assessing translations at the University of Pécs.
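As a minimal, hypothetical illustration of the classical item statistics behind the original PIE preselection, the sketch below computes an item's p value as the proportion of candidates who translated it correctly and its d index as the difference in that proportion between the highest- and lowest-scoring candidate groups. The 27% group split is one common convention, not necessarily the procedure applied by Kockaert and Segers, and the data are invented.

    def item_p_value(correct_flags):
        # Proportion of candidates who rendered this item correctly (item difficulty).
        return sum(correct_flags) / len(correct_flags)

    def item_d_index(correct_flags, total_scores, group_share=0.27):
        # Discrimination: p value in the top-scoring group minus p value in the
        # bottom-scoring group, using the conventional 27% split.
        ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i], reverse=True)
        k = max(1, int(len(ranked) * group_share))
        top = [correct_flags[i] for i in ranked[:k]]
        bottom = [correct_flags[i] for i in ranked[-k:]]
        return sum(top) / len(top) - sum(bottom) / len(bottom)

    # Hypothetical data: 1 = correct rendering of one item by each of ten candidates,
    # alongside each candidate's total test score.
    flags = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
    totals = [22, 20, 11, 19, 9, 18, 21, 12, 8, 17]
    print(item_p_value(flags), item_d_index(flags, totals))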
6.3.2 Participants
The participants of the study were four raters, all female, each with considerable experience in L2 assessment, and one financial expert, a male, working for a global investment management corporation. One of the raters was also an instructor in the Translation Studies BA programme at the University of Pécs, with experience not only in translation assessment but also in translating. The three other raters had considerable experience in exam assessment at different language examination institutions.
6.3.3 Procedures
Following the steps recommended by Kockaert and Segers (2017), and relying on the raters' expertise, I aimed to skip the first round of the item preselection procedure of PIE (identifying the segments, then the p values and d indices), which, as the developers also acknowledged, demanded much time and energy. This step is admittedly subjective in nature, but I counterbalanced this by relying on several expert raters. Because of Covid-19 restrictions, the training of the participating raters and the financial expert took place on an electronic platform: they were emailed Kockaert and Segers' (2017, p. 152) steps, which they had to follow in their work. The first phase of the study, conducted in August 2020, involved the item selection process. As a first step, the raters were given the source text (a 271-word business text in Hungarian) and were instructed to identify in it all items (a word, an expression, a longer chunk, or a structural or conceptual unit) which they found important to include in the assessment, with a focus on translation brief relevance and domain-specific and test-specific criteria, strictly following the PIE method (Kockaert & Segers, 2017). They were also asked to give all the possible TT (English) translations for the preselected items. Then the item list was sent to the financial expert, who checked the correctness of the selected items and corrected inappropriate or incorrect terminology.
As is evident, the preselected items do not cover every detail in the text, which raised an important question: what do raters do when they come across an incorrect solution for an item which was not preselected? The answer was provided by Kockaert and Segers (2017, p. 153), who suggest that the item should be included only if the incorrect solution was proposed by numerous candidates; when proposed by only one, it should not be included.
In Kockaert and Segers's study, ten items (30 words) were preselected in a 118-word-long text, i.e., 25% of the total number of words, which left 75% of the text uncovered (2017, p. 156). Nineteen MA Translation and Interpretation students participated in the study conducted at KU Leuven, Belgium, which did not represent a large sample, a fact that turned out to be an acknowledged limitation. With PIE, each candidate is evaluated on the same pre-selected
key items, chosen on the basis of translation brief relevance; this method is expected to ensure
reliability in the context of translation evaluation. However, it does not encompass an entire
translation performance; the evaluation is restricted to the translation of a few segments
(between one and six words), which does not seem to be sufficient for assessing a complex
skill (translation competence in this case), so no assurances can be given regarding the
reliability and the validity of the method (Eyckmans & Anckaert, 2017, p. 45). The
conclusion remained the same when the PIE test was repeated with a larger number of
participants, 113 BA translation students. The preselection and the evaluation in this latter study (Eyckmans & Anckaert, 2017) were carried out by seven expert raters, selected on the basis of
their extensive experience in assessing translations. The main problem was that the 38
preselected items showed very little overlap; only five of the items were selected by more
than half of the raters. The other problem was that the retained ten items did not discriminate
well enough, and the reliability calculated for the test results amounted to .558, which is low.
According to Eyckmans and Anckaert (2017, p. 50), the “problem lies with the inadequate
number of preselected items and their low discriminating power”.
As the aim of the present study was to create a relatively simple, easy-to-use, reliable and valid assessment tool, I decided to "re-visit" Kockaert and Segers' PIE. Since I had access to only 14 student participants, I had one choice: to work with a larger number of preselected items, covering bigger segments of the ST. The five expert raters, working individually, focused on grammatical features, terminology, style and register, spelling, and punctuation.
The result was a pool of 63 items. Some of the longer units identified by one marker contained a shorter unit selected by other raters, so when preparing the final list (Table 64), only the longer ones were included. There were only two items preselected by all five markers (segments 3 and 22), but all twenty-five items included were identified by at least three of the markers. The length of the identified units was between three and ten words. In this way 128 words (roughly 40%) of the text were included in the list of preselected items (Figure 25, the preselected items are in bold), not in isolation but together with their immediate surroundings, keeping in mind the Firthian advice: "You shall know a word by the company it keeps" (Firth, 1957, p. 11).
A magyar gazdaság (1) eddigi teljesítményét ugyan pozitívan értékeli a Nemzetközi Valutaalap, de (2)
a legfrissebb országértékelésükből kiderül, hogy (3) még mindig sérülékenynek tartják a magyar
gazdaságot.
A Nemzetközi Valutaalap (IMF) delegációja április 22-én (4) fejezte be a magyar gazdaság szokásos
éves felülvizsgálatát. A jelentésben (5) a közelmúlt történéseit összegezve megállapították, hogy (6)
eddig minden szép és jó, a magyar gazdaság (7) erőteljesen növekedett az elmúlt években, amelyet (8)
támogató gazdaságpolitika segített egy kis kedvező külső környezettel és az uniós források (9)
nagyarányú lehívásával megspékelve. Az erős (10) belső keresletnek köszönhetően 2015-ben 2,9
százalékos volt a gazdasági növekedés. Az IMF továbbá megállapítja, hogy jelentősen csökkent a
munkanélküliség, nőtt a foglalkoztatottság – igaz, az elemzés készítői hozzáteszik: (11) a
magánszférában bekövetkezett növekedés mellett (12) a közmunkaprogram is bővült, az inflációs
nyomás alacsony, (13) a várakozások pedig horgonyzottnak tűnnek.
A (14) magánszektor hitelezése tovább zsugorodott, a nemteljesítő hitelek (NPL) aránya (15) a
csökkenés ellenére még mindig magas, a bankok profitabilitása javul. A (16) rossz hitelek kezelésére
már történtek lépések – nyugtázza még a dokumentum.
Szintén (17) pozitívumként jelenik meg az értékelésben, hogy az ország sérülékenysége csökkent,
viszont az erre irányuló intézkedések az elmúlt időszakban (18) a kockázatokat az állami szektorra és a
jegybankra helyezték át. (19) Emellett magas maradt az adósságszint, így az ország finanszírozási
szükséglete is jelentős, ráadásul (20) továbbra is magas a külföldi finanszírozási szükséglet, a jelentős
(21) negatív nemzetközi befektetési pozíció pedig továbbra is kockázatot jelent.
Tavaly a (22) költségvetési hiány a vártnál kedvezőbben alakult, az államadósság a GDP 75,3
százalékára csökkent a 2014-es 76,2 százalékról. Az IMF várakozásai szerint a költségvetési deficit (23)
a 2 százalékos GDP-arányos cél alatt marad – beleértve egy (24) a GDP 1 százalékának megfelelő
strukturális fiskális lazítást, az államadósság pedig (25) a GDP 74,25 százalékára mérséklődhet idén.
Figure 25
The Hungarian text with the preselected items marked in bold
Although the list was intended to be domain specific, straightforward business terminology and words which are easy to look up, e.g., 'Nemzetközi Valutaalap', 'költségvetés', 'hiány', 'nemteljesítő hitel', 'államadósság', etc., were not included. On the other hand, there was a heavy emphasis on grammatical structures, including tenses, especially in the sentences in reported speech. The "end product" of the preselection, the 25 items with their possible translations (Table 64), was checked and approved by the financial expert.
Table 64
The list of preselected items and their translations for PIER evaluation
(9) [kedvező külső környezettel és uniós források] nagyarányú lehívásával megspékelve (4)
    1) spiced with / along with / combined with the large scale utilization / drawdown of [EU funds / resources]
    2) spiced with / [favorable external environment] and the large scale drawdown of [EU funds]
    3) helped… with large scale drawdown of EU funds

(10) az erős belső keresletnek köszönhetően (4)
    1) due to / thanks to / owing to / because of / as a result of strong / heavy domestic / internal / national demand
    2) strong / heavy domestic / internal / national demand resulted in [2.9 percent economic growth]

(11) a magánszférában bekövetkezett növekedés mellett (3)
    next to / besides / together with / coupled with / in addition to / along with the growth (that occurred / happened) in the private sector / sphere

(12) közmunka program is bővült (4)
    the public / communal work(s) programme / scheme / project
    - has also been extended
    - has also expanded / grown / broadened / widened

(13) a várakozások pedig horgonyzottnak tűnnek (4)
    the expectations seem / appear (to be) anchored / stable

(14) a magánszektor hitelezése tovább zsugorodott (4)
    private sector lending / lending to the private sector
    - has further shrunk / decreased
    - has continued to shrink / decrease

(15) [a nemteljesítő hitelek aránya] a csökkenés ellenére még mindig magas (3)
    1) [the proportion / rate of non-performing loans]
    - although decreasing, is still high
    - despite / in spite of decrease is still high
    2) [the rates of non-performing loans] are still high despite the decrease / decline
    3) despite the drop

(16) a [rossz] hitelek kezelésére már történtek lépések (3)
    1) initiatives / measures / steps / actions have been taken (in order) to manage / treat / tackle (bad) loans
    2) steps have already been taken to deal with bad loans
    3) handling of bad loans has already started
    4) There have been steps to handle disadvantageous loans

(17) [szintén] pozitívumként jelenik meg [az értékelésben] (3)
    - [another] positive remark / note / fact / part / aspect [in the assessment / evaluation]
    - [also] appears as / shows as positive / positively [in the assessment / evaluation] / as a positive element
    - another positive remark / note [in the evaluation / assessment that]
    - [the evaluation / assessment] presents / points out / pinpoints [yet another] positive point / feature [noting that…]
    - the report / assessment considers it positive that
    - [decreased vulnerability] is considered as a positive feature in the report
    - the report considers [decreased vulnerability] as a positive feature…

(18) [az erre irányuló intézkedések] a kockázatokat az állami szektorra és a jegybankra helyezték át (5)
    - [the adopted measures] have transferred / shifted the risks (on)to the public / state sector and the central bank
    - risks have been shifted to / transferred to the private sector and the central bank

(19) emellett, magas maradt az adósságszint (3)
    - besides / additionally / in addition, debt levels have remained / stayed high / debt level has remained / stayed high
    - debt levels are / debt level is still high / remained high

(20) magas a külföldi finanszírozási szükséglet (3)
    - the external / foreign funding needs / financing needs also remain high / are high
    - overall high external financing needs remain
    - external / foreign funding / financing needs are / will continue to be / will remain high
    - the need for foreign financing remains high

(21) [a jelentős] negatív nemzetközi befektetési pozíció (4)
    [the significantly] negative international investment position

(22) [költségvetési hiány] a vártnál kedvezőbben alakult (5)
    the fiscal / budget / budgetary deficit
    - turned out favorably / to be (more) favorable compared to expectations
    - turned out better than expected / to be more favourable than expected earlier
    - was lower / better than expected

(23) a 2 százalékos GDP-arányos cél alatt marad (3)
    will remain / stay under the two percent GDP-proportional / proportionate target / goal

(24) a GDP 1 százalékának megfelelő fiskális lazítást (3)
    - a fiscal easing corresponding to / equivalent to / equal to the 1% of the GDP
    - equivalent to 1% of the GDP fiscal easing

(25) [az államadósság] a GDP 74,25 százalékára mérséklődhet idén (3)
    [the government debt]
    - may / might decrease / decline to
    - may / might be reduced to the 74.25% of the GDP
Note: The numbers in brackets refer to the number of raters marking the given item.
When the list was completed, two of the five raters withdrew from the assessment process. As
new markers were needed to get enough data, I had to recruit new volunteers; I could find
only one. As a next step, the task of the four raters was twofold: first they were asked to
assess the fourteen translations using PIER; then, they were to assess them using the UP scale.
The aim of this double process was to compare how the two assessment scales worked in the
evaluation of fourteen translated texts. I wanted to find out which of the two offered better
discrimination and reliability, including the agreement between the raters, and fairer grades in
the end. Upon completing their tasks, the four raters were asked to fill in a short questionnaire
on the two scales and to share their thoughts anonymously on what their experiences were.
As the test takers were evaluated on the same items (Section 6.3.3, Table 64 and Figure 25), PIER was expected to ensure good inter-rater reliability. However, the raw data provided by the raters (Table 65) suggested that the tool, in this respect, did not meet expectations. Most probably, the raters sometimes gave points for translations which were not listed among the preselected items, or they missed or neglected translations in the list; otherwise, the scores given for the same solutions by the four raters would have been the same.
Table 65
Raw scores given by four expert raters for 14 test-takers' HU-EN translation tests using PIER
Student code  ER1  ER2  ER3  ER4
S01 13 12 12 13
S02 13 12 12 12
S03 22 18 17 18
S04 21 17 18 18
S05 22 11 11 11
S06 19 17 18 19
S07 14 11 12 12
S08 21 18 18 17
S09 20 13 14 15
S10 21 15 15 16
S11 22 11 12 11
S12 16 11 12 12
S13 11 6 6 8
S14 18 15 13 15
Mean 18.07 13.36 13.57 14.07
SD 3.912 3.455 3.390 3.269
Comparison of the scores in Table 65 shows that the four expert raters rarely arrived at the same scores for the same test. The closest match was achieved in the case of translation S02 (13-12-12-12). The ranges of scores given by the four raters were different: between 11 and 22 in the case of ER1, between 6 and 18 in the case of ER2, between 6 and 18 in the case of ER3, and between 8 and 19 in the case of ER4. Even though ER2's and ER3's range was the same, the differences between them within that range were considerable. The item statistics, the means and SD values calculated with SPSS, indicated that ER1 was the most permissive rater, with the highest mean and SD (mean = 18.07; SD = 3.912), whereas ER2 was the strictest. However, the scale with its set list of preselected items was not permissive; ER1 either did not follow the list according to the instructions or made errors through inattention during the assessment process. The inter-item correlation matrix (Table 66) shows that ER1 has the highest correlation with ER3, but even this value of 0.646 is moderate, as variables with correlation values greater than 0.7 are considered strongly correlated (Shrout & Fleiss, 1979). It seems as if ER1 had used a different tool to evaluate. ER2 shows strong correlations with both ER3 and ER4, indicating that they measured the same characteristics.
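The correlation and reliability figures discussed here and below were obtained with SPSS; purely as an illustration of what those figures summarize, the following sketch computes the pairwise Pearson correlations between raters and Cronbach's alpha from a rater-by-candidate score matrix such as Table 65. The example uses only an excerpt (S01-S04), so its output will differ from the values reported for the full dataset.

    import statistics

    def pearson(x, y):
        # Pearson correlation between two raters' score vectors.
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def cronbach_alpha(columns):
        # columns: one score vector per rater; rows are candidates.
        k = len(columns)
        totals = [sum(scores) for scores in zip(*columns)]
        item_vars = sum(statistics.variance(col) for col in columns)
        return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

    # Excerpt of Table 65 (S01-S04 only), one list per rater; applying the same
    # functions to the full 14-row matrix approximates the SPSS output reported.
    er1, er2, er3, er4 = [13, 13, 22, 21], [12, 12, 18, 17], [12, 12, 17, 18], [13, 12, 18, 18]
    print(round(pearson(er2, er3), 3), round(cronbach_alpha([er1, er2, er3, er4]), 3))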
Table 66
Inter-item correlation matrix of the four raters using PIER
The 0.835 intraclass correlation coefficient (ICC) value (Table 67) is higher than the acceptable 0.75 (Shrout & Fleiss, 1979, p. 426) and, despite the very low lower bound (0.477), indicates high internal consistency. The 0.932 value of Cronbach's alpha, also calculated with SPSS from the scores given by the four raters, shows very good scale reliability (Crocker & Algina, 2006, p. 142).
Table 67
The Intraclass Correlation Coefficient of the assessment by four raters using PIER
Average Measures: ICC = 0.835 (lower bound 0.477, upper bound 0.948)
According to the statistics, the weak point of the present assessment using PIER is inter-rater reliability, expressed by the Krippendorff's alpha figure (0.5190), which was lower than the acceptable α ≥ 0.80 value (Krippendorff, 2004, p. 241). The result changes if we leave out the outlier rater, ER1, from the statistics. The analysis of the ER2, ER3 and ER4 scores provides higher values for all indices: the ICC is calculated to be 0.984, Cronbach's alpha is 0.988 and Krippendorff's alpha is 0.932, all acceptable. These findings raise the question of rater responsibility: even one rater's neglect of the assessment instructions may corrupt the reliability of the instrument.
Overall, three out of the four raters used PIER as expected, whereas one of the raters seemed to be out of step with them.
B) Assessment using the UP scale
The assessment of the 14 translations by the same four raters using the UP scale (see also Section 5.4, Study 3) provided different results (Table 68). The error-based scale, as has already been mentioned, works with two levels of errors, major and minor. At the end of the evaluation process the errors are counted and, based on their number, a grade is calculated.
Table 68
Number of errors identified using the UP scale by four expert raters in 14 HU-EN translation tests
Student code  ER1 H  ER1 h  ER2 H  ER2 h  ER3 H  ER3 h  ER4 H  ER4 h
S01   1   5   7  11   5  20   4  10
S02   2   9   6  14   6  14   9  15
S03   1   6   4  11   4  10   2   7
S04   1   7   9  12   4   7   7   9
S05   0   5   7  20   5  11   8  10
S06   2   7   8   5   3   8   6   9
S07   3   6  14  10   3   9  10  12
S08   1   4   1   6   2   4   1   6
S09   1   5   6   8   2  13   7  10
S10   4   4   7   7   6  10   8   8
S11   3   4   5  15   7  12   7  13
S12   1   8   8  16  10  17   9  16
S13  10   8  14  10  17   6  18   9
S14   2   5   6  11   4   6   5  11
Note: H = major errors, h = minor errors.
First, I analyzed the treatment of major errors (H) by the four raters. One look at Table 68 is enough to see that most of the scores vary to a large extent, and so do the means calculated with SPSS. The SD values also indicate important differences between the raters, reflected in the low Krippendorff's alpha figure, which was 0.2376 in the case of major errors (H); a very low value compared to the acceptable α ≥ 0.80 (Krippendorff, 2004), suggesting poor inter-rater reliability. The individual scores in Table 68 reveal the reasons for this poor outcome: the raters' subjective interpretations of H and h based on the UP scale resulted in subjective judgements concerning their error management and, eventually, in the considerable diversity of the scores they gave.
Table 69
Inter-rater correlation matrix of the four raters concerning major (H) mistakes using the UP scale
The inter-rater correlation matrix (Table 69) shows the correlations between the raters. ER1, the most permissive rater again, displays its strongest correlations with ER3 and ER4, but the values of 0.787 and 0.786 are barely higher than the acceptable minimum of 0.70. The strongest correlation can be seen between ER2 and ER4; the 0.819 value shows that their scoring was similar, as reflected in their means as well (7.29 and 7.21, respectively).
Table 70
The Intraclass Correlation Coefficient of the scores of four raters concerning major (H) errors using the UP
scale
The relatively high Cronbach's alpha figure of 0.899 suggests good scale reliability (Crocker & Algina, 2006); however, the 0.791 Intraclass Correlation Coefficient, with a 0.420 lower bound, is barely higher than the 0.75 threshold of acceptable internal consistency (Shrout & Fleiss, 1979) (Table 70). All these findings point to a problematic assessment scale whose shortcomings are rooted in its heavy emphasis on errors, allowing the raters to make subjective judgements in the assessment process and to identify a different range of serious and less serious errors.
The identification of minor errors resulted in similarly diverse scores (Table 68). Based on the means, there were considerable differences between ER1 and the other three raters; the means of ER2, ER3 and ER4, even though they gave different scores, were closer to each other. The Krippendorff's alpha value was 0.1584, which indicates very low inter-rater reliability. The inter-rater correlation matrix (Table 71) barely shows any correlation among the raters; even the highest value (0.581) is much lower than the acceptable figure.
Table 71
Inter-rater correlation matrix of the four raters concerning minor (h) errors using the UP scale
The Cronbach’s alpha value of 0.694 indicated poor scale reliability, and the 0.573 Intraclass
Correlation Coefficient showed very low internal consistency, again a sign of the permissive
nature of UP scale and the raters’ subjective error treatment.
In order to explore how the two assessment tools worked, the four raters were asked to fill in
a short questionnaire (see Appendix E). Their answers are analysed in the order the questions
appeared in the questionnaire.
1) Which of the two scales do you find better for assessing translations? Please, explain
your choice.
The four raters' answers revealed that ER1, ER2 and ER4 preferred using the new PIER, although they identified its shortcomings. Explaining their choice, each rater emphasized that it was easy and straightforward to use and described it as more objective than the UP scale. ER1 said "it is surprisingly easy to use". ER2 emphasized its user-friendliness and that "it aims to look for what candidates know", an important argument that no other rater mentioned. ER4 thought "it offers a higher degree of objectivity and better agreement between the raters". Only ER3 voted for the UP scale, defining it as a tool "which assesses the whole text… both major and minor mistakes are considered during the evaluation process". However, ER3 also acknowledged that PIER was more objective and more user-friendly than the UP scale, adding that "it may be time-consuming to keep in mind all the possible answers for the various items in the key".
2) What are the advantages and disadvantages of the two scales?
Table 72 offers an overview of the raters' opinions in this respect. Their overall message is clear: both tools have numerous strengths and weaknesses. When describing PIER, all four raters agreed that it assessed translations objectively and that it was easy to use. It was also time-saving, a characteristic mentioned by ER2 and ER4, although they were aware of the fact that the pre-selection process was time-consuming and laborious. Only ER2 mentioned that it focused on what candidates knew, and not on errors, and ER1 said it was also more reflective. Perhaps the greatest shortcoming of PIER is that "it does not cover the whole text" and, as a result, it does not treat the errors that occur in the unchecked parts, an argument emphasized by ER3. Another disadvantage mentioned by ER2 and ER4 was that it was designed for one use only; as soon as the list of preselected items had fulfilled its aim, it was discarded, and a similar list of key items had to be established for each and every new text.
Table 72
The advantages and disadvantages of the two scales and the raters' suggested improvements

UP scale
Advantages: can be applied any time for any translation test (1); offers easy error identification (1); assesses the whole text (2); considers both major and minor problems (1); offers a fair assessment of students' skills and overall mastery of L2 (1)
Disadvantages: too rigid (1); error-based (4); demotivating (2); allows subjective error treatment (4); time consuming (1)
Suggested improvements: including more detailed instructions for assessors regarding the types of mistakes (1)

PIER
Advantages: more reflective (1); easy to use (4); offers equal assessment; time saving (2); objective (3); aims to look for what candidates know (1)
Disadvantages: the key may not include all possible options (2); pre-selection takes extra time (2); the pre-selected items do not cover all the errors (2); does not assess the whole text (3); parts of the text remain unchecked with mistakes (2); involves the element of luck (1); can be used only for one specific test (2)
Suggested improvements: including a holistic part (1); defining the number of pre-selected items based on the length of the text (1); including proper training on using the scale (1)

Note: The numbers in brackets refer to the number of raters who gave the answers.
The four raters agreed that although the UP scale assessed the whole text, it left too much room for subjective judgements. Even ER3, who claimed it to be "fairer and more suitable to assess translations", admitted that it was difficult to decide what to treat as a major or as a minor error. This feature is reflected in the error identification practices of the four raters (Table 68). The ranges of the different scores (marking the numbers of identified errors) indicate important disagreements, a characteristic which has been shown earlier (see Section 5.4.4). Examining the number of major errors counted in individual translations, only a few overlaps could be found: one in S02's translation, where both ER2 and ER3 marked 6 major errors, and one in S03's translation, where ER2 and ER3 counted 4 major errors. In the case of S07, the overlap with 3 major errors occurred between ER1 and ER3; in the case of S11, between ER3 and ER4 with 7 marked major errors. The best agreement concerning major errors was in the assessment of translation S08, where ER1, ER2 and ER4 identified only one error, and ER3 marked two errors of this level. This high degree of disagreement may, at least partly, be explained by the large number of errors at both levels (in translation S13 the number of marked Hs was between 10 and 18); the low quality of the translations may have made consistent error marking difficult for the raters. The main reason, however, must be the permissive nature of the UP scale, which allowed the raters to make subjective judgements concerning errors at both levels.
3) Which of the two scales offers fairer assessment? Please, explain your choice.
In ER2’s opinion, “both scales can offer fair assessment if they are used in a responsible
way, however, the PIE tool offers more objective assessment, as it does not involve the rater’s
subjective choice in error treatment”. ER3 thought it was the UP scale, as “it takes the whole
text into consideration when assessing students’ performance… it measures their translation
skills better”. Both ER1 and ER 4 opted for PIER, ER1 emphasizing its caveat as not
presenting all the possible options in the pre-selected list.
4) Is there anything you would change in the scale you marked better in the first
question? If yes, specify it, please.
As has been stated, ER1, ER2 and ER4 chose PIER as the better and fairer translation assessment tool. ER1 would not change anything in it but would be more astute in the item preselection. ER2 would define the number of preselected items based on the length of the text, whereas ER4 would include a holistic part, as PIER "neglects big chunks in the text". Although ER3 voted for the UP scale, she would "appreciate more detailed instructions for the assessors".
Although the raters were laconic in their opinions, what they said helped identify the strengths and weaknesses of the two scales, indicating that both need to be improved in more than one way. Overall, although PIER has some shortcomings, it offers more objective assessment and proved to be more reliable. Also, its focus on what candidates know (and not on their errors) is a positive feature in line with current trends in educational assessment. It is easy to use, even though the pre-selection process may take longer, and the list provided might not contain all the possible translation options for the preselected items. The fact that relatively large sections remain unchecked in the translated text might result in unfair grades. The UP scale was rejected by three raters because of its error-centeredness; all four raters agreed that applying the two-level error spotting is a difficult, exhausting process which might result in subjective decisions and unreliable results.
6.4 Summary
Chapter 6 aimed to present the process of working towards a new assessment scale, an improvement on the UP scale, which had been used to assess student translations at the University of Pécs. The research was based on 14 translations from Hungarian to English prepared by second- and third-year TS students in April 2020. In the first part of the validation project the lexical characteristics and the readability of the translated texts were examined, as the two most telling predictors of translation quality. The key findings of this part of the research concerning lexical richness and lexical diversity, presented in the Lexical Frequency Profile of the translated texts (Section 6.2, Table 59), predicted comprehension problems. The ratio of very frequent (K1) words was only 70.94%, compared to the 78-81% found in Nation's (2006) study of general English texts. The percentage of AWL words proved to be higher (13.46%) than Coxhead's (2000) 10% average in general English texts. The indices on text difficulty and readability calculated using Coh-Metrix confirmed these findings; the translated texts showed lexical features inherent to specialized texts as opposed to general English texts. This was an expected finding; however, it does not provide a full picture of translation quality, therefore further measures were applied in the study.
In order to develop a new assessment scale I revised Kockaert and Segers’ (2017)
original Preselected Items Evaluation (PIE) method, mostly because it had started out as a
promising one, but, in the end, did not meet the expectations of the researchers, mostly
because its preselected items did not discriminate well, and were not sufficient for assessing a
202
complex skill (Eyckmans & Anckaert, 2017). I assumed that with more items, which are not
necessarily words but longer chunks, the method may prove to be more reliable. Therefore, if
longer sections are covered in a text and they are viewed “in the company they keep” (Firth,
1957, p. 11), better results may be achieved. My research questions aimed to find out how
many preselected items were necessary to create a norm-referenced, sufficiently
discriminating translation assessment tool. More importantly, I aimed to find out how such a
tool worked in practice, with a focus on rater consistency and inter-rater reliability. In addition
to exploring how reliable the modified PIE scale was, I was interested in how it worked
compared to the error-based assessment scale used in the English-Hungarian translation
studies programme at the University of Pécs.
The procedures of data collection turned out to be more complicated than expected.
Twice, in the spring and in the summer of 2020, the research was hindered by the restrictions
connected to the two waves of Covid-19. The pandemic affected the number of participants,
both students and raters. In the end, five expert-raters took part in what I now call the pilot
process, which consisted of a pre-selection and an assessment phase. As a result, the study
revealed that both scales had several shortcomings. The old UP scale, with its error-centeredness,
allowed subjective judgements in error marking and was difficult to follow; its use resulted in
very low inter-rater reliability indicating inconsistency in the scores. However, it had at least
one positive feature: it aimed to assess the whole text, counting every single translation error
in it. The new scale, named the Preselected Items Evaluation Revised version (PIER), offered
more objective assessment with its preselected list of items. The 25 items (each one a longer
chunk) of the final list covered 40% of the text and included domain-specific terminology,
grammatical structures with an emphasis on the use of tenses, style and register, spelling and
punctuation, and translation-specific features. The expert-raters were asked to identify possible
translations of the preselected items. During the assessment process, their only task was to
check if the preselected item was translated as it was offered in the list or not. All listed
equivalents were accepted. If the scale worked well, each item in a translation should have
been given the same score (1 or 0) by all four raters. As is often the case with assessment
scales, the agreement between the raters was less than perfect. Although inter-rater reliability
was much better than with the UP scale (0.5190 vs. 0.2376), it was clear that in the unchecked
sections of the translations both major translation errors and good solutions (not listed as
acceptable) remained unnoticed. Therefore, the answer to the research question ‘To what
extent and in what ways is PIER suitable to be a tool to assess translations?’ is that based on
the 14 translations and the dataset provided by four raters, the results are encouraging.
However, more participants and more careful item preselection may be necessary in order to
examine how PIER can be improved and used even more reliably. To achieve this aim,
further training and a more active participation of the expert-raters may be needed.
Conducting think-aloud protocols with raters could also add to the development of a valid,
reliable, and fair assessment tool.
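As a rough illustration of how agreement on such binary item scores can be quantified, the sketch below computes Krippendorff's alpha for nominal data, one widely used reliability index (cf. Hayes & Krippendorff, 2007). The scores in it are invented toy data, and the coefficient actually reported in Chapter 6 need not be this particular statistic.

from collections import Counter
from itertools import combinations
from typing import List, Optional

def krippendorff_alpha_nominal(units: List[List[int]]) -> Optional[float]:
    """Each inner list holds the 0/1 scores the raters gave one unit (one
    preselected item in one translation); units with fewer than two scores
    contribute no pairable values and are skipped."""
    coincidences = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        counts = Counter(ratings)
        for c in counts:
            for k in counts:
                pairs = counts[c] * (counts[k] - (1 if c == k else 0))
                coincidences[(c, k)] += pairs / (m - 1)
    totals = Counter()
    for (c, _k), v in coincidences.items():
        totals[c] += v
    n = sum(totals.values())
    if n <= 1:
        return None
    observed = sum(v for (c, k), v in coincidences.items() if c != k)
    expected = 2 * sum(totals[c] * totals[k] for c, k in combinations(totals, 2)) / (n - 1)
    return None if expected == 0 else 1 - observed / expected

# Toy data: four raters scoring three preselected items of one translation.
print(krippendorff_alpha_nominal([[1, 1, 1, 0], [0, 0, 0, 0], [1, 0, 1, 1]]))

An alpha of 1 would mean that every item in every translation received the same 0 or 1 from all four raters; values well below 1, like those observed in the pilot, signal that the raters disagreed on whether some preselected items had been translated acceptably.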
Chapter 7
Conclusions
Despite the substantial research on motivation, autonomy and assessment, which are clearly
interrelated according to the literature cited throughout the dissertation, these issues seem to
have been neglected in the field of translation studies. The reason lies in the
widely recognized fact that translation, as a linguistic domain, has long struggled for
recognition and has been underestimated as a profession (Baker, 2011). These trends did not
contribute to the prestige of the activity, and they also hindered interest in the field of
translation studies. As a researcher as well as a teacher and a practicing translator, I aimed to
address this gap with my thesis. The BA in Translation Studies specialization at the
University of Pécs offered an area in need of research: on the one hand, it had a curricular
structure to examine and on the other hand, a convenience sample to study. However, what
seemed convenient and feasible at the beginning turned out to be a hindering factor: I
soon realized that the number of translation students was minimal and that not everyone in
the target groups was a willing participant in my research.
Motivation, autonomy and assessment might be considered as overarching areas to be
researched independently, in their own right. That is why I devoted separate sections to the
three main focal points and implemented empirical research on these topics. The study
followed a mixed-methods research tradition. The qualitative phases of the thesis included data
collection using semi-structured interviews with students and teachers, essay and syllabus
analyses, a student questionnaire comprising both closed and open-ended questions, and the
analysis and the development of a new assessment tool. The datasets compiled in the studies
were analyzed by applying both qualitative and quantitative procedures.
Study 1 aimed to find answers to research questions addressing students’ motives to
choose translation as a specialization, their language background when they enrolled for the
programme, the most important factors, which fostered and/or hindered their motivation
throughout their studies, and the ways they were planning to use their translation skills after
graduating. The collected data revealed that their choice to learn translation was twofold: they
either wanted to learn to become professional translators, and the specialization seemed to be
a good first step to achieve this goal, or they found translation interesting and rewarding, and
they were happy to do it as a hobby. As is most often the case, their linguistic backgrounds were
not the same: their level of English language proficiency ranged between B2 and C1 on the
CEFR scale, so those who were less prepared linguistically experienced more demotivating
influences, reflected in their grades and in feeling less successful than their peers with firmer knowledge. The
study underlined what was emphasized in the cited literature, as well: that motivation is a
many-faceted ID factor, which can be boosted extrinsically, e.g., by regular and useful
feedback on students’ work, and intrinsically, e.g., by growing interest in their
tasks and activities. However, if their training lacks the boosting factors, they can easily
become demotivated. A promising finding is that the majority of respondents wanted to use
the skills they had learnt either by becoming professionals or as part of their future work. The others
wanted to do translation for pleasure (translating songs, film subtitles, or comics for themselves),
i.e., as a hobby.
Study 2 examined the role of autonomy in Translation Studies BA classes, with a
focus on learner autonomy, in order to find out how autonomous BA translation students were
during their studies in the programme, how the specialization programme supported their
learner autonomy, how course syllabi integrated and supported autonomy and motivation in
TS classes, and how teacher autonomy affected learner autonomy and motivation. Answers to
the ten questions in the student questionnaire revealed that the respondents started to become
more autonomous thanks to the practices they pursued in and out of classes: they were offered
tasks and assignments which taught them to be autonomous in their decisions and allowed
them to make their voices heard in the class discussions. The syllabi theoretically embraced
both student and teacher autonomy; however, their uniformity did not offer the students
many ways to practice it. Syllabus analysis also revealed that the teachers planned to work
along the same guidelines, and each syllabus contained elements which suggested high
degrees of teacher autonomy, for instance in defining how they assessed and graded their
students’ work and how they promoted learner and classroom autonomy.
Studies 3, 4 and 5 were devoted to exploring the assessment system practiced in the
English-Hungarian translation studies BA programme at the University of Pécs. Study 3
focused on the rating scale, or rather error list used for the assessment of exam translations,
and also for diagnostic purposes during the term. The analyses revealed serious shortcomings, most
importantly very low inter-rater reliability and rater consistency in using the tool. The teacher
interviews on the scale identified the same problems.
Study 4 reported preliminary research on the quality of student translations by
examining lexical characteristics and readability indices. The research data on text quality was
provided by Lexical Profiles of translations (Cobb, 2015) and readability indices of the
translated texts calculated using Coh-Metrix (Graesser et al., 2004). These provided
preliminary information for expert-raters who participated in the assessment of the same
translations.
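Coh-Metrix reports a large battery of such indices; as a hedged illustration of what a readability index measures, the sketch below implements two classic formulas, Flesch Reading Ease and Flesch-Kincaid Grade Level (Kincaid et al., 1975), from raw sentence, word and syllable counts. The syllable counter is a crude heuristic, and the snippet is not part of the study's instrumentation.

import re

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic; real tools use dictionaries or better rules."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_indices(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    asl = n_words / sentences        # average sentence length in words
    asw = n_syllables / n_words      # average syllables per word
    reading_ease = 206.835 - 1.015 * asl - 84.6 * asw
    grade_level = 0.39 * asl + 11.8 * asw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_indices("The translated texts showed features of specialized prose. "
                             "Longer sentences and rarer, longer words lower the reading ease score.")
print(f"Flesch Reading Ease: {ease:.1f}, Flesch-Kincaid Grade Level: {grade:.1f}")

Lower Reading Ease scores and higher grade levels both indicate more demanding texts, which is the direction the specialized translated texts in the study were expected to move in compared to general English prose.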
Study 5 presented the steps of developing and validating a new assessment scale
named PIER, the revised version of Kockaert and Segers’ (2017) PIE. The study aimed to find
out how many preselected items were necessary to create a norm-referenced, sufficiently
discriminating, and reliable assessment tool for translations. First, expert-raters were invited
to take part in the assessment process by pre-selecting items in the original text. Then, expert-
raters used the list of items identified to assess the student translations Study 4 analysed, in
order to see how the test items worked in terms of discrimination, inter-rater reliability and
rater consistency. For comparison, the same raters were asked to assess the same translations
using the UP scale of major and minor errors. As a finishing touch, the raters were invited to
evaluate both scales in a short questionnaire with open-ended questions and to identify the
advantages and disadvantages of both tools. The findings show that although PIER provided
much higher inter-rater reliability and rater consistency than the UP scale, it still has
shortcomings which need to be addressed.
Study 5 also added an unexpected finding to Study 1 on motivation, which proved to
be an important outcome of the research: the tutors teaching translation courses in the
programme are neither well-motivated nor autonomous, and are not prepared to assess
students’ translations in a reliable way.
The purpose of the thesis was to gain a better understanding of how motivation, autonomy
and assessment are interrelated in translation studies. Given that translation studies as a
linguistic domain had been struggling for recognition for a long time, the challenges were
numerous, and the research questions aimed to find out (1) what motivates young people to
choose translation studies as part of their BA programme and, eventually, to do it as a
profession or a hobby; (2) what they find motivating in their classes; and (3) what they perceive to
be their strengths and weaknesses.
As translation is an autonomous activity by its nature, not everyone is cut out to become a
professional translator, as translators have to be able to make autonomous choices in the course
of their work. Therefore, the research questions examined (1) how autonomy can be taught
and learnt in the field of translation training; (2) how teacher and learner autonomy are related
to each other and to motivation; and (3) how autonomous students are in the translation
studies programme.
The product of students’ autonomous activity is the translated text, and the translator’s
worth on the translation market is measured by the quality of their work. Therefore, evidence
was sought to find out (1) how the quality of translations may be measured; (2) what criteria
should be taken into consideration when assessors make decisions about the quality of
translations; (3) what the best ways are to make such decisions, and (4) what tools can be used
to help make valid and reliable decisions.
All these questions were examined from a broader perspective, in the framework of
translator training in Hungary in general, and with a special focus on the Translation Studies
BA specialization programme at the University of Pécs. The following sections give a short
summary of the research along the research questions and the corresponding findings.
When I designed my research, my thinking was guided by major studies and overviews
published in the past few decades in the field of translation studies and educational
assessment (Adolphs et al., 2018; Baker, 2011; Chesterman, 2005; Dickinson, 2002; Dörnyei,
1994; 1998; 2007; 2014; 2020; Dörnyei & Ottó, 1998; Dörnyei & Ryan, 2015; Dörnyei &
Ushioda, 2009; Gardner, 2010; Józsa et al., 2014; Pym, 1992, 2003; Risku et al., 2010)
providing the framework for my hypotheses, assumptions, and, in the end, for making
comparisons.
The first part of the thesis examined student motivation in TS classes and the emerging
picture was overall encouraging. The findings revealed that students choose translation as a
specialization for different reasons ranging from “I want to do it as a profession” to “it is not
as boring as other classes”, indicating the two extremes along a continuum of motives.
Although all students were English majors, for many of them English was not the first foreign
language they had previously studied. Some of them started with German as a first foreign
language, and took up English only later. There were students who had studied other foreign
languages besides English, typically French, Italian or Spanish, which is an important factor when one
wants to become a professional translator. These findings showed participants’ favourable
attitudes and motivation towards learning languages in general and English in particular. As a
prerequisite for enrollment, students were required to pass the C1 level proficiency exam at
the end of their first academic year; however, some of them were unsuccessful and remained at B2 level
of proficiency. Since they did not have much experience in translating, they had no idea what
to expect, so their motivation to study translation was above all instrumental: they aimed to
make translating a breadwinning profession. However, both qualitative and quantitative data
indicated the importance of intrinsic motivation, including the teachers’ knowledge and
personality, the tasks and assignments, regular feedback, and fair assessment of students’
work. There were students who chose translation because they liked doing it, were
interested in the activity itself, or saw it as a future hobby; these were the most important
intrinsic motives identified. The findings also revealed that certain factors may become the
source of motivation and demotivation for students at the same time. The best example of
this dual character is the texts students get to translate: if the texts are interesting, they are
motivating; however, if they are boring or too difficult to comprehend, they tend to have a
demotivating effect. According to the analyzed data, the majority of the participants wanted to
use their translation skills after graduation either as a translator or in a job where translation is
part of the professional routine.
Autonomy in language teaching and learning was discussed in terms of the interrelationship of three
main foci: learner autonomy, teacher autonomy, and classroom autonomy. The three forms
tend to be examined in research as a dialogue between the learner and the teacher in
classroom contexts (Finch, 2001; Hoyle & John, 1995; T. Lamb, 2017; T. Lamb & Murray,
2018; Little, 1995). Translation, as a highly complex problem-solving activity (Baker, 2006;
Klaudy, 1997a; Venuti, 2013), can be developed best in an autonomous learning environment,
where learners are encouraged to apply strategies which help them arrive at the best
solutions in all problem-solving processes (Baer & Koby, 2003).
To find out how autonomous BA translation students were, and how the BA
specialization programme supported learner autonomy, I used a student questionnaire.
Findings revealed that the participants were very much teacher-dependent at the beginning of
their translation studies, and progressed on the path of becoming autonomous not only in their
beliefs, but also in their practices. Students were at different stages of autonomy, but many of
them learnt to make decisions and choices on their own, to set their own goals and
work independently in order to achieve them. However, regular feedback was important for
all of them, suggesting that they were not confident enough concerning their assignments and
their performances on them.
The translation studies programme offered the students ample opportunities to learn to
become autonomous. When doing home assignments, they had to make decisions on what
tools, sources and methods to use, and they had to solve the arising translation problems on
their own. They were required to hand in their assignments by a deadline, so they were
offered the opportunity to learn time management, which is an important factor in the
translation business. Students were allowed to make their voices heard in classes, where they
discussed their work under their teachers’ guidance; they had multiple opportunities to
express their opinions and to listen to what their peers had to say. Most of the classroom
activities were designed to develop their critical thinking and to promote their autonomous
behavior.
To gain insights into how the course syllabi integrated and supported learner
autonomy, I examined twelve course syllabi from the academic year of 2016/2017, including
both lecture and seminar syllabi. Due to their nature, the seminar syllabi offered more
learner autonomy by inviting the students to present their “translation products” and to
discuss them with their peers. They could comment on each other’s work, compare different
choices and suggest alternative solutions or improvements. In some of the courses they were
required to do proofreading, which made them more aware of the problems translators face
while working; proofreading as an activity definitely fostered their autonomous learning. The
practice described in the syllabi theoretically offered a motivating, student-centered learning
environment, which would not be possible without autonomous teachers. But even if the
syllabi suggested a degree of teacher autonomy, they also revealed that the teachers, with a
few refreshing exceptions, practically followed the same routine, which might have been their
autonomous decision but resulted in uniform activities that, according to the findings, the
students were satisfied with.
Both the findings of the questionnaire and the analysis of the syllabi seem to suggest
that motivation and autonomy were interrelated in the study. Having the opportunity to play
an active role in classroom discussions of individually created translations, to express their
opinions on each other’s work, to make their own choices, to set their own pace when doing their
assignments, and to learn from their own and others’ mistakes made the participants not
only more autonomous, but also more motivated. The teacher interviews revealed that the
blended routine in doing assignments (starting work on them in class with teacher
guidance, completing them at home on their own, and then discussing the translations in class)
also resulted in a higher degree of motivation on the students’ part and fostered their
autonomous behaviour, implying that student and teacher autonomy were interrelated.
Assessment in education, as reflected in the research findings discussed in this thesis, has a
significant effect on motivation, especially if it involves giving appropriate feedback.
Research has also shown that motivation goes hand in hand with autonomy. When the three
meet in the vast arena of SLA and language pedagogy, a complex system is created, in which
the components are interdependent and interact with each other in a variety of different ways
(Larsen-Freeman, 1997; 2002).
As has been discussed in section 5.1, translation as a multi-dimensional and complex
phenomenon is difficult to assess (Angelelli, 2009; Eyckmans, Anckaert, & Segers, 2009;
Williams, 2009), even with the help of a sophisticated assessment tool. Its complexity is
shown by the different approaches scholars have taken to translation assessment, including
holistic (Waddington, 2001), analytical (Orlando, 2011), and other, more experimental
methods, such as item calibration (Eyckmans & Anckaert, 2017) or item pre-selection
(Kockaert & Segers, 2017), relying on grids, descriptors and norms. No previous
study was found on the ideal assessment tool. It appears we have to agree with House’s (2015,
p. 64) conclusion: “It seems unlikely that translation assessment can ever be objectified in the
manner of natural science”, or even in the manner of Bachman and Palmer’s (1982) classic
model of communicative competence, which is often taken as an example for analytical
scales.
The studies in Chapters 5 and 6 in Part III also proved that there is no royal road. The
assessment scale used in the translation studies BA programme at the University of Pécs (UP
scale) was compared to Kockaert and Segers’ Preselected Items Evaluation method (PIE,
2017). The findings showed that although both types of assessment had advantages, they also
had caveats, which were difficult or impossible to counterbalance. Although the initial aim of
the research discussed in section 6.2 was to develop a new scale for assessing translations
through revising PIE (Kockaert & Segers, 2017), the findings turned out to be controversial.
Although the adapted scale of PIER (PIE Revised) offered more and longer chunks in the pre-
selected items list than the original one did, and it comprised more acceptable options for each
item, it still left significant parts in the translated texts uncovered, and as a result, unchecked.
Although pre-selection was done by expert raters, they could not think of every possible
option, meaning that good but unlisted translations remained unrewarded or, in extreme
cases, were treated as errors. It also turned out that raters found it difficult to keep track of too many
pre-selected items with multiple acceptable options.
As for the UP scale, the questionnaire filled in by the expert-raters subsequent to the
assessment process revealed all the disadvantages and shortcomings described by raters in
section 5.4.5. Focus on errors may result in learner (as well as rater) demotivation. Although
the errors were classified and discussed at two levels as major and minor errors, the
distinctions were difficult to apply during the assessment process; therefore, the raters often
relied on their subjective judgement. Another shortcoming of the UP scale is that it does not
specify how to treat recurring errors, which are quite common in student translations.
Although three of the four expert raters identified PIER as a more user-friendly and
fair method, and the reliability data indicated encouraging findings, its shortcomings are also
clear. The statistical analysis of the data showed good inter-rater reliability of the tool;
however, in its present form it is not quite as valid and reliable as desirable for assessing
translations.
When I designed my thesis, it was not possible to foresee all the possible limitations which
might hinder the research. Thus, the limitations of the present thesis are manifold. First, due
to the small sample sizes, the findings cannot be generalized beyond the context of the present
research project. Working with a convenience sample has its advantages, but the number of
participating translation students was lower than expected. The limited willingness of both the students
and their teachers to participate in the research caused another problem, especially after
the Covid-19 restrictions were announced in the spring of 2020. What was intended to be a
mixed-methods study on a grander scale turned out to be a case study in the end, as the
research context was confined to the Translation Studies BA programme at the University of
Pécs with a limited number of BA students who were available and ready to cooperate.
Nevertheless, by using multiple methods and various related perspectives, the study hopefully
provides sufficient details to claim a degree of transferability of the results in the three areas it
examined: motivation, autonomy and assessment in Translation Studies BA classes and in
becoming a translator in general.
Another group of problems, emerging in association with the labor-intensive parts of
the empirical studies, especially in the field of translation assessment, concerned the
reluctance of teacher-raters to participate in the study, which at more than one point threatened
the feasibility of the plans. Because of their unavailability, partly for reasons beyond anyone's
control, I had to give up my plans for rater training and had to modify the procedures to
meet them halfway: raters received the task descriptions and the instructions in an email, but
there was no opportunity to hold an in-person workshop and think-aloud sessions, which could
have cast further light on the findings. Another limitation concerns the fact that the teachers
of the TS programme I interviewed in the first round were not willing to participate in the
study on assessment. Therefore, new raters, who were experts in assessing translations, had to be
recruited. These limitations were partly related to the Covid-19 pandemic. Still, the
findings on PIER, the new assessment scale, turned out to be promising. The results showed
that a larger section of the text can be covered with more careful item selection and by being
more attentive when providing translation options for the items. Testing the scale on larger
samples and providing proper rater training would also improve the reliability of the method.
The research findings of the present thesis focusing on motivation, autonomy and assessment
in the field of translation studies have pedagogical implications for syllabus design and the
implementation of the BA in TS programme that may benefit both teachers and experts in
charge of the programme. The findings underscore the role and importance of motivation, a
key factor in individual differences research. Overall, students were found to be motivated,
but maintaining their motivation may pose further challenges, especially if the learners lack
feedback and ways of assessment demotivate them over time. Further research is needed to
find out more about classroom practice and the relationship between teacher and student
motivation. It would be necessary to find out more about the reasons why the teachers were
not willing to participate in the study on how the assessment scales worked, as well as their
perceptions of what role a more valid and reliable tool could play in assessing their students’
progress in the TS courses. Classroom observations, including self-observations, could reveal
important details about the teaching and learning processes applied, as well as good practice.
It would be useful for teachers to work as a team and compare notes as they teach the courses,
give students feedback and assess their translations regularly. Students should be involved in
every step towards developing assessment scales for their study programme. In addition to
these, students should also be asked about how helpful they find different types of feedback
and which type of assessment scale they find conducive to their own development.
As for learner autonomy in translation, the ultimate aim of TS is to help students
become autonomous in their studies and as translators when they graduate. The findings
indicated that students tended to find learner autonomy motivating in the learning process;
therefore, it is important to emphasize self-reliance and autonomy in the programme and, as a
next step, to observe what stages students go through and what strategies they apply. As less
emphasis was placed on teacher autonomy, this area should also be included in further studies
to find out how autonomous teachers feel and how much guidance they need to improve the
TS programme.
The studies found that feedback and assessment in the courses were not uniform,
teachers varied in their practices, and the frequency and type of feedback or grades students
received over a semester depended on their teacher. There seemed to be a need for more
frequent and more systematic feedback. Further research is necessary into feedback and
assessment practices applied in the courses, which, if done in a fair and responsible way, will
help students develop in what they do and encourage them to achieve more. Teacher
awareness should be raised about the interrelatedness of these factors, and the fact that they
have to be treated differently in the field of translation studies than in other courses students
take in the BA in English Studies programme.
As the studies in the present thesis involved a limited number of participants, further
research should involve more students, teachers, and expert raters at other universities as well
to find out if the challenges and the findings are similar in other contexts. Finally, ongoing
rater training would reveal how teachers can become autonomous in their assessment
processes. By including think-aloud protocols, further research would definitely bring
improvements in both the consistency of the assessment instrument and inter-rater reliability.
References
423/2012. (XII.29.) Korm. rendelet a felsőoktatási felvételi eljárásról, Pub. L. No. 423/2012. (XII. 29.)
(2012). Hungary: Net jogtár.
5/2020. (I.31.) Korm. rendelet a Nemzeti alaptanterv kiadásáról, bevezetéséről és alkalmazásáról szóló
110/2012. (VI.4.) Korm. rendelet módosításáról, Pub. L. No. 5/2020. (I.31.), 296 (2020).
Hungary: Magyar Közlöny January 31, 2020.
Adolphs, S., Clark, L., Dörnyei, Z., Glover, T., Henry, A., Muir, C., … Valstar, M. (2018). Digital
innovations in L2 motivation: Harnessing the power of the ideal L2 self. System, 78, 173–185.
Ahmadi, R., & Hasani, M. (2018). Capturing student voice on TEFL syllabus design: Agenticity of
pedagogical dialogue negotiation. Cogent Education, (5), 1–17.
https://doi.org/10.1080/2331186X.2018.1522780
Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation.
Cambridge Language Teaching Library. https://doi.org/10.5897/ERR12.035
Alderson, J. Charles, Figueras, N., & Kuijper, H. (2006). Analysing tests of reading and listening in
relation to the Common European Framework of Reference : The experience of the Dutch CEFR
Construct Project. Language Assessment Quarterly, 3(1), 3–30.
Allen, J. P. B. (1984). General purpose language teaching: a variable focus approach. In C. J. Brumfit
(Ed.), General English Syllabus Design (pp. 61–74). Oxford: Pergamon.
Allwright, D. (1990). Autonomy in language pedagogy: CRILE (No. 6). University of Lancaster.
Angelelli, C. V. (2009). Using a rubric to assess translation ability: Defining the construct. In C. V.
Angelelli & H. E. Jacobson (Eds.), Testing and assessment in translation and interpreting studies
(pp. 13–48). Amsterdam / Philadelphia: John Benjamins.
Austermühl, F. (2014). Electronic tools for translators. Translation practices explained. London and
New York: Routledge.
Bachman, L. F., & Palmer, A. S. (1982). The construct validation of some components of
communicative proficiency. TESOL Quarterly, 16(4), 409–465.
Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Oxford: Oxford University
Press.
Baer, B. J., & Koby, G. S. (Eds.). (2003). Beyond the ivory tower. Rethinking translation pedagogy.
Amsterdam / Philadelphia: John Benjamins Publishing Company.
Baker, M. (Ed.). (2001). Routledge Encyclopedia of Translation Studies (1st ed.). London and New
York: Routledge.
Baker, M. (2011). In other words. A coursebook on translation (2nd ed.). London and New York:
Routledge.
Bell, R. T. (1991). Translation and translating: theory and practice. London and New York:
Longman.
Benson, P. (2001). Teaching and researching autonomy in language learning. London: Longman.
Benson, P. (2006). Autonomy in language teaching and learning. Language Teaching, 40, 21–40.
https://doi.org/10.1017/SO261444806003958
Benson, P. (2007). Autonomy and its role in learning. In J. Cumming & C. Davison (Eds.),
International handbook of English language teaching I (pp. 733–746). New York: Springer.
Benson, P. (2008a). Introduction. In T. Lamb & H. Reinders (Eds.), Learner and teacher autonomy.
Concepts, realities and responses (pp. 15–32). Amsterdam / Philadelphia: John Benjamins B.V.
Benson, P. (2008b). Teachers’ and learners’ perspectives on autonomy. In T. Lamb & H. Reinders
(Eds.), Learner and teacher autonomy. Concepts, realities and responses (pp. 15–32).
Amsterdam / Philadelphia: John Benjamins B.V.
Benson, P. (2009). Making sense of autonomy in language learning. In R. Pemberton, S. Toogood, &
A. Barfield (Eds.), Maintaining control: Autonomy and language learning (pp. 13–26). Hong
Kong: Hong Kong University Press.
Benson, P. (2011). What’s new in autonomy? The Language Teacher Online, 35(4), 15–18. Retrieved
from jalt-publications.org/tlt
Biler, A. (2019). The reliability of readability tools in L2 reading. In S. Papageorgiou & K. M. Bailey
(Eds.), Global perspectives on language assessment: Research, theory, and practice (pp. 108–
121). New York: Routledge.
Blair, J., Czaja, R. F., & Blair, E. A. (2014). Designing surveys. A guide to decisions and procedures
(3rd ed.). Thousand Oaks: SAGE Publications.
Borg, S., & Al-Busaidi, S. S. (2012). Learner autonomy: English language teachers’ beliefs and
practices. London: British Council.
Breen, M., & Mann, S. (1997). Shooting arrows at the sun: Perspectives on a pedagogy for autonomy.
In P. Benson & P. Voller (Eds.), Autonomy and independence in language learning (pp. 132–
149). London and New York: Longman.
Breen, M. P. (1984). Process syllabuses for the language classroom. In Christopher J. Brumfit (Ed.),
General English Syllabus Design (pp. 47–60). Oxford: Pergamon Press.
Breen, M. P. (1987b). Contemporary paradigms in syllabus design. Part II. Language Teaching, 20(3),
157–174.
Brown, J. D. (2001). Using surveys in language programs. Cambridge, UK: Cambridge University
Press.
Brumfit, Christopher J. (Ed.). (1984). General English syllabus design. Oxford: Pergamon Press.
Candlin, C. N. (1984). Syllabus design as a critical process. In Christopher J. Brumfit (Ed.), General
English Syllabus Design (pp. 29–46). Oxford: Pergamon Press.
Carver, R. P. (1994). Percentage of unknown words in text as a function of the relative difficulty of the
text: Implications for instruction. Journal of Reading Behaviour, 26(4), 413–437.
Castello, E. (2008). Text complexity and reading comprehension tests. Bern: Peter Lang.
Christison, M., & Murray, D. E. (2014). What English language teachers need to know Volume III:
Designing curriculum. New York and London: Routledge.
Cohen, A. (1994). Assessing language ability in the classroom (2nd ed.). Boston: Heinle & Heinle
Publishers.
Common European Framework of Reference for Languages: Learning, teaching, assessment. (1996).
Conde, T. (2013). Translation versus language errors in translation evaluation. In D. Tsagari & R. van
Deemeter (Eds.), Assessment issues in language translation and interpreting (pp. 97–112).
Frankfurt am Main: Peter Lang GmbH.
Cooper, C. R. (1977). Holistic evaluation in writing. In C. R. Cooper & L. Odell (Eds.), Evaluating
writing: Describing, measuring, judging (pp. 3–22). Urbana: National Council of Teachers of
English.
Coxhead, A. (2000). A New Academic Word List. TESOL Quarterly, 34(2), 213–238.
Creswell, J. W. (2003). Research design. Qualitative, quantitative and mixed methods approaches.
Thousand Oaks: Sage Publications.
Crocker, L., & Algina, J. (2006). Introduction to classical and modern test theory. Mason, Ohio:
Cengage Learning.
Crossley, S. A., Allen, D. B., & McNamara, D. S. (2011). Text readability and intuitive simplification:
A comparison of readability formulas. Reading in a Foreign Language, 23(1), 84–101.
Crossley, S. A., Greenfield, J., & McNamara, D. S. (2008). Assessing text readability using
cognitively based indices. TESOL Quarterly, 42(3), 475–493.
Csizér, K., & Kormos, J. (2009). Learning experiences, selves, and motivated language behaviour: A
comparative analysis of structural models for Hungarian secondary and university learners of
English. In Zoltán Dörnyei & E. Ushioda (Eds.), Motivation, language identity and the L2 self
(pp. 98–119). Bristol: Multilingual Matters.
Cullinan, M. (2016). Critical review of ESL curriculum: Practical application to the UAE context.
International Journal of Curriculum and Instruction, 8(1), 54–68.
Dastyar, V. (2019). Dictionary of education and assessment in translation and interpreting studies
(TIS). Newcastle upon Tyne: Cambridge Scholars Publishing.
Davies, A., Brown, A., Elder, C., Hill, K., Lumley, T., & McNamara, T. (1999). Dictionary of
language testing. Cambridge: Cambridge University Press.
de Vaus, D. (2014). Surveys in social research (6th ed.). New York: Routledge.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behaviour.
New York: Plenum Press.
Denscombe, M. (2014). The good research guide: for small-scale social projects (5th ed.).
Maindenhead: Open University Press.
Dickinson, L. (1992). Learner autonomy 2: Learner training for language learning. Dublin:
Authentic.
Doró, K. (2010). Miért lettem angolos? Elsőéves szegedi egyetemisták szakválasztása és nyelvi
felkészültsége [Why did I choose English studies? Study choice and language preparedness of first-
year undergraduates at the University of Szeged]. In A tudomány nyelve, a nyelv tudománya :
alkalmazott nyelvészeti kutatások a magyar nyelv évében (pp. 580–588). Székesfehérvár:
MANYE Eszterházi Károly Főiskola.
Doró, K. (2011). English language proficiency and prediction of academic success of first-year
students of English. In M. Lehmann, R. Lugossy, & J. Horváth (Eds.), UPRT 2010 (pp. 171–
185). Pécs: Lingua Franca Csoport.
Dörnyei, Z. (2020). Innovations and challenges in language learning motivation. London: Routledge.
Dörnyei, Z., & Csizér, K. (2002). Some dynamics of language attitudes and motivation: Results of a
longitudinal nationwide study. Applied Linguistics, 23(4), 421–462.
Dörnyei, Z., & Ottó, I. (1998). Motivation in action: A process model of L2 motivation. Working
Papers in Applied Linguistics, 4, 43–69.
Dörnyei, Z., & Ushioda, E. (2011). Teaching and researching motivation (2nd ed.). Harlow: Pearson
Education Limited.
Dörnyei, Z. (1994). Motivation and motivating in the foreign language classroom. The Modern
Language Journal, 78(3), 273–284.
Dörnyei, Z. (1998). Motivation in second and foreign language learning. Language Teaching, 31(3),
117–135. https://doi.org/10.1017/S026144480001315X
Dörnyei, Z. (2003b). Questionnaires in second language research: Construction, administration, and
processing. Mahwah, NJ: Lawrence Erlbaum Associates.
Dörnyei, Z. (2007b). Research methods in applied linguistics. Oxford: Oxford University Press.
Dörnyei, Z. (2010a). Questionnaires in second language research (2nd ed.). New Jersey: Lawrence
Erlbaum Associates.
Dörnyei, Z. (2010b). The relationship between language aptitude and language learning motivation:
Individual differences from a dynamic systems perspective. In E. Macaro (Ed.), Continuum
companion to second language acquisition (pp. 247–267). London: Continuum.
Dörnyei, Z., & Csizér, K. (1998). Ten commandment for motivating language students: Results of an
empirical study. Language Teaching Research, 2(3), 203–229.
Dörnyei, Z., Csizér, K., & Németh, N. (2006). Motivation, language attitudes and globalisation: A
Hungarian perspective. Clevedon: Multilingual Matters Ltd.
Dörnyei, Z., Muir, C., & Ibrahim, Z. (2014). Directed Motivational Currents: Energising language
learning through creating intense motivational pathways. In D. Lasagabaster, A. Doiz, & J. M.
Sierra (Eds.), Motivation and foreign language learning: From theory to practice (pp. 9–29).
Amsterdam: John Benjamins.
Dörnyei, Z., & Ryan, S. (2015). The psychology of the language learner revisited. London and New
York: Routledge.
Dörnyei, Z., & Skehan, P. (2003). Individual differences in second language learning. In C. J. Doughty
& M. H. Long (Eds.), The handbook of second language acquisition (pp. 589–630). Oxford:
Blackwell.
Dörnyei, Z., & Ushioda, E. (Eds.). (2009). Motivation, language identity and the L2 self. Bristol:
Multilingual Matters.
Dróth, J. (2017). Kutatási kérdések a fordítások értékelése témakörben. In M. Kóbor & Z. Csikai
(Eds.), Iránytű az egyetemi fordítóképzéshez (pp. 289–302). Pécs: Kontraszt.
Édes, C. (2008). “Teachers know best”: Autonomous beliefs and behaviours of English majors. A case
study of three first-year students at Eötvös University. In J. Horváth, R. Lugossy, & M. Nikolov
(Eds.), UPRT 2008: Empirical studies in English applied linguistics (pp. 43–58). Pécs: Lingua
Franca Csoport.
EMT. (2009). Competences for professional translators, experts in multilingual and multimedia
communication.
Esch, E. (2009). Crash or Clash? Autonomy 10 years on. In R. Pemberton, S. Toogood, & A. Barfield
(Eds.), Maintaining control: Autonomy and language learning (pp. 27–44). Hong Kong: Hong
Kong University Press.
Espinosa, R. (2015). Fostering autonomy through syllabus design: A step-by-step guide for success.
HOW, 22(2), 114–134.
Eszenyi, R. (2016). What makes a professional translator. The profile of the modern translator. In I.
Horváth (Ed.), The modern translator and interpreter (pp. 17–28). Budapest: Eötvös University
Press.
Council of Europe. (2001). Common European framework of reference for languages: Learning, teaching,
assessment. Cambridge, UK: Press Syndicate of the University of Cambridge.
Everhard, C. J. (2016). What is this thing called autonomy? Finding a definition and a model. In
Selected papers of the 21st International Symposium on Theoretical and Applied Linguistics
(ISTAL 21) (pp. 548–568). Retrieved from
https://ejournals.lib.auth.gr/thal/issue/view/832/showToc
Eyckmans, J., & Anckaert, P. (2017). Item-based assessment of translation competence: Chimera of
objectivity versus prospect of reliable measurement. Linguistica Antverpiensia, New Series:
Themes in Translation Studies, (16), 40–56.
Eyckmans, J., Anckaert, P., & Segers, W. (2009). The perks of norm-referenced translation evaluation.
In C. V. Angelelli & H. E. Jacobson (Eds.), Testing and assessment in translation and
interpreting studies (pp. 73–94). Amsterdam / Philadelphia: John Benjamins B.V.
Eyckmans, J., Anckaert, P., & Segers, W. (2016a). Translation and interpretation skills. In D. Tsagari
& J. Banerjee (Eds.), Handbook of second language assessment (pp. 219–236). Boston / Berlin:
De Gruyter.
Eyckmans, J., Anckaert, P., & Segers, W. (2016b). Translation assessment methodology and prospects
of European collaboration. In D. Tsagari & I. Csépes (Eds.), Handbook of second language
assessment (pp. 171–184). Berlin: De Gruyter Mouton.
Eyckmans, J., Segers, W., & Anckaert, P. (2012). Translation assessment methodology and the
prospects of European collaboration. In Dina Tsagari & I. Csépes (Eds.), Collaboration in
language testing and assessment (pp. 171–184). Frankfurt am Main: Peter Lang.
Fazekas, N., & Sárosi-Márdirosz, K. (2015). Born or made? An overview of the social status and
professional training of Hungarian interpreters in Romania. Acta Universitatis Sapientiae,
Philologica, 7(3), 139–156. https://doi.org/10.1515/ausp-2015-0060
felvi.hu. (n.d.).
Finch, A. (2001). Autonomy: Where are we? Where are we going? Presentation at the JALT CUE
Conference on Autonomy. Retrieved May 2, 2018, from
http://www.finchpark.com/arts/Autonomy.pdf
Firth, J. R. (1957). A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis, 1–32.
Flanagan, M. (2016). Cause for concern? Attitudes towards translation crowdsourcing in professional
translators’ blogs. The Journal of Specialized Translation, (25), 149–173.
Flutter, J. (2006). “This place could help you learn”: student participation in creating better school
environments. Educational Review, 58(2), 183–193.
Frary, R. B. (1996). Hints for designing effective questionnaires. Practical Assessment, Research &
Evaluation. A Peer-Reviewed Electronic Journal, 5(3), 3. Retrieved from
http://ericae.net/pare/getvn.asp?v=58&n=3
Fulcher, G. (2014). Testing second language speaking (2nd ed.). New York: Routledge.
Gao, X. (2004). A critical review of questionnaire use in learner strategy research. Prospect, 19(3–14).
Gao, X., & Lamb, T. (2011). Exploring links between identity, motivation and autonomy. In G.
Murray, X. Gao, & T. Lamb (Eds.), Identity, motivation and autonomy in language learning (pp.
1–8). Bristol: Multilingual Matters.
Garant, M. (2009). A case for holistic translation assessment. AFinLA-e Soveltavan Kielitieteen
Tutkimuksia, (1), 5–17.
Gardner, R. C. (1985). Social psychology and second language learning: The role of attitudes and
motivation. London: Arnold.
Gardner, R. C. (2001). Language learning motivation: The student, the teacher and the researcher.
Texas Papers in Foreign Language Education, 6, 1–18.
Gardner, R. C. (2010). Motivation and second language acquisition: The socio-educational model.
New York: Peter Lang.
Gardner, R. C., & MacIntyre, P. D. (1993). A student’s contribution to second language learning. Part
II: Affective variables. Language Teaching, 26(1), 1–11.
https://doi.org/10.1017/S0261444800000045
Gardner, R. C., Masgoret, A. M., Tennant, J., & Mihic, L. (2004). Integrative motivation: Changes
during a year-long intermediate language course. Language Learning, 54(1), 1–34.
Gile, D. (2009). Basic concepts and models for interpreter and translator training. Amsterdam /
Philadelphia: John Benjamins B.V.
Gile, D. (2010). Why Translation Studies matters: A pragmatist’s viewpoint. In D. Gile, G. Hansen, &
N. K. Pokorn (Eds.), Why translation studies matters (pp. 251–262). Amsterdam / Philadelphia:
John Benjamins Publishing Company.
Gillham, B. (2000a). Case study research methods. London and New York: Continuum.
Gillham, B. (2005). Research interviewing: The range of techniques. Maidenhead, Berkshire: Open
University Press.
Gillham, B. (2008a). Observation techniques: Structured to unstructured. London and New York:
Continuum.
Gillham, B. (2008b). Small-scale social survey methods. London and New York: Continuum.
Graesser, A. C., McNamara, D. S., & Kulikowich, J. (2011). Coh-Metrix: Providing multilevel
analyses of text characteristics. Educational Researcher, 40(5), 223–234.
https://doi.org/10.3102/0013189X11413260
Graesser, A. C., McNamara, D. S., Louwerse, M. M., & Cai, Z. (2004). Coh-Metrix: Analysis of text
cohesion and language. Behavior Research Methods, Instruments, & Computers, 36(2), 193–
202.
Griffee, D. T. (2012). An introduction to second language research methods: Design and data.
Berkeley, CA: TESL-EJ Publications.
Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for
coding data. Communication Methods and Measures, 1(1), 77–89.
Heitzmann, J. (2014). The fluctuation of motivation: A longitudinal study of secondary school learners
of English. In J. Horváth & P. Medgyes (Eds.), Studies in honour of Marianne Nikolov (pp. 23–
36). Pécs: Lingua Franca Csoport.
Henning, G. (1987). A guide to language testing: Development, evaluation, research. Newberry House
Publishers.
Henry, A., Davidenko, S., & Dörnyei, Z. (2015). The anatomy of directed motivational currents:
Exploring intense and enduring periods of L2 motivation. Modern Language Journal, 99(2),
329–345.
Henter, S. (2016). How happy are translators with their studies? Current Trends in Translation
Teaching and Learning E, (3), 24–66.
Holec, H. (1981). Autonomy and foreign language learning. Oxford / New York: Pergamon Press.
Holec, H. (2008). Foreword. In T. Lamb & H. Reinders (Eds.), Learner and teacher autonomy.
Concepts, realities and responses (p. 297). Amsterdam / Philadelphia: John Benjamins B.V.
Holliday, A., Hyde, M., & Kullman, J. (2004). Intercultural communication. London and New York:
Routledge.
Horváth, I. (Ed.). (2016). The modern translator and interpreter. Budapest: Eötvös University Press.
Hoyle, E., & John, P. D. (1995). Professional knowledge and professional practice. London: Cassell.
Hönig, H. G. (1998). Positions, power and practice: Functionalist approaches and translation quality
assessment. In C. Schaffner (Ed.), Current Issues in Language & Society (pp. 6–34). Clevedon:
Multilingual Matters.
Józsa, K. (2014). Developing new scales for assessing English and German language mastery
motivation. In J. Horváth & P. Medgyes (Eds.), Studies in honour of Marianne Nikolov (pp. 37–
50). Pécs: Lingua Franca Csoport.
Józsa, K., Wang, J., Barrett, K. C., & Morgan, G. A. (2014). Age and Cultural Differences in Self-
Perceptions of Mastery Motivation and Competence in American, Chinese, and Hungarian
School Age Children. Child Development Research, 1–17. Retrieved from
http://dx.doi.org/10.1155/2014/803061
Kearns, J. (Ed.). (2008). Translator and interpreter training. Issues, methods and debates. London:
Continuum.
Kenny, D. (2014). Lexis and creativity in translation: A corpus based approach. London and New
York: Routledge.
Kincaid, P. J., Fishburne Jr., R. F., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new
readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease
Formula) for Navy enlisted personnel. Millington.
Kiraly, D. C. (1995). Pathways to translation. Pedagogy and process. Kent, Ohio and London: The
Kent State University Press.
Klaudy, K. (2013). A Ménesi úttól az Amerikai úton át a Múzeum körútig... In K. Klaudy (Ed.),
Fordítás és tolmácsolás a harmadik évezred elején. Jubileumi Évkönyv 1973-2013 (pp. 9–15).
Budapest: ELTE Eötvös Kiadó.
Kóbor, M., & Csikai, Z. (Eds.). (2017). Iránytű az egyetemi fordítóképzéshez. A kompetenciafejlesztés
új fókuszai. Pécs: Kontraszt Kiadó.
Kóbor, M., & Lehmann, M. (2018). „Minden szinten szinte minden.” Fordításoktatás alap- és
mesterképzésben, valamint a szakirányú továbbképzés keretei között. In J. Dróth (Ed.),
Gépiesség és kreativitás a fordítási piacon és az oktatás különböző szintjein (pp. 19–34).
Budapest: L’Harmattan Kiadó.
Kockaert, H. J., & Segers, W. (2017). Evaluation of legal translations: PIE method (Preselected Items
Evaluation). The Journal of Specialized Translation, (27), 148–162.
Koizumi, R. (2012). Relationships between text length and lexical diversity measures: Can we use
short texts of less than 100 tokens? Vocabulary Learning and Instruction, 1(1), 60–69.
https://doi.org/10.7820/vli.v01.1.koizumi
Kormos, J., & Csizér, K. (2014). Longitudinal changes in the interaction of motivation and
intercultural contact in a study-abroad context. In J. Horváth & P. Medgyes (Eds.), Studies in
honour of Marianne Nikolov (pp. 9–22). Pécs: Lingua Franca Csoport.
Koskinen, K. (2010). What matters to Translation Studies? On the role of public Translation Studies.
In D. Gile, G. Hansen, & N. K. Pokorn (Eds.), Why translation studies matters (pp. 15–28).
Amsterdam / Philadelphia: John Benjamins Publishing Company.
Krajcsó, Z. (2017). Translator’s competence profiles versus market demand. Babel, 63(4), in print.
Krippendorff, K. (2004). Content analysis. An introduction to its methodology. Thousand Oaks: SAGE
Publications.
Kussmaul, P. (2015). Training the translator. Amsterdam / Philadelphia: John Benjamins Publishing
Company.
La Ganza, W. (2008). Learner autonomy - teacher autonomy: Interrelating and the will to empower. In
T. Lamb & H. Reinders (Eds.), Learner and teacher autonomy. Concepts, realities and responses
(pp. 63–82). Amsterdam / Philadelphia: John Benjamins B.V.
Lafaber, A. (2018). The skills required to achieve quality in institutional translation: The views of EU
and UN translators and reviewers. In F. Prieto Ramos (Ed.), Institutional translation for
international governance (pp. 63–80). London, New York: Bloomsbury.
https://doi.org/10.5281/zenodo.1048188
Lamb, M. (2004). “It depends on the students themselves”: Independent language learning at an
Indonesian state school. Language, Culture and Curriculum, 17(3), 229–245.
https://doi.org/10.1080/07908310408666695
Lamb, M. (2011). Future selves, motivation and autonomy in long-term EFL learning trajectories. In
G. Murray, X. (Andy) Gao, & T. Lamb (Eds.), Identity, motivation and autonomy in language
learning (pp. 177–194). Bristol: Multilingual Matters.
Lamb, T. (2000). Finding a voice - learner autonomy and teacher education in an urban context. In B.
Sinclair, I. McGrath, & T. Lamb (Eds.), Learner autonomy, teacher autonomy: Future directions
(pp. 118–127). Harlow: Longman.
Lamb, T. (2008). Learner autonomy and teacher autonomy: Synthesising an agenda. In T. Lamb & H.
Reinders (Eds.), Learner and teacher autonomy. Concepts, realities and responses (pp. 269–
284). Amsterdam / Philadelphia: John Benjamins B.V.
Lamb, T. (2017). Knowledge about language and learner autonomy. In J. Cenoz & D. Gorter (Eds.),
Language Awareness and Multilingualism (pp. 173–186). Cham, Switzerland: Springer
International Publishing Switzerland.
Lamb, T., & Murray, G. (2018). Space, place and autonomy in language learning: an introduction. In
G. Murray & T. Lamb (Eds.), Space, place and autonomy in language learning (pp. 1–9). New
York: Routledge.
Larsen-Freeman, D. (2002). Language acquisition and language use from a chaos/complexity theory
perspective. In C. Kramsch (Ed.), Language acquisition and language socialization: Ecological
perspectives (pp. 33–46). London: Continuum.
Laufer, B., & Nation, P. (1995). Vocabulary size and use: lexical richness in L2 written production.
Applied Linguistics, 16(3), 307–322.
Lehmann, M. (2014). The lexical demands of readings in English studies. In J. Horváth & P. Medgyes
(Eds.), Studies in honour of Marianne Nikolov (pp. 343–355). Pécs: Lingua Franca Csoport.
Levý, J. (2011). The art of translation. Amsterdam / Philadelphia: John Benjamins B.V.
Lightbown, P. M., & Spada, N. (2013). How languages are learned (4th ed.). Oxford: Oxford
University Press.
Limon, D. (2010). Translators as cultural mediators: Wish or reality? A question for Translation
Studies. In D. Gile, G. Hansen, & N. K. Pokorn (Eds.), Why translation studies matters (pp. 29–
40). Amsterdam / Philadelphia: John Benjamins Publishing Company.
Little, D. (1991). Learner autonomy 1: Definitions, issues and problems. Dublin: Authentic.
Little, D. (1994). Learner autonomy: A theoretical construct and its practical application. Die Neueren
Sprachen, (93), 430–442.
Little, D. (1995). Learning as dialogue: The dependence of learner autonomy on teacher autonomy.
System, 23(2), 175–181. https://doi.org/0346-251X(95)00006-2
Little, D. (2000). Autonomy and autonomous learners. In M. Byram (Ed.), Routledge encyclopedia of
language teaching and learning (pp. 69–72). London: Routledge.
Little, D. (2007). Language learner autonomy: Some fundamental considerations revisited. Innovation
in Language Learning and Teaching, 1(1), 14–29. https://doi.org/1750-1229/07/01 014-16
Little, D. (2009). Language learner autonomy and the European Language Portfolio: Two L2 English
examples. Language Teaching, 42(2), 222–233. https://doi.org/10.1017/SO261444808005636
Little, D., Ridley, J., & Ushioda, E. (Eds.). (2003). Learner autonomy in the foreign language
classroom: teacher, learner, curriculum and assessment. Dublin: Authentik.
Littlewood, W. (1997). Self access: Why do we want it and what can it do? In P. Benson & P. Voller
(Eds.), Autonomy and independence in language learning (pp. 79–92). London and New York: Longman.
Littlewood, W. (1999). Defining and developing autonomy in East Asian Contexts. Applied
Linguistics, 20(1), 71–94.
Lowndes, S. (2005). The e-mail interview. In B. Gillham (Ed.), Research interviewing. The range of
techniques (pp. 107–112). Maidenhead, Berkshire: Open University Press.
Macaro, E. (1997). Target language, collaborative learning and autonomy. Clevedon:
Multilingual Matters.
MacBeath, J. (2012). Future of the teaching profession. Cambridge: Cambridge University Press.
Mackey, A., & Gass, S. M. (2005). Second language research: Methodology and design. London:
Lawrence Erlbaum Associates.
Martínez, R. (2014). A deeper look into metrics of translation quality assessment (TQA): A case
study. Miscelánea: A Journal of English and American Studies, (49), 73–94.
McCarthy, P. M., & Jarvis, S. (2010). MTLD, vocd-D, and HD-D: A validation study of sophisticated
approaches to lexical diversity assessment. Behavior Research Methods, 42(2), 381–392.
https://doi.org/10.3758/BRM.42.2.381
McGrath, I. (2000). Teacher autonomy. In B. Sinclair, I. McGrath, & T. Lamb (Eds.), Learner
autonomy, teacher autonomy: Future directions (pp. 100–110). London: Longman.
McNamara, D. S., Graesser, A. C., McCarthy, P. M., & Cai, Z. (2014). Automated evaluation of text
and discourse with Coh-Metrix. Cambridge: Cambridge University Press.
Medgyes, P., & Nikolov, M. (2014). A country in focus. Research in foreign language education in
Hungary (2006-2012). Language Teaching, 47(4), 504–537.
Moser, C. A., & Kalton, G. (1971). Survey methods in social investigation. London: Heinemann.
Murray, G. (2011). Identity, motivation and autonomy: Stretching our boundaries. In G. Murray, X.
(Andy) Gao, & T. Lamb (Eds.), Identity, motivation and autonomy in language learning (pp.
247–262). Bristol: Multilingual Matters.
Murray, G., Gao, X., & Lamb, T. (Eds.). (2011). Identity, motivation and autonomy in language
learning. Bristol: Multilingual Matters.
Nadstoga, Z. (2008). Translator and interpreter training as part of teacher training at the Institute of
English, Adam Mickiewicz University, Poznań, Poland. In P. W. Krawutschke (Ed.), Translator
and interpreter training and foreign language pedagogy (pp. 109–118). Amsterdam /
Philadelphia: John Benjamins B.V.
Nagy, B. (2007). “To will or not to will”. Exploring advanced EFL learners’ willingness to
communicate in English. University of Pécs, Hungary.
Nation, I. S. P. (2006). How large a vocabulary is needed for reading and listening? The Canadian
Modern Language Review, 63(1), 59–82.
Nida, E. (1981). Translators are born and not made. The Bible Translator, 32(4), 401–405.
Nida, E. (2012). Principles of correspondence. In L. Venuti (Ed.), The Translation Studies Reader (3rd
ed., pp. 141–155). London and New York: Routledge.
Nikolov, M. (1999). ‘Why do you learn English?’ ‘Because the teacher is short.’ A study of Hungarian
children’s foreign language learning motivation. Language Teaching Research, 3(1), 33–56.
Nikolov, M. (2000). ‘We do what we like’: negotiated classroom work with Hungarian children. In
M. P. Breen & A. Littlejohn (Eds.), Classroom decision making: Negotiation and process
syllabuses in practice (pp. 83–93). Cambridge: Cambridge University Press.
Nikolov, M. (2009). The age factor in context. In M. Nikolov (Ed.), The age factor and early language
learning (p. 424). Berlin: De Gruyter Mouton.
Nikolov, M. (2011). Az idegen nyelvek tanulása és a nyelvtudás. Magyar Tudomány, (9), 1048–1057.
Nikolov, M., & Mihaljević Djigunović, J. (2006). Recent research on age, second language
acquisition, and early foreign language learning. Annual Review of Applied Linguistics, 26, 234–
260.
Nunan, D. (1997). Designing and adapting materials to encourage learner autonomy. In P. Benson &
P. Voller (Eds.), Autonomy and independence in language learning (pp. 192–203). Harlow:
Longman.
Nunan, D. (2003). Nine steps to learner autonomy. In D. Nunan (Ed.), Practical English Language
Teaching (pp. 193–204). New York: McGraw Hill.
Nunan, D., & Bailey, K. M. (2009). Exploring second language classroom research. Boston: Heinle,
Cengage Learning.
Oxford, R. L. (2011). Teaching and researching language learning strategies. London and New York:
Routledge.
Pemberton, R., Li, E. S. L., Or, W. W. F., & Pierson, H. D. (Eds.). (1996). Taking control: Autonomy
in language learning. Hong Kong: Hong Kong University Press.
Phelan, M. (2017). Analytical assessment in legal translation: a case study using the American
Translators Association framework. The Journal of Specialised Translation, 27, 189–
210.
Pym, A. (1992). Translation error analysis and the interface with language teaching. In C. Dollerup &
A. Loddegaard (Eds.), The teaching of translation (pp. 279–288). Amsterdam / Philadelphia: John Benjamins.
Pym, A. (2012). Training translators. In K. Malmkjær & K. Windle (Eds.), The Oxford Handbook of
Translation Studies. Oxford: Oxford University Press.
https://doi.org/10.1093/oxfordhb/9780199239306.013.0032
Pym, A. (2013). Research skills in translation studies: What we need training in. Across Languages
and Cultures, 14(1), 1–14. https://doi.org/10.1556/Acr.14.2013.1.1
Pym, A. (2014). Translation studies in Europe - reasons for it, and problems to work on. Target, 26(2),
185–205. https://doi.org/10.1075/target.26.2.02.pym
Pym, A., Grin, F., Sfreddo, C., & Chan, A. L. J. (Eds.). (2011). The status of the translation profession
in the European Union. London: Anthem Press.
Ramos, R. C. (2006). Considerations on the role of teacher autonomy. Colombian Applied Linguistics
Journal, (8), 183–202.
Raya, M. J., Lamb, T., & Vieira, F. (2007). Pedagogy for autonomy in language education in Europe.
Dublin: Authentik.
Reinders, H., & Lázaro, N. (2011). Beliefs, identity and motivation in implementing autonomy: The
teacher’s perspective. In G. Murray, X. (Andy) Gao, & T. Lamb (Eds.), Identity, motivation and
autonomy in language learning (pp. 125–144). Bristol: Multilingual Matters.
Reinders, H., & Lewis, M. (2008). Materials evaluation and teacher autonomy. In T. Lamb & H.
Reinders (Eds.), Learner and teacher autonomy. Concepts, realities and responses (pp. 205–
215). Amsterdam / Philadelphia: John Benjamins B.V.
Reiss, K. (1989). Text types, translation types and translation assessment. In A. Chesterman (Ed.),
Readings in translation theory (pp. 105–115). Helsinki: Oy Finn Lectura Ab.
Risku, H., Dickinson, A., & Pircher, R. (2010). Knowledge in Translation Studies and translation
practice: Intellectual capital in modern society. In D. Gile, G. Hansen, & N. K. Pokorn (Eds.),
Why translation studies matters (pp. 83–96). Amsterdam / Philadelphia: John Benjamins
Publishing Company.
Robin, E. (2016). The translator as reviser. In I. Horváth (Ed.), The modern translator and interpreter
(pp. 45–56). Budapest: Eötvös University Press.
Robinson, D. (1997). Becoming a translator. An accelerated course. London and New York:
Routledge.
Rose, M. G. (2008). Must translation training remain elitist? In P. W. Krawutschke (Ed.), Translator
and interpreter training and foreign language pedagogy (pp. 18–25). Amsterdam / Philadelphia:
John Benjamins B.V.
Rubin, H. J., & Rubin, I. S. (2005). Qualitative interviewing: the art of hearing data (2nd ed.).
Thousand Oaks, CA: SAGE Publication Ltd.
Ryan, R. M., & Deci, E. L. (2017). Self-determination theory: Basic psychological needs in
motivation, development and wellness. London and New York: Guilford.
Sade, L. A. (2011). Emerging selves, language learning and motivation. In G. Murray, X. (Andy) Gao,
& T. Lamb (Eds.), Identity, motivation and autonomy in language learning (pp. 42–56). Bristol:
Multilingual Matters.
Saldaña, J. (2009). The coding manual for qualitative researchers. London: SAGE.
Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability.
Psychological Bulletin, 86(2), 420–428.
Sinclair, B. (2008). Multiple voices: Negotiating pathways towards teacher and learner autonomy. In
T. Lamb & H. Reinders (Eds.), Learner and teacher autonomy. Concepts, realities and responses
(pp. 237–268). Amsterdam / Philadelphia: John Benjamins B.V.
Smith, R., & Erdoğan, S. (2008). Teacher-learner autonomy. In T. Lamb & H. Reinders (Eds.),
Learner and teacher autonomy. Concepts, realities and responses (pp. 83–103). Amsterdam /
Philadelphia: John Benjamins B.V.
Spivak, G. (1992). The politics of translation. In A. Barrett (Ed.), Destabilizing theory: contemporary
feminist debates (pp. 177–200). London and New York: Routledge.
Stern, H. H. (1984). Review and discussion. In C. J. Brumfit (Ed.), General English Syllabus
Design (pp. 5–12). Oxford: Pergamon Press.
Szőcs, K. (2016). Teachers’ and students’ perceptions of language learning autonomy and its
implications in the classroom. University of Pécs.
Tassinari, M. G. (2012). Evaluating learner autonomy: A dynamic model with descriptors. Studies in
Self-Access Learning Journal, 3(1), 24–40.
Thavenius, C. (1999). Teacher autonomy for learner autonomy. In S. Cotterall & D. Crabbe (Eds.),
Learner autonomy in language learning: Defining the field and effecting change (pp. 159–163).
Frankfurt: Peter Lang.
Tóth, Z. (2008). Foreign language anxiety - for beginners only? In R. Lugossy, J. Horváth, & M.
Nikolov (Eds.), UPRT 2008: Empirical studies in English applied linguistics (pp. 225–246).
Pécs: Lingua Franca Csoport.
Ushioda, E. (1996). Learner autonomy 5: The role of motivation. Dublin: Authentik.
Ushioda, E. (2011a). Motivating learners to speak as themselves. In G. Murray, X. (Andy) Gao, & T.
Lamb (Eds.), Identity, motivation and autonomy in language learning (pp. 11–24). Bristol:
Multilingual Matters.
Ushioda, E. (2011b). Why autonomy? Insights from motivation theory and research. Innovation in
Language Learning and Teaching, 5(2), 221–232.
Ushioda, E. (2016). Language learning motivation through a small lens: A research agenda. Language
Teaching, 49(4), 564–577.
Ushioda, E., & Dörnyei, Z. (2012). Motivation. In S. Gass & A. Mackey (Eds.), The Routledge
handbook of second language acquisition (pp. 396–409). New York: Routledge.
Usma Wilches, J. (2007). Teacher autonomy: A critical review of the research and concept beyond
applied linguistics. Íkala, Revista de Lenguaje y Cultura, 12(18), 245–275.
Van Egdom, G.-W., Verplaetse, H., Schrijver, I., Kockaert, H. J., Segers, W., Pauwels, J., …
Bloemen, H. (2019). How to put the translation test to the test? On Preselected Items Evaluation.
In E. Huertas-Barros, S. Vandepitte, & E. Iglesias-Fernández (Eds.), Quality assurance and
assessment in translation and interpreting (pp. 26–56). Hershey, PA: IGI Global.
Van Lier, L. (1996). Interaction in the language curriculum: Awareness, autonomy and authenticity.
London: Longman.
Waddington, C. (2001). Different methods of evaluating student translations: The question of validity.
Meta, 46(2), 312–325.
Wilkinson, D., & Birmingham, P. (2003). Using research instruments: A guide for researchers. New
York: RoutledgeFalmer.
Williams, M. (1994). Motivation in foreign and second language learning: An interactive perspective.
Educational and Child Psychology, 11(2), 77–84.
Williams, M. (2013). A holistic-componential model for assessing translation student performance and
competency. Mutatis Mutandis, 6(2), 419–443.
Wilson, R., & Dewaele, J.-M. (2010). The use of web questionnaires in second language acquisition
and bilingualism research. Second Language Research, 26(1), 103–123.
Wilss, W. (1982). The science of translation: Problems and methods. Tübingen: Narr.
Wu, Z. (2016). Towards understanding interpreter trainees’ (de)motivation: An exploratory study. The
International Journal for Translation and Interpreting, 8(2), 13–25.
Yalden, J. (1984). Syllabus design in general education: Options for ELT. In C. J. Brumfit
(Ed.), General English Syllabus Design (pp. 13–22). Oxford: Pergamon Press.
Young, D. J. (1999). Affect in foreign language and second language learning. Boston, MA: McGraw-Hill.
Appendices
Appendix A: BA students’ motivation in translation classes. Planned interview
questions and transcriptions
Part 3: Course content
1) What expectations did you have concerning this specialization before signing up for
it? Did the content of the course meet these expectations?
2) How many translation classes do you have a week?
3) Are they enough for your development, or would you like to have more?
4) What activities do you do in translation classes?
5) Which of these do you find really useful for developing your translation skills?
6) Which tasks are the least useful (if any)?
7) Are you taught by one or more teachers?
8) How do the classes taught by different teachers differ? (What do the different teachers
teach you?)
9) Are you informed about the course content and course requirements at the beginning of
the term?
10) Are you given the opportunity to negotiate when the course content is discussed?
11) Do you learn about translation methods, strategies and techniques? Can you name
any?
Method: refers to the way a particular translation process is carried out in terms of
the translator’s objective, i.e. a global option that affects the whole text (e.g.
literal, free)
Strategies: procedures used by the translator to solve emerging problems (e.g.
paraphrasing)
Technique: the result of a choice made by the translator (e.g. adaptation, borrowing,
description, using an established equivalent (in the case of idioms), etc.)
12) What tools / instruments do you use in your classes?
13) What tools / instruments do you use when you do your home assignments?
14) What forms of assignments do you like doing?
Transcription of the interview with KV (P1) and MD (P2)
30/04/2017
Duration: 55 minutes
I: Először szeretném megköszönni, hogy vállaltátok a beszélgetést egy ilyen szép tavaszi
délutánon. Megkérnélek benneteket, hogy röviden mutatkozzatok be.
KV: BA szakos másodéves anglisztika hallgató vagyok, szakfordító a specializációm, a sáv
irodalom és kultúra, körülbelül ennyit tudok mondani bemutatkozásként.
DM: Részemről ugyanez, azzal a különbséggel, hogy én alkalmazott nyelvészeti sávon
vagyok.
I: Mit jelent pontosan a sáv?
KV: A szakfordító egy minort vált ki gyakorlatilag, mindenkinek kell másodévtől egy sávot
választani, ami lehet nyelvészet, alkalmazott nyelvészet, amerikanisztika és angol irodalom és
kultúra. Ezen felül kell minort választani, amit ki lehet váltani szakfordítói specializációval,
úgy tudom, ez a hivatalos neve, és mi ezt választottuk. Második évtől van, és két év.
I: Értem. Milyen más specializációból lehetett volna még választani?
DM: Különböző nyelvekből, vagy van még kommunikáció és média…
KV: Igen, gyakorlatilag egy másik szakot lehetett volna felvenni minorként, kommunikáció,
vagy germanisztika, ami van még a bölcsészkaron.
I: No, akkor szaladjunk előre egy kicsit. Beszéljünk arról, hogy mikor és hogyan találkoztatok
először az angol nyelvvel?
MD: Általános iskolában, a negyedik osztályban kezdtük.
I: És végig az angol volt a fő idegen nyelved?
MD: Igen. Másodikként felvettem a németet, de az nem érdekelt annyira, és nem is vettem
annyira komolyan. Nem fektettem rá annyi súlyt, mint az angolra.
KV: Én hasonlóképpen, azzal a különbséggel, hogy én általános iskola elsőben kezdtem az
angolt, teljesen sima osztályban, nem specializációban. Németet az első gimnáziumban
kezdtem, bár általános iskola ötödiktől fel lehetett volna venni második idegen nyelvként, de
annyira rosszul ment az angol, hogy azt mondták, ne kezdjek bele egy második idegen
nyelvbe…
I: Ezt mondták?
KV: Igen. Általánosban nagyon rossz voltam angolból, csak a vége felé lett jobb, nem tudom,
mitől, lehet, hogy a tanárok miatt, lehet, hogy csak ráéreztem, nem tudom, de jobb lett, és
akkor gimnáziumban már egy emelt óraszámú angolos osztályba jelentkeztem, ahol németet
is elkezdtem, de az eléggé elenyésző volt, egy érettségit le tudtam tenni belőle, de eléggé
gyenge voltam.
I: D. mondta, hogy nagykanizsai. Ott is jártál iskolába?
MD: Igen.
I: És te melyik suliba jártál, V.?
KV: A Nevelési Központba jártam gimnáziumba, általánosba pedig a PTE Deák Ferenc
gyakorló iskolájába.
I: A családban volt-e valaki, aki idegen nyelvet beszélt?
MD: Nálunk nem. Mármint nem úgy, hogy munkával kapcsolatban. A szüleim tanultak
iskolában, általánosban, ami kötelező volt nekik.
KV: Nekem a testvéreim szintén az iskolában tanultak, de egyik sem ment tovább vele. A
szülők pedig egyáltalán nem. Picit idősebb korosztályba tartoznak, nem is tudtak idegen
nyelvet tanulni, csak az oroszt, azt meg nem szerették, úgyhogy nem igazán foglalkoztak vele.
I: Akkor nem a családi háttér miatt választottátok a nyelvet, hanem egyéb megfontolásokból.
KV: Nem, egyáltalán nem.
I: Te azt mondtad, németet tanultál még, és a D. is. Nem szeretnétek folytatni?
KV: De szeretném, csak kicsit nehéz a lehetőségem, anyagiak miatt, de én szeretnék tovább
menni, van egy fordítói mesterszak, és ahhoz kellene még egy nyelv.
I: Pécsen nincs sajnos mester szak.
KV: Nincs. Debrecenben van, Miskolcon…
I: Meg Budapesten biztosan, és Szegeden is…
MD: Reménykedünk, hogy mire eljutunk odáig, már itt is lesz.
KV: Reméljük.
I: Akkor mind a ketten elég korán kezdtétek az angoltanulást. Én annak idején gimnáziumban
kezdtem el, és orosz tagozatra jártam, a szüleitekkel egy korosztály vagyok, és kevés
angolórám volt. Kellett nektek különórára járni ahhoz, hogy bejussatok az egyetemre?
MD: Nem. Addig jártam külön angolra, amíg nyelvvizsgára készültem.
I: Milyen nyelvvizsgára?
MD: Középszintűre.
I: ECL, TELC, vagy valami más?
MD: Azt nem tudom.
KV: ECL van nekem is, én a gimi közepe felé csináltam meg, szóval, nem is terveztem akkor
még, hogy ilyen szakra megyek.
I: A Nevkóban elég erős az angol.
KV: Igen, elég erős, csak akkor még más szak volt bennem, az elúszott, szóval, maradt az
angol, és nem, én sem jártam külön órákra.
I: Azt, hogy angol szakosok lesztek, azt mikor döntöttétek el? Erre emlékszel, Dávid?
MD: Hát, nem sokkal azelőtt, hogy jelentkezni kellett.
I: És mi alapján döntöttél? A jegyeid alapján?
MD: Főleg azt néztem, hogy mi az, amiből jó vagyok, meg ami érdekel is, és hát angolból a
jobbik nyelvcsoportban voltam a gimnáziumban, meg érdekelt is. Tervbe van véve, hogy
külföldre megyek majd dolgozni.
I: Ez motiváló tényező volt?
MD: Igen.
I: És mi érdekel, milyen területen szeretnél dolgozni?
MD: Hát… fordítás, esetleg még a tolmácsolás.
I: És te mikor határoztad el, hogy angol szakos leszel?
KV: Hát, egy picit hamarabb talán, de ugyanúgy a gimi utolsó éve körül.
I: És nem motiváltak a tanáraid esetleg?
KV: Hát, a tanáraim annyira nagyon nem, kedvesek voltak, csak nem nagyon láttam benne
először fantáziát. Azt láttam, hogy tanárnak tudnék elmenni vele, egy másik szakkal, ami a
magyar volt, ami érdekelt, de a tanárkodás annyira nem vonzott, ezért később utánanéztem
dolgoknak, és rájöttem, hogy fordító is lehetek, satöbbi. Úgyhogy, az utolsó évben döntöttem.
I: De ez nem jelentette azt, hogy extra erőket kellett bedobni, vagy igen?
KV: Igazából nem. Először is, tagozatos osztályba jártam, nagyon erősen foglalkozott velem
kettő angoltanárom is, az egyik anyanyelvű volt, úgyhogy nem volt ez probléma.
I: Én, amikor elkezdtem az egyetemet, emlékszem, hogy nekem nagy problémát jelentett, hogy
gimiben orosz tagozatra jártam, alacsony angol óraszámmal és a csoporttársaim többsége
pont fordítva: angol tagozatra járt, magas óraszámmal. Nekem nagyon rossz volt. Úgy
éreztem mindig, hogy nekem bizonyítanom kell, többet kell tanulnom, gyengébb vagyok. Ti
hogy emlékeztek erre? Milyenek voltak az első benyomásaitok?
MD: Első benyomásként én azt vettem észre, hogy nagyjából mindenki ugyanazon a szinten
áll, mindenkinek megvannak az erősségei és a gyengeségei. Nem a nyelvvel van a probléma,
hanem a tanulnivalóval. Hogy angolul kell dolgokat tanulni.
I: És ez volt az első alkalom, hogy minden angolul kellett.
MD: Igen. Az fura volt. Ugye, gimnáziumban csak magát a nyelvet tanultuk és elkezdeni az
irodalmat és a történelmet angolul tanulni az kicsit erős kezdés volt.
I: Akkor mind a ketten úgy érkeztetek, hogy volt nyelvvizsgátok. Középfokú B2. És te hogy
voltál a többiekkel? Hogy helyezted el magad?
KV: Hát, gimnáziumban jobb dolgom volt, úgymond, mert ott eléggé az élen álltam.
I: Szorgalmas voltál?
KV: Nem is annyira szorgalmas, inkább jó erős iskola volt, illetve, volt nyelvérzékem,
mindenkinek, aki anglisztikára jár, van. Gimnáziumban nem sokaknak volt, ezért ott mindig
jó dolgom volt, könnyű volt. Egyetemen kicsit nehezebb lett, mert rájöttem, hogy nagyon sok
mindenki ugyanolyan jó, de aztán ugyanúgy boldogultam, meg tudom csinálni a tárgyakat,
értékelnek a tanárok.
I: Most már talán nem jelent problémát, hogy angolul kell olvasni dolgokat?
MD: Nem, egyáltalán nem.
KV: Megszoktuk.
I: Tudtok-e olyan területet említeni, ahol érzékelhetően sokat javultatok? Gondolom, az
olvasásotok az mindenképpen javult.
MD: Igen, én nagyon sokat olvasok hobbi szinten, krimit, fantasy-t…
I: Tehát könyveket.
MD: Könyveket, igen. És sokáig magyarul olvastam őket, és később, amikor idekerültem,
akkor kezdtem el angolul is olvasni. A könyveket sokkal jobban megértem már.
I: Ez azt jelenti, ha szórakozásból olvasol, akkor inkább angolul olvasol?
MD: Vegyesen.
I: És ha magyarul, akkor odafigyelsz a fordításra?
MD: Igen, amióta fordítani kezdtünk, azóta feltűnnek dolgok. Meg elgondolkozok, hogy én
hogyan fordítanám.
KV: Ezzel én is így vagyok, például, sorozatoknál, nem biztos, hogy fordítók csinálják a
fordításokat hozzájuk, és azért nagyon sok hibát látunk. Mindig kritizálom, akaratlanul is
észreveszi az ember a hibákat. Én, szerintem, főként szókincsben javultam, mióta itt vagyok.
Gimiben főként csak a nyelvet tanultuk, volt egy-két óra, ahol próbálta a tanár, hogy
kulturálisan is bevonjon minket, de…
I: És minek tulajdonítod ezt a szókincsjavulást?
KV: Mindenképpen annak, hogy nagyon sok tárgyunk van, alkalmazott nyelvészettől kezdve,
irodalom, sima nyelvészet, tényleg nagyon sokféle, és mindegyik szakterületnek a lexikális
szavait megtanuljuk, akaratlanul is.
I: Ha nem a fordítást tekintjük, van kedvenc órátok angolból? Melyiket szeretitek a
legjobban? Tanárfüggő?
MD: Nálam inkább tanárfüggő a dolog, azokat az órákat szeretem én jobban, ahol tényleg
használjuk a nyelvet, nemcsak ülünk és passzívan hallgatjuk, amit mondanak. Mikor tényleg
gyakorlunk mi is.
KV: Igen, én is úgy vagyok, hogy erősen tanárfüggő. Az biztos, hogy az előadások néha kicsit
nehezek. Nekem most van olyan szabadon választott órám, hogy Creative writing, ami elég
jól megy, tényleg csinálunk érdekes dolgokat, írunk, ráadásul nem csak esszéket, hanem
alkotunk is valamit, és az nagyon jó. Tetszik nagyon.
I: Mondhatjuk, akkor, hogy neked a szókincs az erősséged?
KV: (Nevet) Hát, ezt azért nem mondanám, csak… régen talán nem volt az erősségem, most
pedig jobb lett.
I: Mi az, amiből szerinted a legjobb vagy?
KV: Hát… esszéket írni nagyon jól tudok, ezért is tetszett meg ez a kreatív írás óra.
I: És te hogy vagy ezzel, D.?
MD: Én is hasonlóképpen. Olyan gimibe jártam, ahol emelt óraszámban tanultam magyart,
sokat kellett fogalmazni, így én is jobban ki tudom fejezni magam és ez az angolra is igaz.
KV: Ez lehet, hogy nekem is a magyar miatt van így, mert én is emelt magyarra jártam és
emelt szintű érettségit is tettem. Angolból is mindig kellett esszéket írni, és ez segítette a
fogalmazási készségem javulását.
I: Nagyon érdekes, amiket mondotok, de térjünk vissza a fordításhoz. Nem tudom, ott
voltatok-e a múlt héten a Czippott Péter előadásán. Számos példát hozott arra, hogy ki,
hogyan definiálja a fordítást. Nektek mit jelent ez a fogalom?
KV: Hát… igazából azt mondanám talán, hogy tartalomfordítást, nem pedig szövegfordítást.
I: Úgy érted, visszaadni a szöveg tartalmát a célnyelvben, nem szó szerinti fordítással?
KV: Igen. Ez főleg igaz a magyar és az angol nyelvre, mivel teljesen különböznek, ugye más
a szórend…
I: Ugye, ti specializáción tanultok, szaknyelvi specializáción, ami jog és közgazdaság?
KV: Mindenféle… Most ilyen bevezetések vannak.
I: A kérdésem az, hogy a szaknyelv megengedi-e ezt az általad „tartalminak” nevezett
fordítást?
KV: Igen, megengedi egyébként, meglepően, viszont vigyázni kell azért a szavakkal. Most
például jogi szövegeket fordítunk, ott tudni kell, hogy miket szoktak mindig használni, tehát
azért nincs akkora szabadság.
MD: Nagyon újat ehhez én sem tudok mondani, én is így gondolom. Bár sokszor olyan
szövegeket kapunk, ahol tényleg utána kell nézni, hogy mit, hogyan fordítanak, főleg a
szavaknak.
I: Nekem az a tapasztalatom, hogy szaknyelvi szövegeknél nagyon észnél kell lenni, bár
valóban lehetnek olyan területek, ahol megengedhető a szabadabb fordítás. Ti miért a
fordítást választottátok?
MD: Én alapból azért az anglisztikát választottam, mert fordító szeretnék lenni. Sokat
olvasok, ezért inkább a műfordítás felé érdeklődöm, és így kizárásos alapon ez jött.
I: Szóval azt gondolod, ez jó előiskola lesz ahhoz, hogy fordító legyél.
MD: Igen. Így kezdetnek, bevezetésnek jó.
KV: Én is ugyanígy gondolkodtam. Azért választottam az anglisztikát, mert utánanéztem,
hogy lehet fordító specializációt választani. Azt még nem tudom pontosan, hogy milyen
fordító szeretnék lenni, tény, hogy a fordítás engem is érdekel, de igazából nem zárom ki a
gazdasági és más szakszövegeket sem.
I: Anyagi szempontból talán az utóbbi kifizetődőbb lehet, ami nem utolsó szempont.
KV: Nyilván.
I: Van-e valamilyen tapasztalatotok a fordítás területén? Végeztetek-e már fordítási
feladatokat, mielőtt felvettétek a specializációt?
KV: Nem igazán. Egyszer volt nálunk egy cserediák, és közte és édesanyám között
fordítottam, vagy inkább tolmácsoltam. Az nehéz egyébként. Én mindenképp a fordítói
munkát választanám.
MD: Én egy rövid ideig feliratokat fordítottam a tesómnak filmekhez, meg egy-két
képregényt, és ennyi.
I: De pénzt még nem kerestetek fordítással?
KV + MD: (Mosolyogva) Nem.
I: Más fordítói kurzusra jártatok-e már?
KV + MD: Nem.
I: És elégedettek vagytok? Úgy gondoljátok, hogy jól választottatok?
KV + MD: Igen.
I: Feltételezem, hogy az órákon kétféle irányba végeztek fordítást, angolról magyarra és
magyarról angolra. Melyik a könnyebb?
MD: Angolról magyarra.
KV: Ez érdekes, mert az előző félévben mi azzal kezdtük, hogy angolról magyarra, és a
tanárok is mondták, hogy ez elvileg a könnyebb, viszont nekem ebben a jogi témában
könnyebb néha angolra, vagy legalábbis ugyanolyan szintű, én nem láttam annyival több
nehézséget az angolra fordításban, nekem mindkettő tetszik.
I: És most már mindkét irányt csináljátok?
MD: Ebben a félévben csak magyarról angolra.
KV: Félévenként változó. Múlt félévben angolról magyarra volt, jövő félévben lehet, hogy
megint az lesz.
I: És mi az, ami szerintetek a fordításban a legnehezebb? Egyáltalán, hogy
kezdetek neki egy szöveg lefordításához?
MD: Először elolvassuk a szöveget,
I: Bocsáss meg, hogy közbeszólok. Ezek milyen hosszúságú szövegek?
MD: Nagyjából egyoldalasak. Elolvassuk őket, megpróbáljuk értelmezni…
I: Közösen?
MD: Nem, mindenki egyedül. Aztán nekiállunk, és amit nem tudunk, annak utánanézünk,
hogy mások hogy fordítják.
I: A helyesírás, spelling, okoz-e gondot valamelyik nyelvben?
KV: Inkább a vesszők okoznak nehézséget.
I: Vannak kifejezések, amiket az angol nyelv „set lexical units”-nak nevez. Idiómák például.
Azokat hogy kezelitek?
KV: Utánanézünk, hogy hogyan van a másik nyelvben.
I: És hol tudsz utánanézni?
KV: Google. Ő a legjobb barátunk.
I: Ő a legjobb barát? (nevetve)
KV: Tényleg. Ott a legkönnyebb valamit megtalálni. Például múltkor volt egy ilyen kifejezés,
hogy „két legyet üt egy csapásra”. Ezt most nyilván gondoljuk, hogy nem így van angolul szó
szerint, de megnéztük, és rögtön találtunk rá egy angol idiómát.
I: Most mondjátok, hogy magyarról angolra fordítotok. Okoz-e ez valamilyen problémát?
MD: Nekem különösebben nem. Például a szórend sosem okozott nekem gondot. Bizonyos
esetekben még jobban is tetszik az angol szöveg, mint a magyar. Kötöttebb, és nem tud
elkalandozni az ember.
I: És amikor olyasmit kell fordítani, ami az egyik nyelv beszélői számára ismert dolog, a
másik kultúrában azonban nem létezik, akkor… ?
KV: Hát ez egy elég nehéz helyzet, igen…
I: Volt már ilyen?
KV: Hm… Most hirtelen nem jut eszembe…
MD: Nekem se.
KV: … de előbb beszéltünk, hogy volt egy ilyen előadás, hogy fordításelmélet, ott megadtak
különböző lehetőségeket, illetve példákat is mondtak, például a Mézga Géza jött fel, ami most
már nem igazán aktuális.
I: Ha végiggondoljátok, mi az, amire a leginkább szükségetek van ahhoz, hogy egy fordítást
elkészítsetek, illetve, hogy fordítók legyetek? Kezdjük azzal, hogy kell egy jó angoltudás.
KV: Az mindenképp.
I: Magyartudásnak is jónak kell lennie. Mondjátok is, hogy mennyit segít.
KV: Igen, az is mindenképp fontos. Főleg, ha valaki angolról magyarra fordít, alapvető, hogy
a magyar nyelvtana, szóhasználata jó legyen.
I: Tehát egy fordítónak nemcsak az idegen nyelvet, hanem a sajátját is jól kell tudnia.
Tudnátok még fontos tényezőket mondani?
KV: Talán van olyan, hogy fordítói készség… ami meghatározza, hogy valaki jó-e?
MD: El tudjon rugaszkodni az ember egy kicsit a magyar szövegtől… meg…
KV: lehet, hogy valaki nagyon jól tud angolul, meg magyarul is, de valahogy egyiket sem
tudja átvinni a másik nyelvbe. Plusz fontos még a tapasztalat is, mi is sokat gyakorolunk,
ezért jobbak is vagyunk, mint mondjuk egy évvel ezelőtt, főleg a tavaly év elejéhez képest, ez
mindenképp segít, hogy gyakoroljuk, meg tanítanak minket.
I: És ne feledjük a célnyelv kultúrájának az ismeretét. Te mondtad, hogy szeretnéd MA szinten
folytatni a fordítást. Csak fordítást, ugye?
KV: Hát, fordítás és tolmácsolás.
I: És te mit szeretnél?
MD: Én főleg csak a fordítást. Tolmácsolást… majd még meglátjuk.
I: De úgy gondolod, hogy még kell hozzá tanulnod, vagy úgy, hogy ezt befejezed, és akkor csak
nekiállsz fordítani?
MD: Nekem még ki kellene választani, hogy mi legyen a másik nyelv, amit még tanulok az
angol mellé, ha MA-ra megyek, de szeretném folytatni.
I: És ha végeztél, ha jól emlékszem, inkább műfordításban szeretnéd kipróbálni magad…
MD: Igen.
I: Te pedig azt mondtad, hogy inkább…
KV: … talán gazdasági vagy jogi…
I: A kettő, végül is, nem zárja ki egymást. Mit gondoltok a fizikai adottságokról? Van szükség
ilyenre?
KV + MD: (Hallgatás)
I: Egy másfél oldalas szöveg lefordításához nem kellenek különösebb fizikai adottságok, de ha
van egy 800 oldalas könyv, az azért húzós tud lenni.
KV + MD: (Hallgatás)
I: Az azt jelenti, hogy adott esetben napi 8-10 órát kell eltölteni a számítógép előtt,
huzamosan…
MD: Hát azzal még nem is lenne komolyabb baj. Gép előtt ülni részemről nem olyan
megterhelő.
KV: Mi már az a generáció vagyunk, amelyik amúgy is eltölt napi 8-10 órát egy számítógép
előtt, így ez nem lesz annyira idegen tőlünk. Néha egy kicsit fel kell állni, meg ilyesmi, de
amúgy… szerintem kibírjuk.
I: Akkor most térjünk át arra, amit tényleg tanultok az óráitokon. Hány féle órátok van?
KV: Fordítós órákból mindig négy van egy félévben, úgy értem négy féle tárgy. Van benne
előadás is és gyakorlati óra, ez változó. Most három gyakorlati óra és egy előadás van.
I: Lehet, hogy jövőre már csak gyakorlat lesz, nem?
KV: Igen, most már letudtuk az előadásokat ezzel a két félévvel, a mintatanterv alapján már
csak gyakorlat lesz.
I: Milyen területeket érintenek az előadások?
MD: Előző félévben egy olyan előadásunk volt, ami nagyon kultúrspecifikus volt, a kulturális
különbségeket és átfedéseket vettük. Ez volt az egyik. Ami mostani, az inkább egy kicsit
nyelvtan…
KV: … nyelvészet, összehasonlítja az angol és a magyar nyelv szerkezetét, eredetét, tehát
nagyon sok mindent lefed, illetve volt egy átfogó fordításelmélet, fordítói tanulmányok,
Translation Studies. Miként tekintettek a fordításra, hogyan alakult a fordítás helyzete…
MD: Magyarországon és külföldön…
I: Értem. És amikor úgy döntöttetek, hogy ez lesz a specializációtok, akkor voltak valamilyen
előzetes elvárásaitok? Amit kaptok, az megfelel-e az elvárásaitoknak?
MD: Hát… én úgy gondoltam, hogy talán egy picivel több gyakorlat lesz, legalábbis az
elején, de most már látom, hogy ennyi azért elég. Most három különböző órára kell
fordítgatni… Vagy kettő. Kettő.
I: Nem unalmasak az előadások?
KV: Nem. Igazából kevesen vagyunk, tehát most összesen heten vagyunk az évfolyamon, és
így nagyon családias. Ott is nagyon sokszor beszélgetünk, eléggé interaktív. Sokszor ott is,
mondjuk a mostani előadás végén is kapunk fordítási gyakorlatot… eléggé aktívak az
előadások is.
I: Úgy értelmezem, hogy nem okozott csalódást a program.
KV: Nem. Igazából szerintem nagyon jó. Én ennél többet nem vártam el tőle. Nem tudtam,
mire számítsak, de én teljesen meg vagyok elégedve. Lefedünk nagyon sok témát, területet…
Én meg vagyok elégedve vele.
I: Kaptok-e a szemeszter elején valamilyen leírást, tájékoztatást arról, hogy mit fogtok
tanulni?
KV: Minden egyes tantárgyból kapunk egy tanmenetet, hogy mit takar a kurzus, mik az
elvárások.
I: A feladatokat is megkapjátok?
KV: Nem. A fordítási feladatokat hetente kapjuk.
I: De azt tudjátok, hogy mire fogjátok kapni a kurzus végi jegyet, nem?
KV: Persze, igen.
I: Beleszámít ebbe a jegybe az évközi munka?
KV: Ez tárgytól függ. Most van egy olyan tárgyunk, ahol minden egyes fordításra jegyet
kapunk. Van olyan, hogy politikai és jogi, ott három politikai és három jogi fordítást kell
készíteni. Az első kettő gyakorlás, és mindkettőből a harmadikra kapunk jegyet.
I: És amikor megkapjátok a syllabusokat, van lehetőségetek arra, hogy javaslatot tegyetek a
tananyaggal kapcsolatban? Nem tudom, hallottatok-e a „negotiated syllabusról”. A tanár
gyakorlatilag megbeszéli a diákokkal, hogy mit szeretnének, bevonja őket a tananyag
megtervezésébe.
MD: Hát annyi volt előző félévben, hogy mi küldhettünk be fordítandó szövegeket, és a tanár
kiválasztotta, hogy kinek a szövegét fogjuk lefordítani.
I: Ez komoly dolog, nem? Nagyobb kedvvel fordítjátok azt, amit ti választotok?
MD: Igen, mert ha olyan szöveget kapunk, ami érdekel minket, akkor jobban megy.
I: Te például milyet küldtél be?
MD: Valami… Ha jól emlékszem, az egyik kedvenc bandámmal kapcsolatos szöveg volt az
internetről.
I: És te?
KV: Húúú… Már nem is emlékszem, és nem is került be fordításra. Nagyon sokan
beküldtünk, úgyhogy válogatott a tanár, de mindenképp érdekesebb volt ezeket fordítani.
Tényleg jó témák voltak.
I: Akkor, ha össze akarjuk foglalni, legalább négy tárgyatok van félévente, köztük előadások
meg szemináriumok, és valamilyen szinten beleszólhattok a feladataitokba. Mi a helyzet a
többi dologgal? Tanultok-e olyat, hogy translation methods, translation techniques?
KV: Hát ezt így külön nem, viszont fordítás órán nagyon sokszor szó van róla, kifejti a tanár
rendesen, úgyhogy átvettük ezeket már, illetve most is úgy működik, hogy minden
alkalommal ezeket órán megbeszéljük,…
I: Tehát jelentkezik egy konkrét probléma, és akkor megbeszélitek, milyen stratégiát, technikát
lehet alkalmazni a lefordítására?
KV: Igen, illetve a tavalyi kurzus szerintem első egy, két, három órájában erről volt szó.
I: És ezeket az elméleti dolgokat is számon kérik tőletek, vagy csak magát a gyakorlati
produktumot?
MD: Elméletileg csak az elmélet előadás végén kérik számon a félév vizsgákon, de
fordításórán nem. Elvárják, hogy tudjuk őket és használjuk őket, de az sosincs, hogy a tanár
kikérdezné őket.
I: Elemeztek-e kész fordításokat?
KV + MD: (Hallgatás)
I: Arra gondolok, hogy bevisz a tanár egy szöveget meg a fordítását, és megnézitek, mi a jó
benne vagy mi a rossz.
MD: Egyszer volt olyan, hogy kivetített egy fordítást és ezt-azt megnéztünk benne, ki hogy
csinálná.
KV: Nem igazán jellemző. Egyszer volt egy olyan óránk, amikor megnéztük egy volt diák
fordítását, és abban a hibákat elemezgettük, de nem jellemző.
I: Hasznosabb, ha a saját fordításaitokkal csináljátok ugyanezt, vagy ennek is van valami
haszna?
KV: Szerintem ugyanúgy ráérzünk… de minden egyes órán átnézzük mindenki fordítását, így
tanulunk egymás hibából is.
I: Azt hadd kérdezzem meg a tanáraitokról, hogy például, amikor magyarról angolra fordítotok,
azt anyanyelvű tanár tanítja?
MD: Az egyiket.
KV: Nincs annyi anyanyelvű tanár, hogy mindegyiket az tanítaná. Most az egyik magyarról
angolra fordítást azonban anyanyelvű tartja.
I: És jobb, ha anyanyelvű tanár van?
KV: Hát igen. Talán jobb azért.
MD: Magyarul ritkán szólal meg, de tud olyan kifejezéseket, amiket nem biztos, hogy az
interneten megtalálnánk. Angolul, de magyarul is.
KV: Már csak a korunkból kifolyólag is volt egy régebbi szöveg, valamilyen múzeumi
brosúra, amiben mi nem tudtuk a régies magyar kifejezéseket, ő meg igen.
I: Mondtátok, hogy a Google a legjobb barát. (Nevetés.) Mit használtok a Google-n kívül?
Egyáltalán, órán is fordítotok, vagy inkább otthon?
DM: Otthon inkább. Az A. óráin fordítunk órán is, elkezdjük, és ami marad, azt kell otthon
befejeznünk.
I: Ez hogy néz ki? Megkapjátok a szöveget, és míg ti dolgoztok, a tanár nézelődik?
DM: Kivetíti, és…
KV: Szóban fordítjuk, úgy hogy kivetíti, megyünk sorban, mindenki mond egy mondatot.
Segít, hogyha elakadunk, vagy ha teljesen melléfordítunk, kijavít minket, ha rosszat
mondunk… Hasonló ahhoz, mint amikor átnézzük, csak itt akkor csináljuk a fordítást.
I: Minden órán új feladatot kaptok?
DM: Igen.
I: Az elég kemény, nem?
DM: Most, holnap lesz az óránk, és egy zeneszöveget kellett lefordítani.
I: Magyarról angolra?
KV: Igen.
DM: Magyarról angolra.
I: Az nehéz.
KV: Hát igen.
DM: Ráadásul mondta, hogy figyeljünk arra, hogy a szótagszám is egyezzen, szóval, nem
elég, hogy a jelentést átvigyük, hanem fontos, hogy megmaradjon a zene ritmusának
megfelelő forma is.
KV: Ez azért nehezebb, mert eddig ilyennel nem nagyon találkoztunk. Ez egy kihívás.
I: Szóval, az órán el szoktátok kezdeni a fordítást, és otthon befejezitek, és előfordul, hogy
külön is ad valamit a tanár otthonra.
KV: Igen. Például ezt.
I: És akkor van a négy tantárgyatok, és mindegyikből kaptok (feladatot) minden héten?
KV: Igen.
I: Összesen hány órát jelent a négy tantárgy? Mert azért más is van, ugye, ez csak a
specializáció?
KV: Persze.
MD: Dupla óra mind a négy.
I: Tehát négyszer két óra, jól mondom?
DM: Összességében hat óra.
KV: Plusz ebben a félévben van még egy CAT tools nevű óránk, ahol arról van szó, hogy
milyen programok tudnak segíteni a fordításban, de ott nincs lecke.
I: És milyen programok?
KV: A memoQ-t vesszük, eddig legalábbis ennyit vettünk (nevetés).
I: Az mi?
KV: Hát egy ilyen…
I: Szoftver?
KV: … software, ami segít… hogyha például ugyanazt kell később lefordítani, főként az ilyen
gazdasági szövegeknél, például ha éves beszámolót fordítunk, és a szöveg ugyanaz, csak
kisebb különbségekkel, akkor az egészet bele lehet másolni és mutatja. Illetve kis nyelvtani
hibákat is mutat… Nem fordít helyettünk, de segít.
MD: El lehet benne tárolni a korábbi munkákat, és azok alapján kielemzi a fordítási
stílusunkat…
I: Ez nagyon hasznos, nem? És tudjátok is használni?
MD: Azt tanuljuk most így órán... kis apróságokat.
I: És ezt angoltanár csinálja vagy IT szakember?
KV: Angoltanár, persze. Aki ezt tartja, ugyanúgy tart más órát is, például a politikai és jogi
szövegek fordítását, ő nagyon gyakorlott.
I: Tehát akkor használtok szoftvereket, még mit?
DM: Online szótárakat…
I: Nyomtatott szótárt nem?
KV: (Nevet.) Hát azt már nem nagyon.
I: Online milyen szótárt? Egynyelvűt, kétnyelvűt, mindkettőt?
DM: Mindkettőt, úgy vegyesen.
KV: Webfordítót, amit szintén egy tanárunk javasolt, aki készít különböző szótárakat is, és
mondta, hogy elég megbízható. Google fordítót is néha, de azt főként inkább ellenőrzésre. Én
például ha egy szót nem ismerek egyáltalán, akkor megnézem több szótárban, hogy ugyanazt
írják-e…
DM: Mondjuk a Google fordítót én sokszor csak helyesírás ellenőrzésre használom.
I: Mi a helyzet a korpuszokkal? Volt-e szó arról valahol, hogy korpuszt is lehet használni?
KV: Volt róla szó, (nevet) de nem igazán használtam még.
I: Én azt tanultam, hogy a korpuszok nagyon hasznosak, ha valaki fordít.
KV: Ezt mi is hallottuk előadáson, de ennyi.
I: Nem kérik tőletek, hogy a fordításnál ellenőrizzetek korpuszban dolgokat?
KV: Nem igazán. Nem.
I: Tehát korpuszt nem használtok akkor.
KV: Nem. Tudjuk, hogy vannak korpuszok, lehet, hogy majd fogunk…
I: Beszéljünk az értékelésről meg a feedbackről.
KV: Már így tantárgyak szerint?
I: Azt mondjátok, sokat írtok, sok a házi feladat… Majdnem mindig van valami elkészíteni
való, amit részben beadtok, részben közösen átnézitek.
MD: Beadjuk nyomtatva, vagy dropboxba elküldjük.
I: És mit tesz vele a tanár? Rögtön megnézi? Kaptok visszajelzést?
KV: Például az egyik órán megnézzük, kijavítjuk és a tanár ott szóban is értékeli, illetve félév
végén…
I: Gondolom, ilyenkor egyforma feladatot kap mindenki, ugye?
KV: Igen, igen. Ugyanaz a fordítás.
I: És mindenki nézi a maga szövegét.
KV: Hát nem. Úgy szokott lenni, hogy valakiét nézzük, mindenki, és akkor azt nézzük, hogy
jól csinálta-e, nem jól csinálta-e…
I: Nem úgy, hogy a magadét összeveted vele?
MD: De, magunk előtt ott van a sajátunk.
I: És amikor nagy feladat van? Hogy történik az értékelés egy olyan feladatnál, amire az év
végi jegyet kapjátok? Azt otthon csináljátok, vagy órán, tanteremben?
MD: Azt is otthon ugyanígy megcsináljuk, csak ugye jobban odafigyelünk rá, elküldjük, és
kapunk egy visszajelzést. Az előző félévben egy kis probléma volt, hogy egészségügyi ok
miatt a tanár nem tudta időben megcsinálni a dolgokat, és nem volt visszajelzés igazán. Csak
egy jegyet kaptunk, de azt nem nagyon tudtuk, hogy mire.
KV: Azt később kaptuk meg, a kiértékelést.
MD: De most amire legutóbb jegyet kaptunk azt rendesen átnéztük, és ott, előttünk
osztályozta.
I: És általánosan, csoport szinten, és név szerint is kaptok értékelést? Rámutatnak arra, ami
kifejezetten a te hibád?
KV: Igen, igen… Hát mindig egyvalakiét nézzük, aztán néha átmegyünk, és megnézzük a
sajátunkat, és ilyenkor igen, személy szerint mondja a tanár. A kielemzés többnyire szóban
történik, de az utolsóra kapunk egy részletes visszajelzést.
I: Segítenek ezek a visszajelzések?
MD: Persze.
KV: Mindenképpen. Nagyon hasznosak, mert rájövünk olyan hibáinkra, amelyeket nem
biztos, hogy magunktól észrevennénk.
DM: És így a tanár is tudja, hogy személyenként kinek mi az erőssége, gyengesége és
megjegyzi, és nyomon követi, hogy ki hogyan fejlődik.
I: A fordítás elég szubjektív tud lenni. Te így fordítod, én úgy fordítom, és mind a kettő jó
lehet. Olyan nincs, hogy valakinek sértő visszajelzést adnak, vagy megaláznak, mert nem
olyan jó fordulatot használt, mint a többiek?
KV: Igazából nyilván elmondják, hogy mi a probléma, de teljesen normális módon.
I: És van-e olyan, hogy egymás munkáit értékelitek? Például, mindenki belenézhet a
dropboxba, és bárki munkájához írhat megjegyzést?
MD: Látjuk egymás munkáját, de értékelni nem kell a másikat. Néha megnézzük, hogy a
másik hogyan fordította ezt és azt, ez is egyfajta segítség.
I: Tudtok-e három olyan dolgot mondani, amit abszolút motiválónak találtok az óráitokban?
KV+MD: (Hosszú hallgatás).
I: Ez lehet a tanár személyisége, a fordítandó szöveg… Nem akarom ellőni a lehetséges
válaszokat.
KV: Hát, nem tudom. A fordítandó szövegek mindenképpen, ami nagyon nem érdekel, azt
nehezebb fordítani.
I: Az segít, ha mondjuk humoros a szöveg?
KV: Mindenképp. Meg, ha kicsit érdekes. Például, a múltkor egy gyerekkönyvből
fordítottunk, ami sokkal élvezetesebb volt.
MD: Maga a fordítás is élvezetes. Részben ezért is választottam ezt a specializációt.
I: Milyenek a tanárok? Motiválnak benneteket?
KV: Ööö… Igen. Igazából…
MD: Amikor halljuk, hogy (elneveti magát) mennyit keresnek… az elég „motiváló”…
I: A közelmúltban voltam egy konferencián, ahol több előadásban is szó volt a fordítók
díjazásáról, és az derült ki, hogy a szakfordítást ötször-hatszor jobban fizetik, mint a
műfordítást.
KV: Hát igen. Ez az egyik motiváló tényező nekem, hogy inkább afelé menjek, de hobbi
szinten mindenképp foglalkozni szeretnék műfordítással is.
I: Van-e olyan, ami szerintetek abszolút demotiváló, és a legjobb lenne, ha nem is lenne?
KV+MD: (Hosszú hallgatás).
MD: Szövegfüggő.
I: Ha egy, egytől ötig terjedő skálán osztályoznátok, hogy értékelnétek magát a
specializációt? Beleértve az elvárásaitokat, a tanárok tudását, személyiségét, az órák
tartalmát, a feladatokat…
KV: Én egy ötöst mondanék.
MD: Szerintem is. Semmi problémám nem volt még eddig. Se a fordítással, se a tanárokkal.
Én elégedett vagyok.
I: A „Mark my professor” oldalon (nevetés) nem szoktátok osztályozni a tanáraitokat?
KV+MD: Nem, nem igazán.
I: Gondolom, mindegyiknek 5-öst adnátok…
KV: Igen.
I: És magatoknak a munkátokra?
KV: Hát, szerintem ügyesen beadogatjuk mindig, úgyhogy…
MD: Ha kicsit megcsúszva is, de megcsináljuk. Meg is kell, különben nem zárnának le
minket.
KV: Tényleg sok a munka, minden egyes órára kell fordítani, de ez nem probléma. Azért
vagyunk itt, hogy ezt csináljuk.
I: Van-e valami, ami eddig a legnagyobb élményetek volt a fordítástanulás során? Valami,
amire azt mondtátok, hogy hú, ez nagyon jól sikerül… alig volt benne javítás…
KV: Sajnos, ilyen még nem volt. (Nevetés)
MD: Hibátlan fordítást még nem sikerült csinálnunk…
KV: Igen, azért hibák mindig vannak. Szóval, ilyen élményem még nem volt, de talán jövőre
lesz.
I: Milyen jegyeket szoktatok kapni?
KV: Hármas, négyes.
MD: Igen.
I: Van, aki 5-öst kap?
KV: Biztos.
MD: Másodévesek.
I: És azt látjátok, hogy aki 5-öst kap, az miben jobb?
KV: Tapasztalatban, szerintem.
I: Van-e valami, amit szerintetek még beépíthetnének a kurzusba? Amit hiányoltok… Ami jó
lenne, ha lenne.
KV: Hmmm… szerintem nincs.
MD: Talán egy külön óra, ami a kulturális különbségekre megy rá.
I: Nem gondoltatok arra, hogy egy fordítóirodánál kipróbáljátok magatokat? Az motiváló
lenne, nem?
KV: Én még nem érzem magam készen erre.
MD: Én gondolkoztam már rajta, hogy esetleg nyáron csinálhatnék valami hasonlót, majd
meglátjuk, hogy találok-e egyáltalán valamit. Biztosan jó tapasztalat lenne.
I: Milyen terveitek vannak a jövőre nézve?
MD: Elvégezni az MA-t, aztán, mint mondtam, szeretnék külföldre menni, leginkább Angliába,
hiszen fordítani onnan is tudnék.
KV: A külföld nálam sem kizárt. Gondoltam már rá, hogy lehetnék EU fordító.
I: Tulajdonképpen ezeket szerettem volna megkérdezni. Nem tudom, van-e valamit, amit még
hozzátennétek?
KV: Hát, esetleg ajánlanám mindenkinek a specializációt, aki érdeklődik a fordítás iránt, mert
bár képesítést nem ad, papírunk nem lesz róla…
I: Nem is kaptok semmilyen igazolást, hogy ezt a specializációt elvégeztétek?
KV: Elvileg a diplománkban szerepelni fog, hogy elvégeztük. De nem is csak papír
szempontjából fontos, hanem magunk miatt is, hogy tudjuk, hogy másoknál többet tettünk,
ügyesebbek vagyunk.
DM: Nagyon sokat tanulunk a fordításból. Jobban olvasunk, jobban írunk angolul, például a
tanárainknak is, akikkel angolul e-mailezünk.
I: Hát… akkor a végére értünk. Én nagyon szépen köszönöm nektek, hogy a rendelkezésemre
álltatok. Azt kívánom, hogy továbbra is érezzétek jól magatokat a specializáción, és váltsátok
valóra a terveiteket.
KV+MD: Köszönjük.
képzéssel kapcsolatban láttam problémákat, magamban is, és még nem voltam készen arra,
hogy ezt befejezzem. Ez öt évet vett volna igénybe, és mivel fogytak az államilag támogatott
féléveim, találni kellett valamit, hogy ne kelljen tandíjat fizetnem a képzésért. Így esett a
választás, mivel az angol már úgyis megvolt, az anglisztikára. A szakfordító specializáció
nagyon hirtelen döntés volt. Pillanatok alatt kellett meghoznom, mert hat államilag támogatott
félévem volt, és az anglisztika hat féléves képzés. Sok kötelező tárgyam már megvolt,
akkreditáltattam ezeket, viszont a második félévben kellett választani minort, vagy
specializációt, illetve egy sávot. Nekem ezt rögtön az első félévben meg kellett oldanom. A
proficiency vizsgám már megvolt, így a minor, ill. a specializáció választás nem ütközött
nehézségekbe. Megnéztem a lehetőségeket. A minor nem igazán jött számításba. Azt hittem,
pszichológiát is lehet választani, és kiderült, hogy nem. Nagyon örültem, amikor hallottam,
hogy van szakfordítói spec, ami kiváltja a minort, és a sávot is. Korábban is csináltam már
fordításokat. Édesapámnak, pl. aki iskolában tanít, és most benne vannak egy nemzetközi
projektben. Egy LEGO robottal végeznek feladatokat. Több országon átívelő együttműködés,
észak-ír, török, portugál, román, olasz kapcsolatokkal, és a projekt nyelve az angol.
I: Egy picit álljunk meg. Szeretném, ha beszélnél a családi hátteredről. A szüleid beszélnek-e
idegen nyelvet?
GÁ: Nem. Ők még oroszt tanultak, de nem beszélik.
I.: Igen, ti az a generáció vagytok, akiknek a szülei még az oroszt tanulták idegen nyelvként,
de közülük, különböző okoknál fogva, ezt valóban nem beszélik. Van testvéred?
GÁ: Egy darab öcsém.
I: Ő is beszél idegen nyelvet?
GÁ: Angolból van nyelvvizsgája, de programozóként dolgozik egy kisebb cégnél, és most
végzi az MA tanulmányait, és vannak angol nyelvű kurzusai. Neki fordítottam le szívességből
angolra egy hosszabb, tizenvalahány oldalas tudományos szöveget.
I: Ezt mekkora vállalkozásnak tartottad? Nagy munka volt? Angolról magyarra, ugye?
GÁ: Igen, és ezt egyszerűbbnek is érzem, nem tudom miért. B. Tanár Urat megkérdeztem,
hogy mire vállalkoztam, és azt felelte, hogy ő ezt a szöveget nem vállalná el, vagy csak
nagyon borsos áron. Ekkor fogalmazódott meg bennem, hogy a jó szívem fog a sírba vinni,
mert mindig szívességből végzek ilyen munkákat, mert amikor kiderült, hogy ezért akár
70.000 Ft-ot is el lehetett volna kérni, akkor arra gondoltam, hogy ebből akár meg is lehetne
élni.
I: Erre majd később még visszatérünk egy picit. Az elején már kérdeztem, hogy te mikor
kezdtél angolul tanulni. Ez volt az első idegen nyelved?
GÁ: Nem. A némettel kezdtem az általános iskolában, Csabrendeken. Az angolt
középiskolában tanultam, öt évfolyamos gimnáziumban, Ajkán, nyelvi előkészítő osztályban,
amit azóta meg is szüntettek. Első évben volt 12 angol óránk, utána mindegyikben hat. Plusz
volt egy matek, egy magyar, öt számítástechnika.
I: Tehát 14 éves korodban találkoztál először az angollal, előtte nem tanultad.
GÁ: Egyáltalán nem. Viszont érettségiztem belőle, először középszinten, mert a joghoz nem
kellett emelt, de amikor ide jelentkeztem, csináltam egy szintemelőt.
I: Ez azt jelenti, hogy az érettségivel együtt kaptál egy B2, azaz középszintű nyelvvizsgát.
Miután ide felvettek, még várt rád egy proficiency exam, ha jól tudom.
GÁ: A szintekkel nem vagyok tisztában, sajnos, nem tudom. A proficiency vizsga azonban
mindenki számára kötelező, az anglisztika szakosoknak az első év végén kell letenni.
Osztatlanon kicsit más, ott az első három év során kell valamikor letenni, ha jól tudom. Elsőre
nem sikerült, másodikra már nem volt probléma.
I: Amíg nincs meg ez a vizsga, addig nem kezdhetitek el a specializációt.
GÁ: Nem. Az összes tárgynak ez az alapfeltétele.
I: Mikor kellett eldönteni a specializáció választást?
GÁ: Az első év végén.
I: És te most hányadéves vagy?
GÁ: Én papíron másodéves vagyok.
I: És igazából?
GÁ: A BA-n csak első évemet töltöm, mert most jelentkeztem át osztatlanról anglisztikára.
De mivel szinte minden tárgyamat elismerték, a másodévesekkel vagyok együtt.
I: Értem. Ez azt is jelenti egyben, hogy most kezdted a specializációt, ősszel, így ez a második
szemesztered.
GÁ: Igen.
I: Nagyszerű. Ha arra gondolsz, hogy amikor idekerültél, mennyit tudtál, és most mennyit
tudsz angolul, mit gondolsz, biztosított-e neked az egyetemi tanulás fejlődési lehetőséget?
Egyáltalán, miből fejlődtél a legtöbbet?
GÁ: Csak a történelem-angol osztatlan képzés elején éreztem úgy, hogy abból élek, amit
addig összeszedtem, de most, hogy elkezdtem a fordítóit, úgy érzem, hogy kaptam egy újabb
löketet.
I: Amikor a másik szakról bekerültél, nem érezted úgy, hogy hátrányban vagy azokkal
szemben, akik már egy évet megcsináltak anglisztikán?
GÁ: Nem, mert történelem-angolon ugyanazokat a tárgyakat tanultuk. Nem éreztem magam
hátrányban, és a tudásom sem volt kevesebb.
I: Ha neked most meg kellene nevezned a skillek közül, hogy mi az erősséged, mit választanál,
és mit sorolnál a nehezebbek közé?
GÁ: Első helyre mindenképpen a fogalmazást tenném, angolul és magyarul egyaránt. Ha
magyarul jól fogalmaz az ember, és látja a nyelv szerkezetét, akkor össze tudja rakni a
mondatokat a másik nyelven is, és az is segít, hogy szerintem elég jó háttértudásom van a
világ dolgairól.
I: Azért a két nyelv szerkezete eléggé különböző. Ugyanolyan jól fogalmazol angolul is, mint
magyarul?
GÁ: Azt nem mondtam. Csak azt mondtam, hogy segít a magyar.
I: Akkor, angolból is erősségednek érzed a fogalmazási készséget.
GÁ: Magyarul jól írok. Szoktam is írni, szeretek is írni. Az, hogy angolul… (hosszú hallgatás)
lesznek olyanok, hogy lefagyok, már volt ilyen korábban, ezt szeretném előre leszögezni.
I: Semmi baj, akkor majd folytatjuk valami mással. Most például azzal, hogy mi nem megy
annyira jól. Nekem például mindig bajom volt a listeninggel, mert azt nagyon keveset
csináltuk órán, sőt, nem is volt ilyen része a nyelvtanulásnak.
GÁ: A listeningnek nem látom értelmét, laboratóriumi körülmények között semmiképp. Mi
értelme van annak, hogy valamit pl. a vidámpark háttérzajával lejátszanak? Alapból nem
értem a zaj miatt, de ez szerintem nem jelenti azt, hogy nem értem magát a nyelvet. Az ember
ha az anyanyelvén beszél telefonon az ismerősével, és közben elmegy mellette egy autó,
ugyanúgy nem érti, nem? Úgy érzem, nem hitelesen adja vissza a tudást.
I: Áttérnék egy kicsit a fordításra. Azt mondod, nagyon gyors döntés volt. Valami befolyásolt
abban, hogy ezt a gyors döntést meghozd?
GÁ: Igen. Konkrétan, hogy amikor átjelentkeztem, ha a tanterv szerint haladtam volna és az
elsőéves terv szerint kellett volna haladnom, nem lett volna sok értelme 180 kilométeres
távolságból ideutazgatni. Ehelyett a másik végletet választottam: hogy 2 év alatt elvégzem a 3
évet. Most egy olyan évben vagyok benne, hogy a 2 félév alatt 132 kreditem lesz.
I: Az nagyon-nagyon sok. Ugyanakkor azt mondtad, voltak más lehetőségek is. Mi az, ami
miatt mégis a fordítást választottad?
GÁ: A gyakorlati haszna miatt. A másik alternatíva az alkalmazott nyelvészet lett volna,
történelmet nem akartam, abból jöttem. Német lett volna még, de hiába van középfokú
nyelvvizsgám németből, az annyit fog érni, hogy megkaphatom a diplomámat.
I: Tulajdonképpen tehát az motivált, hogy már volt némi fordítási tapasztalatod, és láttad,
hogy lehet gyakorlati haszna a későbbiekben.
GÁ: Biztos voltam benne, hogy ennek lesz a legtöbb gyakorlati haszna.
I: Feltételezem, nem csak az édesapádnak és a testvérednek fordítottál…
GÁ: Hobbi szintű tapasztalatom is van.
I: Az nagyon fontos. Sok komoly dolog hobbi szinten kezdődik.
GÁ: Főleg dalszövegeket szoktam fordítani. Nem tudom, ismered-e az Eurovíziós
Dalfesztivált. A győztes dalt lefordítottam magyarra, és elküldtem a nem is tudom, mi a
hivatalos megnevezése, az egyik menedzsernek vagy egy producernek, és visszaírtak, hogy
nagyon tetszik nekik a fordítás. Azt gondoltam, ha ők azt mondják, hogy jó, akkor van keresni
valóm ezen a területen, és a jogdíjak is eszembe jutottak...
I: Huszonhat évesen – ugye, azt mondtad, annyi vagy? – valóban elgondolkozik az ember
azon, hogy mivel keresse a kenyerét. Ez teljesen nyilvánvaló. Dolgoztál-e valaha
fordítóirodának?
GÁ: Nem.
I: Tehát többnyire hobbiból fordítottál, vagy valakinek segítettél.
GÁ: Igen, pl. a polgármesternek is lefordítottam egy pályázat szövegének a részeit.
I: Akkor mondhatjuk, hogy a választás fő motivációja az volt számodra, hogy a jövőben
pénzkereső tevékenységként tudod majd végezni a fordítást?
GÁ: Simán.
I: A tanáraid, akik itt tanítottak, nem voltak rád motiváló hatással? Nem javasolták neked ezt
a specializációt?
GÁ: Olyan gyorsan meg kellett hoznom ezt a döntést, hogy erre nem volt lehetőség. De azt
tudom, hogy a tájékoztatón a tanárok erősen szorgalmazták a fordítást, de én ebből
kimaradtam és egyedül hoztam meg a döntést, de nem bántam meg.
I: Nem tudom, ott voltál-e a múlt héten a Czipott Péter előadásán.
GÁ: Igen, ott voltam.
I: Ott elhangzott, hogy híres emberek, nyelvészek, műfordítók miként határozták meg a
fordítást. Nagyon érdekes definíciókat hallhattunk. Te tudsz azonosulni valamelyikkel, vagy
számodra valami mást jelent a fordítás?
GÁ: Nem tudom… én azon a véleményen vagyok, hogy jelentést fordítsunk.
I: És ez mit jelent a te értelmezésedben?
GÁ: Azt, hogy nem feltétlenül szóról-szóra fordítunk, a lényeg, hogy átvigyem a jelentést.
I: Mi a helyzet jogi szövegeknél? Ott fontos a precíz, pontos fordítás.
GÁ: Van ilyen kurzusom. Ott nem lehet nem pontosnak lenni. Ott azonban segít, hogy jártam
jogra, és nagyjából ismerős a nyelvezet.
I: Tehát azt gondolod, hogy ugyanazt a jelentést valamilyen módon vissza kell adni.
GÁ: Például most a Cs. tanárnő egyik óráján volt egy szövegünk, és végtelenül örültem…
Eddig mindig olyan tárgyilagos szövegek voltak, és most egy díjátadón mondott beszédet –
egy zenész kapott egy díjat – kellett lefordítani. Ott éreztem végre, hogy van egy kis
szabadságom arra, hogy hogyan viszem át annak az embernek az érzéseit és gondolatait, aki
azt a beszédet tartotta.
I: Az óráitokon mikor fordítotok angolról magyarra, és magyarról angolra?
GÁ: Mindkettőhöz külön kurzusok vannak.
I: Ez szemeszterenként változik?
GÁ: Húúú…
I: Hány heti órában tanuljátok a fordítást?
GÁ: Változó. A tanterv szerint nem tudom. Én most egyébként is nagyon halmozom.
I: Mi a neve az óráitoknak?
GÁ: Van egy alapozó, van angolról magyarra I-II, ezen kívül van számítástechnikai
szövegfordítás, jogi-politikai szövegfordítás, gazdasági A-B és társadalomtudomány. És még
szépirodalmi is.
I: Mindegyiket más tanár tanítja, gondolom.
GÁ: Igen.
I: És amikor év elején, vagy szemeszter elején az első órán, kaptok valamilyen útmutatást a
tanévre, szemeszterre vonatkozóan?
GÁ: Egyrészt kapunk syllabust, másrészt, ha az adott oktatóval volt már órám, tudom, hogy
nagyjából mire számíthatok.
I: Tehát van egy fix syllabus, ami szerint haladtok, dátumokra rögzített anyaggal.
GÁ: Igen, de az élet nem mindig úgy hozza. Nem tartjuk feszesen, de ez szerintem teljesen
rendben van, hogy veszünk egy szöveget, és ha nem jutunk a végére, nem csinálunk pánikot
belőle. Meg hát a syllabus az csak egy ütemterv, amit bármikor tudunk módosítani.
I: Vannak gyakorlati óráitok meg előadásaitok.
GÁ: Igen. Azok teljesen elméleti jellegűek.
I: És ott, például, adtak nektek meghatározást, definiálták a fordítást, mint tevékenységet…
Valami ilyesmivel kezdődik, gondolom.
GÁ: Hát, hogy megpróbáljuk meghatározni, hogy mi a fordítás, meg nemcsak a fogalma,
hanem a történelme is téma. Hogy ez hogyan alakult az évek során.
254
I: Hát igen, hosszú története van a fordításnak, a bábeli zűrzavarig nyúlik vissza…
GÁ: Az egy kicsit túlzó azért, de…
I: Tulajdonképpen ott különültek el a nyelvek, legalábbis a Biblia szerint, nyilván szükség volt
fordítókra. Térjünk vissza a gyakorlati órákhoz. Melyik áll hozzád a legközelebb? Mondtad,
hogy a joginál sokat segít a jogi egyetemi múltad.
GÁ: Igaz, de ezt nem szeretem a szárazsága miatt. De például a közgazdaságtudományi
egyetem ott van a tőszomszédságban, és ott voltam kollégista, elég sok KTK-s ismerősöm van.
És ha az ember elmegy egy társaságba, vagy akár csak sörözni, mindig szóba kerül, hogy az
adott ember mivel foglalkozik, mi mozgatja, milyen dolgok történnek vele az egyetemen,
szóba kerülnek dolgok. Meg amúgy is, nagyjából azért tisztában vagyok az ilyen
fogalmakkal… Ha úgy adódik az életemben, hogy közgazdasági jellegű szövegeket kell
fordítanom…
I: Nem fogsz kétségbe esni.
GÁ: Nem. De azt se szeretem. Viszont ami meg vonzó, meg érdekes, a társadalomtudományi
szövegek… az meg bitang nehéz.
I: Műfordításban gondolkodtál-e? Mondtad, hogy jól fogalmazol…
GÁ: Nem tudom, eljutottam-e arra a pontra már, hogy ha azt ajánlják fel nekem, akkor… De
tulajdonképpen bármi jöhet… orvostudomány, csillagászat…
I: És ha itt végzel… Esetleg MA-n továbbtanulsz?
GÁ: Nem! Nincs már rá félévem.
I: Akkor BA-n befejezed, és megpróbálsz a fordítás területén dolgozni.
GÁ: Igen. Viszont, ha összegyűlik a keresetemből annyi, és lenne rá lehetőség, lehet, hogy
később visszajönnék egy MA-ra.
I: Értem. Mindenképpen a fordítás világában képzeled el a jövődet.
GÁ: Igen.
I: Szerinted milyen tulajdonságokkal kell rendelkezni annak, aki hivatásszerűen szeretné
végezni ezt a tevékenységet?
GÁ: Bírja a strapát, hogy hosszú ideig ugyanazzal a témával foglalkozzék, nem esik kétségbe
attól, hogy másoktól segítséget kell kérnie, mert itt bizony előfordul, akár a szerzőtől, akár a
lektortól vagy akárkitől… Nem kell félni ettől, és büszkeségből sem kell erről lemondani.
Szükség van némi alázatra. Kitartásra. Mentális frissességre. Nem szabad a végtelenségig
zsigerelni az embernek magát. Az öcsém szövegével voltam így. Az egyetem mellett nem
nagyon volt időm. Megcsináltam négy oldalt, és arra gondoltam, hogy ezzel így sohasem
fogok végezni. Nézem a szerdai órarendemet, és láttam, hogy csupa olyan óra van, amin még
255
mindegyiken ott voltam. Fogtam a szerdai napot, kijelöltem, hogy ezt most egyben
lefordítom, nem érdekel… És látástól mikulásig, reggel 7-től este 9-ig. Nonstop.
Megcsináltam, de utána úgy voltam, hogy nem akarok többet találkozni vele. Mondtam az
öcsémnek, hogy még egy ilyen, és elég csúnya helyre foglak elküldeni. Aztán persze 2 hétre
rá jött egy rövidebb, 3-4 oldalas, és akkor jött az a mondat, hogy oké, megcsinálom, de csak
mértéktartással.
I: Mit értesz pontosan mértéktartás alatt?
GÁ: Embere válogatja, szerintem.
I: Nekem a nagyon szoros és következetes időbeosztás szokott segíteni. Én így tudok dolgozni.
GÁ: Ez nálam nem működne.
I: Mi az, ami még kell a tevékenységhez? Milyen fajta tudás, képesség?
GÁ: Ha magamat veszem alapul, akkor… Nekem szegényes a szókincsem. De ez nem
akadályoz abban, hogy fordítsak.
I: Nekem nem úgy tűnik.
GÁ: Angolul igen.
I: A szakirodalom szerint fontos legalább két nyelv, a forrásnyelv és a célnyelv jó szintű
ismerete.
GÁ: Szerintem az elég, ha a magyart elég jól tudom.
I: Ezért fordítasz szívesebben angolról magyarra.
GÁ: Igen. Érdekes módon mégis azt veszem észre az órákon, hogy a magyarról angolra
fordításaim sokkal jobban sikerülnek. És ez aláássa kicsit a magyar tudásomba vetett
önbizalmamat.
I: Milyen fajta más tudás kell a fordításhoz?
GÁ: Alvás. Tudjon az ember aludni. Komolyan mondom. Hogyha az ember nem tudja… én
nagyon szerencsés vagyok abban, hogy rohadt rövid idő alatt kipihenem magam. Nekem 3 óra
alvás annyi, mint másnak 7.
I: De mondjuk, a japán gazdaságról kell fordítani egy szöveget. Akkor elég-e az a szókincs,
amit szótárban megnézel, elegendő-e a nyelvi szabályok ismerete?
GÁ: Nem, hát kontextusban kell a dolgokat látni. Azért, hogyha az ember nincs képben arról,
hogy mi a helyzet Japánban…
I: Vagy, hogy esetleg teázás közben kötik az üzletet…
GÁ: Jaj, tényleg, a kulturális különbségek!
I: Tehát nem árt, ha az ember ismeri az adott kultúrát…
GÁ: De annak utána tud nézni.
256
I: Igen. Utána tud nézni, és kell is. De hol? A társaid azt mondták, hogy „Our best friend is
the Google”. Erről mi a véleményed?
GÁ: Nem tudom, én azért büszke vagyok rá, hogy annyi mindennek nem kell utánanéznem…
I: Jó. Azt mondtad, hogy magyarról angolra fordításhoz nem elég jó az angol szókincsed. Ha
angolra fordítasz, akkor milyen eszközöket használsz a fordításhoz?
GÁ: Magyarról angolra… Jó kis papír alapú szótár, online szótár…
I: Használsz papír alapút még?
GÁ: Persze.
I: Másik két diáktársad nem használ papír alapút, csak elektronikusat.
GÁ: Miért? Valamiért, nem tudom, megvan az a… nem is tudom… megbízhatósága. Bár ott
is vannak kreténségek, van, ami hiányzik belőle. Mégis, van, amikor könnyebbnek érzem,
hogy fogom és lapozom. Úgy van ez, mint az írással. Az emberek el fognak felejteni írni,
mert mind csak gépezünk, gépezünk…
I: Ha kapsz otthonra egy feladatot, egy fordítást magyarról angolra, mit készítesz oda magad
mellé?
GÁ: Innivalót. Számítógépen csinálom. Szótárt nem mindig, elmegyek érte, ha kell. Igazából
ami fontos még, az egy kényelmes ülőpozíció.
I: Plusz gondolom, amit megtalálsz az interneten.
GÁ: Az az elsődleges forrás, igen.
I: Kétnyelvű szótárakat használsz, vagy inkább egynyelvűeket?
GÁ: Kétnyelvűeket. Egynyelvű is van, de a szinonimákat könnyebben megtalálom például az
interneten.
I: Mi a helyzet a korpuszokkal?
GÁ: Hm… (Hosszú hallgatás).
I: Az oktatás során találkoztatok korpuszokkal? Foglalkoztatok ezzel a témával?
GÁ: Hm… Talán a Cs. tanárnő óráján volt róluk szó.
I: Akkor legalább a tananyagban benne van. Ilyeneket tanultok-e, hogy fordítási módok,
fordítási technikák, fordítási stratégiák?
GÁ: Igen. Az a baj, hogy… ezeknél…
I: A stratégia az mire vonatkozik?
GÁ: Hm…
I: Arra, hogy bizonyos dolgokat, szófordulatokat hogy fogsz lefordítani. Vannak kedvenc,
bevett, hogy is mondjam… fogásaid?
257
GÁ: Vannak kurzusok, amelyekből kihalászhatjuk a számunkra hasznos metódusokat, de
amúgy…
I: Ösztönös?
GÁ: Igen.
I: Mi a helyzet, ha olyan szöveget kapsz, ami tele van jogi szófordulatokkal, vagy egy
általános szöveget, de az tele van idiómákkal?
GÁ: (Nevetés) Alaptörvény??
I: Akár. Vagy például egy szövegben azt kell lefordítanod, hogy a „fürdővízzel együtt kidobták
a gyereket”.
GÁ: Micsoda?
I: Hát, van egy ilyen magyar szólás. Vagy, hogy az „elkövetőnek bottal üthették a nyomát”.
Vagy „megette a kutya a telet”.
GÁ: De nem jogi szövegben, ugye?
I: Nem, nem.
GÁ: Mert azért válasszuk szét… ilyen jogi szövegben nem fordulhat elő. A többségnek
szerintem van hivatalos fordítása, csak azt nagyon nehéz megtalálni. Másrészt, van amikor
saját magunknak kell kitalálni, főleg versfordításban.
I: Vagy másik irányban. „I am just pulling your leg” és hasonlók.
GÁ: Most az a példa jut az eszembe, hogy az A. C. R-nek az egyik fordítási feladatában volt
egy olyan, hogy törpe bögre, görbe bögre. Az első dolgom az volt, hogy keressek angol
nyelvtörőt. Nézem, nézem, találtam is párat. Az „r” betű, ami nagyon ilyen domináns benne.
Találtam is egy angolt, nem emlékszem rá pontosan, amiben sok „r” volt, aztán a törpe szóval
játszadoztam. Magyarul szoktam kísérletezni szóalkotással, hát most az angolban is kénytelen
voltam. A gnóm szóval próbálkoztam. Igen, „yellow roller, lower roller” volt, amit végül
használtam. A „yellow”-t kicseréltem „gnomer”-re.
I: És elégedett volt vele a tanár?
GÁ: Nem.
I: Végül is megadtad a választ arra, amit kérdeztem. Próbálsz valamilyen ekvivalenciát
keresni…
GÁ: Igen. De hát ez magától értetődik.
I: Ez azért nem egészen biztos.
GÁ: És ha magyarra kell fordítani, nyilván egyszerűbb.
I: Összegezve, milyen tevékenységekből áll egy tipikus gyakorlati óra? Milyen feladatokból?
GÁ: Igazából ellenőrizzük a feladatokat.
258
I: Tehát van egy feladat, valamilyen fordítás…
GÁ: Igen, és akkor a tanárnő vagy a tanár… (Hallgatás.)
I: Gondolj egy órára, hogy mit csináltok. Fordítotok?
GÁ: Nem, órán nem fordítunk élesben. Talán egyedül csak az A-nál.
I: De akkor mégis, milyen feladatokat kaptok órán?
GÁ: Ott igazából elemzés van. Ott… A gyakorlati rész az az otthoni feladat, és órán szanaszét
elemezzük a hibáinkat, hol tévesztettük, hogyan, ezt milyen módon lehet másképp…
I: És ez hogy történik?
GÁ: Hát szerencsére…
I: Egyáltalán, hányan vagytok a csoportban, hány ember munkáját szeditek szanaszét?
GÁ: Van olyan csoport, ahol csak ketten vagyunk.
I: Kettő fordítást lehet ellenőrizni egy órán, gondolom.
GÁ: Igen, de majdnem mondatokra lebontva csináljuk. Mindig egy emberét kivetítik, azt
nézzük, és össze tudjuk vetni a sajátunkkal.
I: Hasznosnak tartod ezt a fajta ellenőrzési módot?
GÁ: Nem nagyon tudnék jobbat mondani, bár most eszembe jutott valami. Rákérdeztél az
elején, hogy milyen elvek mentén szoktam én fordítani. Javításnál, például, sokszor van az,
hogy bekiabálással megy a dolog, és ezzel nincsen semmi baj. Hanem hogy egy diák
fordítását kivetítik, abban van valami, ami helyett más szót lehetne használni, és akkor a tanár
felszólít egy másik diákot, hogy nálad ez hogy néz ki, és akkor elmondja, és… na jó. Ez jó.
Vagy mondja a tanár, hogy ő hogyan csinálná ezt a részt. És ilyenkor mindig azt nézem, hogy
mindenki mondja a magáét, és nem veszik figyelembe, hogy szerencsétlen diák, akinek a
fordítását szanaszét szedjük, az ott ül… és hogy amit csinálunk, az bizonyos szempontból a
fordítás fordítása. Hogy nemcsak az alapművet vesszük alapul, hanem a diák lefordított
munkáját.
I: Lehet, hogy jobb lenne, ha az eredeti szöveget vetítené ki a tanár?
GÁ: Nem, nem erre akartam kilyukadni. Amit mondani akarok, az az, hogy én amikor
próbálok belejavítani másnak a fordításába, nem a saját megoldásomat kiabálom be, hanem
megpróbálom azt a még mindig jó megoldást bemondani, ami a legkevesebb változtatással jár
az ő fordításában. Hogy ne érezze azt már, hogy rossz, amit csinált, mert látom, hogy nem
rossz, és jó vonalon indult el. Próbálok úgy változtatni rajta, hogy ne kelljen sokat változtatni
rajta.
I: Tehát a gyakorlati órák többsége azzal telik, hogy van egy fordítási feladat, amit otthon
megcsináltok, és az órán azt elemzitek.
259
GÁ: Igen.
I: Mire kapjátok a jegyeteket?
GÁ: Egyrészt ugye órai részvétel, de az nem számít igazán, az elvárt dolog, bizonyos
hiányzási határ felett nem is fogadják el a kurzust. Aztán vannak ugyebár ezek a fordítási
feladatok, amik közül kettőt vesznek figyelembe.
I: Azokat is otthon csináljátok?
GÁ: Igen. Csak vizsgaszituációban csináljuk laborban a fordítást. Az értékelés azonban úgy
megy, mintha vizsga lenne. Úgy tűnik, így több ideje van az embernek, de nem, mert a vizsga,
ahogy megtudtam, az 6 órás, két fordításra. Nem tudom, oda kispárnát kell vinni, hogy végig
tudja ülni az ember. Ez szerencsére csak a záróvizsga.
I: Tehát amíg a záróvizsgáig eljutsz, a jegyeket otthoni munkára kapod, amit leadsz a
tanárnak. Csak jegyet kapsz, vagy értékelést is kapsz mellé?
GÁ: Mivel az értékelés már az órán megtörténik…
I: Vizsgafeladat esetén is? Vagy olyan feladatnál, amit a jegyért írsz otthon?
GÁ: Nem… Például a Cs. tanárnőnél úgy van, hogy a gyakorlati feladatot mindig megkapjuk
kijavítva, kommentekkel. Azt kinyomtatva viszem én már az órára, és az alapján dolgozunk.
Ha kérdésünk van, arra az órán még választ kapunk.
I: Tehát mindenki személyre szabott értékelést kap.
GÁ: Igen.
I: Az jó dolog, nem?
GÁ: Miért lenne rossz?
I: Van-e olyan, hogy egymás munkáit értékelitek?
GÁ: Az fura lenne.
I: Miért? Az is egy értékelési mód.
GÁ: Persze, ha szorosan vesszük, akkor persze, akkor szerintem mindenki minden órán
elmondja, hogy nekem ez tetszik, vagy nem tetszik, másképp csinálnám…
I: De gondolom, ez nem számít bele abba, hogy milyen jegyet kaptok?
GÁ: Logikailag… ez miért számítana?
I: Nekünk van olyan óránk, ahol a diáktársak értékelése adja az 50%-ot. Ilyen, ezek szerint
nincs nálatok.
GÁ: Hát ez elég borzalmasan hangzik. Nem azért, hogy a diáktársaink véleménye nem
számít… Szavamra, ez bullshit. Nonsense. Ezzel most nagyon megleptél. Illetve, tudtam
ilyet… Mármint hogy diáktársaknak van ilyen feladata, de hogy ez beleszámítson a jegybe…
260
I: Visszatérve az ellenőrzéshez… Gyakorlatilag minden órán mindenki kap feedback-et arra,
amit otthon csinál.
GÁ: Igen.
I: Mi az, amit igazán motiválónak találsz az óráidban? Amire azt mondod, hű, ezért szívesen
járok ide.
GÁ: Hm… Nem is tudom.
I: Vagy valami, amit utálsz, amitől előre borsózik a hátad? Amit unsz.
GÁ: B. tanár úr óráit nem szívesen hagyom ki.
I: Ez a tudása vagy a személyisége miatt van így?
GÁ: Both. Leginkább, mert korrekt. Mindig megmondja, mi a baj, és adok a véleményére.
I: Ez mindegyik fontos. És tanulsz is tőle, gondolom.
GÁ: Persze.
I: A tanár tudása és személyisége is fontos.
GÁ: Ami a személyiséget illeti… Van egy oktató, ugyebár, aki nagyon nehéz tárgyat tanít, és
nem bírom. Szó szerint irritál. És egyszer leültem vele dumálni, és megbuktatott a
tantárgyából, amivel nem is volt bajom, mert teljesen korrekt volt, de valahogy most már úgy
vagyok vele, hogy a személyiségi kérdésből nem szabad ügyet csinálni. Hogyha ebből van az
embernek problémája, azt mihamarabb tegye félre, mert azt tudom, hogy a tárgyi tudás
megvan mögötte, meg hogy ért hozzá. Hogy bírom, vagy nem bírom a fejét, az nem ide
tartozik.
I: Számít az, hogy kikkel ülsz ott az órán? A diáktársaidra gondolok.
GÁ: (Hosszú hallgatás). A rögtönzött válaszom az lett volna, hogy nem. De most, hogy
jobban belegondolok… igen. De főleg azért, mert vannak olyan diáktársaim, akik úgy…
irritálnak. De persze számít, hogy ott vannak, mert máskülönben a tanár nekem egyedül
tartaná az órát, meg, tanulok is a hibáikból. Másrészt meg olyan megoldásokat hoznak fel,
amire én nem gondolok… ez a tipikus több szem többet lát… Van egy probléma, amire
mondanak valamit, és teljesen ki vagyok akadva, mert az nekem nem jutott eszembe…
Sokszor kapom magam azon, hogy mások megoldásai által javítom a saját fordításaimat, ami
szerintem az egyik alapköve ennek a szakmának.
I: Számít az, hogy mit kell fordítani? Az egyik tanár mindig jó szöveget hoz, a másik nem
annyira jót…
GÁ: Arról nem tehetnek. A téma adott… ööö…
I: Van-e olyan, hogy valamilyen szinten beleszólhattok abba, hogy mit fordítsatok?
261
GÁ: Volt. A B. tanár úrnál egyszer egy alapozó I angolról magyarra fordításnál nekünk kellett
szöveget javasolni.
I: És az motiváló?
GÁ: Őszintén? Nem változtatott semmin.
I: Van-e valami, ami csalódást okozott az elvárásaidhoz képest?
GÁ: A specializációval kapcsolatban?
I: Igen.
GÁ: Hm… Ha sokáig kell gondolkozni az embernek, akkor azt hinné, hogy nincs, de azért
keresek valamit.
I: Nem muszáj… Esetleg van-e valami, amit tanítani kéne, de nincs… Amit hiányolsz.
GÁ: Na ez már így egyértelműbb nekem. Igazából van még két fajta óra, ami nem volt, tehát
szépirodalmi és számítástechnikai szövegek vannak vissza… a skála széles, nagyon bővíteni
nem hiszem, hogy kéne. Öö… (madárcsicsergés).
I: Ha ennyit kell gondolkodni, akkor nagyjából rendben van a kurzus, nem?
GÁ: Igen, nagyjából rendben van.
I: Most egy nagyon okos kérdés következik. Ha osztályoznod kellene egytől ötig, az óráidat
hogyan értékelnéd?
GÁ: Átlagot mondjak?
I: Igen. Egy számot egy és öt között.
GÁ: Az nehéz. Hm…
I: Ha mindig mindennel elégedett vagy, az nyilván 5, ha semmivel sem vagy elégedett, az 1…
GÁ: Ja, ez ugyanaz, mint a felmérésekben feltett kérdések… Adjunk neki egy 4-est.
I: A tanáraid felkészültek?
GÁ: Afelől nincs kétségem. Teljes mértékben.
I: Te, mint diák mindent megteszel azért, hogy jó fordító váljon belőled?
GÁ: Erre is adok egy 4-est magamnak.
I: Ha lehetőséged lenne rá, jelentkeznél MA képzésre fordításból? Bár említetted, hogy ez
nálad főleg anyagi kérdés.
GÁ: Attól függ, mit takar. Most meg tudod nekem mondani, hogy ez a képzés mit takarna?
Mi szerepelne benne?
I: Azt gondolom, gyakorlatilag minden, ami fordítással kapcsolatos. Ha megnézed az ELTE
leírását… Részben biztos szerepelne benne, amit most tanultok, plusz sokkal több elmélet
szerintem…
GÁ: Az elég para lenne, mert a fordítás az, hogy gyakorlat, gyakorlat, gyakorlat…
262
I: Pedig vannak, akiknek fontos az elmélet, mert tanítani, publikálni akarnak.
Megkerülhetetlen szerintem MA szakon. De ha valaki fordítóként akar dolgozni, mint te is
szeretnél… Úgy tudom, ez a specializáció nem ad neked ilyen papírt.
GÁ: (Hosszú hallgatás.) Szerintem a záróvizsga benn lesz az indexben, de valóban nem ad
szakfordítói képesítést.
I: Másképp kérdezem. Tervezed-e olyan szakképesítés megszerzését, amivel a fordítást
hivatásszerűen művelheted?
GÁ: Talán a lektorálást végezhetem anélkül is. Valamelyik tanárunk mondott valamit arról,
hogy az egyik egyetemen lehet ilyen vizsgát tenni úgy, hogy gyakorlatilag bárki besétálhat az
utcáról…
I: És úgy tervezed, hogy ha lehet, teszel ilyen vizsgát?
GÁ: Igen.
I: Van-e valami, amiről nem beszéltünk, de amit fontosnak tartasz vagy kiemelnél a
specializációval kapcsolatban?
GÁ: (Hosszú hallgatás). Talán az, hogy megtanultam jobban beosztani a fordításhoz szükséges
időt. Ez fontos, mert a fordítás időigényes feladat. A kapott szövegek alapján tudom, hogy
egy fordítás kb. 1,5 óra, és így jobban tudok gazdálkodni a napommal is. Szerintem a fordítás
határozottan segít ebben.
I: A szókincsed fejlődött-e a fordítással?
GÁ: Szeretném azt mondani, hogy igen, de leginkább a passzív szókincsem. Az szerintem
300 százalékkal bővült. Hogy aztán ebből mennyit tudok aktivizálni, az már más kérdés.
I: Nos, nem akarom az egész délutánodat elrabolni; nagyon elszállt az idő. Köszönöm, hogy
vállaltad a beszélgetést, és hogy ennyi mindent elmondtál.
GÁ: Szívesen.
263
Appendix B: Motivation and autonomy in Translation Studies classes. Student
Questionnaire
Dear Student,
I would like to ask you for your help in my research aiming to explore motivation, content
and assessment issues in translation classes. Please fill in the questionnaire by choosing the
best answers. I will keep your answers confidential.
Thank you for your cooperation,
H. Prikler Renáta PhD student
University of Pécs, Doctoral Programme in English Applied Linguistics and TEFL/TESOL
A) Language competence
1. Which year are you in? ______________________________________________
2. How long did you study English before entering university? (years)___________
3. What is your English language proficiency level on the CEFR scale? Please, check the
table below and circle your level.
B2 C1 C2
Level: B2
Characteristics: Can understand the main ideas of complex text on both concrete and abstract topics, including technical discussions in his/her field of specialisation. Can interact with a degree of fluency and spontaneity that makes regular interaction with native speakers quite possible without strain for either party. Can produce clear, detailed text on a wide range of subjects and explain a viewpoint on a topical issue giving the advantages and disadvantages of various options.
Level: C1
Characteristics: Can understand a wide range of demanding, longer texts, and recognise implicit meaning. Can express him/herself fluently and spontaneously without much obvious searching for expressions. Can use language flexibly and effectively for social, academic and professional purposes. Can produce clear, well-structured, detailed text on complex subjects, showing controlled use of organisational patterns, connectors and cohesive devices.
Level: C2
Characteristics: Can understand with ease virtually everything heard or read. Can summarise information from different spoken and written sources, reconstructing arguments and accounts in a coherent presentation. Can express him/herself spontaneously, very fluently and precisely, differentiating finer shades of meaning even in more complex situations.
4. What other languages have you been learning, and for how long?
Language Years
264
5. How would you specify the strengths and weaknesses of your English language
competence? Please, tick the areas listed as your strength or weakness.
Strengths Weaknesses
reading reading
writing writing
listening listening
speaking speaking
pronunciation pronunciation
grammar grammar
vocabulary vocabulary
translation / mediation from Hungarian to English    translation / mediation from Hungarian to English
translation / mediation from English to Hungarian    translation / mediation from English to Hungarian
other: other:
B) Translation as a specialization
6. Why did you choose translation as your specialization? List three reasons.
a. _______________________________________________________________
b. _______________________________________________________________
c. _______________________________________________________________
7. Did you have any previous experience in translation before you began studying
translation at the university? If so, please, specify it by circling the answers that fit you
most.
a. doing translation tasks in English classes
b. translating / interpreting for school events
c. working for a translation agency
d. translating literary texts for publishing companies
e. participating in a course, namely:
_______________________________________________________________
f. other, namely: ___________________________________________________
8. Please mark on a scale of 1 to 4 how easy you find translation. Circle the most
appropriate answer and explain why. (1 – the easiest; 4 – the most difficult)
a)____________________________________________________________________
b)____________________________________________________________________
265
9. Please mark on a scale of 1 to 4 how difficult you find the listed activities when
translating a text. Circle the answers that fit you the most. (1 – the easiest; 4 – the most
difficult)
Activity    Difficulty
spelling    1 2 3 4
translating words    1 2 3 4
translating set lexical phrases (including idioms)    1 2 3 4
sentence structure    1 2 3 4
word order    1 2 3 4
addressing    1 2 3 4
preserving formality    1 2 3 4
preserving genre characteristics    1 2 3 4
cultural, social and professional differences    1 2 3 4
Other:    1 2 3 4
10. Are you planning to continue your translation studies at MA level? Please, circle the
answer that fits you, and give your reasons, too.
a) Yes b) No
________________________________________________________________________
________________________________________________________________________
11. In what ways do you think you will be able to use your translation skills after
graduation?
________________________________________________________________________
________________________________________________________________________
266
14. Is the number of classes provided in the program enough to improve your
translation skills? Please, circle the answer that fits you.
a) Yes b) No
15. Name the activities / tasks you find the most useful for developing your translation
skills. Give your reasons, too.
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
16. Name the activities / tasks you find the least useful for developing your translation
skills. Give your reasons.
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
17. In what ways do you think the courses could be more useful?
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
____________________________________________________________________________
D) Learner autonomy
18. How do you prefer to work in your translation classes? Circle the option you find most useful
and give your reasons.
a. on my own, because _________________________________________________
b. in pairs, because ____________________________________________________
c. In groups, because __________________________________________________
d. Directed by the teacher, because _______________________________________
__________________________________________________________________
19. How do you solve a translation problem? Circle the answers that fit you the most.
(0 = never; 4 = generally)
Strategy    Frequency
I ask one of my teachers.    0 1 2 3 4
I ask another student.    0 1 2 3 4
I ask a professional translator.    0 1 2 3 4
I try to find the best equivalent in a printed dictionary.    0 1 2 3 4
I use online tools (Google, dictionaries).    0 1 2 3 4
I consult a corpus.    0 1 2 3 4
I simply omit the word/expression.    0 1 2 3 4
I use my own words to give back the essence of the problematic part.    0 1 2 3 4
267
Other _________________________ 0 1 2 3 4
20. What translation tools do you use when you do your home assignments? Number the
options to mark their frequency. (0 – never; 7 – generally)
21. How useful do you find the listed classroom activities and home assignments? Please,
circle the option that fits you most. (0 – not useful at all; 4 – the most useful)
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
0 1 2 3 4
f) other,______________________________________________________
0 1 2 3 4
22. Which of the above listed activities do you find the most useful of all? Why?
_____________________________________________________________________
_____________________________________________________________________
268
23. How do you use the translation skills you learnt outside the classroom? Give a few
examples.
_____________________________________________________________
_____________________________________________________________________
24. How do you prepare for an exam? Circle the option that fits you most.
25. How do you benefit from the evaluation received for your exam tasks? ____________
_____________________________________________________________________
26. What do you think could help you to become more autonomous in your translation
studies? ______________________________________________________________
_____________________________________________________________________
27. There are nine statements here regarding ways to learn translation, and two different
columns. The one on the left asks how responsible you think you should be for each of
these. The one on the right asks to what extent you actually do it. Please,
underline the number you find the most appropriate.*
How responsible did you feel for it?    Statement    To what extent did you really do it?
1. 1 2 3 4    identifying my own strengths and weaknesses    1 2 3 4
2. 1 2 3 4    setting my own learning goals    1 2 3 4
3. 1 2 3 4    deciding what to learn outside the classroom    1 2 3 4
4. 1 2 3 4    evaluating my own learning process    1 2 3 4
5. 1 2 3 4    stimulating my own interest in translation studies    1 2 3 4
6. 1 2 3 4    learning from my peers, not just from the teachers    1 2 3 4
7. 1 2 3 4    becoming more self-directed in doing translations    1 2 3 4
8. 1 2 3 4    facing difficulties in translations on my own rather than waiting for solutions from my teachers    1 2 3 4
9. 1 2 3 4    offering opinions about what to learn in the classroom    1 2 3 4
269
E) Feedback and assessment
28. Which type of evaluation is the most frequent in your translation classes? Please,
circle the options to mark frequency. (0 = never; 4 = the most frequent)
29. How useful do you find the different types of evaluation for your development?
(0 – not useful at all; 4 - the most useful)
32. How does the feedback you get help your development?
____________________________________________________________________________
____________________________________________________________________________
35. Has it ever happened to you that you lost interest in translating?
a. If yes, what was the reason for it? ____________________________________
__________________________________________________________________
b. How did you get over it? ___________________________________________
__________________________________________________________________
270
Appendix C: The institutional background of translator training in Hungary:
universities
Institution    BA    Post-gradual training    MA    PhD
271
University, Eger, Faculty of translation texts on cultural history & Translation
Humanities (English EU English and
Studies BA) Interpreting
specialized (English,
translation German);
(German translation
Studies BA) specialization
Eötvös Loránd University, English specialized translation of MA in Translation
Budapest, Faculty of translation Economic and Legal texts Translation Studies
Humanities skills (English and lector and
Studies BA) specialized translation of Interpreting
Economic and Legal texts (English,
training for terminologists German,
specialized translation and French,
audiovisual translation Chinese);
specialized translation of translation
texts on social sciences and and
economics interpreting
specialized translation and specialization
interpreting for Slavic and
Baltic languages
training in translation and
interpreting for court and
public authorities
European Masters
Conference Interpreting
Kodolányi János University European Masters
of Applied Sciences Specialized Translation
(English and German)
specialized translation of
texts on economics and
social sciences
specialized translation of
texts on economics and
social sciences &
interpreting
Károli Gáspár University of translation and specialized translation in
the Reformed Church in English for two foreign languages in
Hungary, Faculty of specific the field of humanities,
Humanities purposes religion, law, economics,
(English technical sciences, EU &
Studies BA) literary translation
translation and (language options: English,
German for French, Dutch, Japanese,
specific Chinese, German)
purposes
(German
Studies BA)
University of Miskolc, translation specialized translation of MA in
Faculty of Humanities specialization texts on economics and Translation
(English & social sciences and
German) Interpreting
(English,
German);
translation
specialization
272
University of Miskolc, specialized
Faculty of Law translation of
legal texts as
specialization
for medical
students
University of Nyíregyháza specialized translation of
texts on economics and
social sciences &
interpreting
University of West-Hungary, specialized translation of
texts in the field of
Benedek Elek College of technical and agricultural
Pedagogy sciences, IT and economics
& terminologist training
specialized translation of
texts in the field of
technical and agricultural
sciences, IT and economics
& interpreting
specialized translation of
texts in the field of
humanities, social sciences,
education, arts, art
mediation & terminologist
training
specialized translation of
texts in the field of
humanities, social sciences,
education, arts, art
mediation &interpreting
specialized translation of
texts in the field of natural,
medical and sport sciences
& terminologist training
specialized translation of
texts in the field of natural,
medical and sport sciences
& interpreting
University of Pannonia specialized translation of MA in
Faculty of Modern Philology texts on economics and Translation
and Social Sciences social sciences and
Interpreting
(English,
German,
French);
translation
and
interpreting
specialization
273
Pázmány Péter Catholic literary and literary translations of MA in
University, Faculty of special English texts Translation
Humanities translation specialized translation of and
(English texts in the field of Interpreting
Studies BA) humanities, social sciences, (English,
law and economics German,
(English) French,
Italian,
Spanish);
translation
and
interpreting
specialization
University of Pécs, Faculty Specialized specialized translation of
of Humanities translation texts in the field of
(German) humanities (French-
(German Hungarian)
Studies BA) specialized translation of
texts in the field of
humanities (Italian-
Hungarian)
translation as
specialization
(English
Studies, BA)*
specialized translation of
texts in the field of
humanities (French)*
University of Pécs, Faculty specialized translation of
of Medicine medical texts &
interpreting (English)
Semmelweis University, specialized translation of
Faculty of Health and Public medical texts &
Services interpreting (English)
Szent István University, specialized translation of
Faculty of Food Science texts on agriculture and
natural sciences
Szent István University, specialized translation
Faculty of Economics and
Social Sciences
University of Szeged, “Basics of specialized translation of MA in
Faculty of Humanities translation and texts on economics and Translation
interpreting” social sciences and
specialization specialized translation of Interpreting
(English texts on economics and (English,
Studies BA) social sciences & German,
interpreting French);
Translation and specialized translation of translation
interpreting texts on natural sciences and
specialization (English-Hungarian) interpreting
(German specialization
Studies, BA)
274
University of Szeged, specialized translation of
Faculty of Law Anglo-Saxon and English
legal texts for specialized
lawyers
specialized translation of
Anglo-Saxon and English
legal texts for legal
consultants
specialized translation of
French legal texts for
specialized lawyers
specialized translation of
French legal texts for legal
consultants
specialized translation of
German legal texts for
specialized lawyers
specialized translation of
French legal texts for legal
consultants
University of Szeged, specialized translation of
Faculty of Medicine medical texts &
interpreting (English)
Institutions total: 26 faculties / units of 17 universities; BA total: 13; Post-gradual total: 56; MA total: 8; PhD total: 1
* These institutes are new additions; they are missing from Vermes Albert’s table.
275
Appendix D: The assessment scale used for assessing exam translations. Teacher
interviews
Dear Colleague,
When studying the assessment scales used in FLA, I focused on those used to assess
translations, and although I could find a few, they admittedly could not fulfill their aim.
That gave me the idea to try to develop a scale which targets the written product – that is,
the translations – of TS students.
I would like to ask you to participate in my study, and answer my questions concerning the
methods and scales you use when assessing translations. If you could devote a little part of
your time to this problem, I would be happy to call on you in one of your office hours.
------------------
How often and how do you assess your students during the term at your translation studies
classes?
What criteria do you follow when you evaluate the written assignments?
Although I understand that the final translation exam was cancelled a few semesters ago, I am
interested in your opinion on the assessment scale that was used for exam translations.
How is this assessment different from the during-the-term evaluations?
How does it meet your expectations? Is it appropriate in every respect in your opinion for
assessing BA translations?
276
What are the difficulties/challenges of using it?
What modifications would you recommend, and why, to make it more suitable/appropriate for
your purposes?
Transcription of the interview with Rater 1
16/12/2019
Duration: 22 minutes
I: Köszönöm szépen, hogy ilyen gyorsan válaszoltál a levelemre, és lehetővé tetted ezt a
beszélgetést. Mivel mondtad, hogy kevés időd van, rögtön fel is tenném az első kérdésemet.
Évközben milyen gyakran és milyen módszerekkel értékeled a hallgatóidat a fordítástudomány
órákon?
R1: Kétféleképpen értékelek. Az egyik az, amit használtunk, hogy nevezzük… kishiba-
nagyhiba rendszer. Ja… Mire gondolsz pontosan?
R1: Az van, hogy [a hallgatók] minden órára csinálnak egy fordítást, majdnem minden héten,
és azt úgy értékelem, ha végső értékelésre gondolsz, hogy azonosítom a kishibákat és
nagyhibákat, összesítjük, ahogy a vizsgán is, plusz kapnak visszajelzést minden egyes
problémáról, buborékokban. How often… minden alkalommal, amikor megírnak egy
fordítást. Nagyjából hetente, minden feladatnál. E-mailben küldik el a fordítást, és a feedback
segít abban, hogy fejlődjenek. A feedbacket a konkrét megoldásaikra kapják.
I: Hogyan történik a hibák megbeszélése?
R1: Elküldik e-mailben a fordításokat, akkor van rá egy napom, mindenkiét kijavítom,
kommentelem, h, H, aztán visszaküldöm nekik emailben az óra előtt, így látják a saját
fordításaik kijavítását, és az órán ezeket a tipikus meg nem tipikus hibákat átbeszéljük.
Kigyűjtöm, PPT-t készítek, így szoktam. Mindenki kap egyéni visszajelzést. Az órán rá is
kérdezhetnek azokra a hibákra, amit a sajátjukban bejelöltem.
I: Megkérdezhetem, hogy mi a neve a szemináriumodnak?
R1: Gazdasági fordítás. Fordítási gyakorlat. Nagy része elmegy a hibák megbeszélésével.
Rengeteg dolog előjön ezek kapcsán, hogy hogyan kellett volna fordítani, mi lett volna a
megoldás, hogy ne legyen ez a probléma. Nagyon alaposan átnézzük.
277
I: Mi a célod az értékeléssel a fordítás órákon? Van, amire nagyobb hangsúlyt fektetsz?
Nyelvtan, szókincs…
R1: Mindenen hangsúly van, egy fordításnál minden számít. Nem mindegy, hogy milyen
szövegről beszélünk, hogy milyen célból készül egy fordítás. Egy gazdasági szövegnél
mindennek súlya van. A cél az, hogy ne legyen hiba, főleg, hogy nagyon jól utána lehet nézni
mindennek. Ebben a nagy hiba-kis hiba rendszerben egyébként le van írva, bár nem túl
részletesen, és én ezeket hangsúlyozom. A nagyhiba az tényleg nagyhiba, amikor akár
nyelvtani, akár szókincsbeli, akár strukturális, érthetetlen, vagy félreértést eredményez.
I: Milyen kritériumok szerint értékeled a fordításokat?
R1: Úgy általában? Mindenre. Arra, hogy a fordítás legyen tudatos, hogy úgy tudják
megcsinálni, hogy ne legyen hiba. Én tanítok nekik stratégiákat, tudják, hogy mindent
ellenőrizni kell, de végső soron az a fontos, hogy ne legyen értelmetlen mondat, és hogy az
értelmét mindennek találják meg, ne csak felszínesen. Kutasson, nézzen utána, ha nem tiszta
valami.
I: Ismerik a hallgatók az útmutatóban megállapított követelményeket?
R1: Persze. Általában elérhetővé tesszük nekik, fent van a honlapon. Nyilván a kurzuson nem
csak ez számít, de ez jó visszajelzés arra, hogy hogy sikerülne a vizsga. Nem tudom, ki
fejlesztette és mi alapján, de azt hiszem, ez ELTÉ-s értékelő, amiben főleg a nagyhiba esik
latba. Minden annak számít, ami az értelmet megváltoztatja, legyen az nyelvtani,
központozásbeli. A hallgatók tudják, hogy én ezt használom, ez alapján értékelem a
munkáikat.
I: Hogyan osztályozod a hallgatóid fordításait? Mi számít bele az értékelésbe?
R1: Hogy megcsinálja-e rendesen a feladatait, és hogy látható-e valami fejlődés. Az egész
kurzushoz való hozzáállás. Azt szoktam mondani, akkor kapnak jegyet, ha minden feladatot
megcsinálnak. Ez azért nem olyan nagy dolog. Fél oldal, nem nevezhető hosszúnak.
Volt egy magyar-angol kurzus meg egy angol-magyar kurzus.
I: Otthon bármit használhatnak a fordításhoz, ugye?
R1: Persze. Ahogy a való világban is.
I: A diákokkal készítettem egy kérdőíves felmérést. Abból egyértelműen kiderült, hogy a
diákok nagyon motiválónak találják, ha megdicsérik őket, és nagyon demotiválónak, ha a
278
hibáik kiemelésével megalázzák őket. Te ki szoktad emelni az egyéni jó, vagy rossz
megoldásokat?
R1: Minden hibát kigyűjtök PPT-be, akár egész, nagyon rosszul sikerült részeket is, és akkor
megbeszéljük együtt. Név szerint csak akkor mondom, ha valakinek nagyon jó megoldása
van. Úgy szoktam, hogy copy-paste-tel beteszem PPT-be, és akkor megnézzük. Persze,
előfordul, hogy ha nagyon béna valami, azt is szóvá teszem. Persze, inkább viccelni szoktunk
vele, nem az a cél, hogy bárkit megalázzak. Sokkal inkább megdicsérem azt, akinek nagyon
jó a fordítása. A hibák tömkelege, ezzel szemben, leginkább név nélkül jelenik meg az órán.
I: Bár már nincs vizsgafordítás, továbbra is használod a javítási útmutatót. Mi a véleményed
róla?
R1: Volt oka annak, hogy a vizsgadolgozatokat kivették a rendszerből. Az egyik az, hogy bár
megvan ez a grid, nem mindenki ragaszkodott hozzá, és nagyon szubjektíven is lehetett
végezni az értékelést. És hát a hivatalos ok az, hogy a specializáció olyan, mint egy minor, és
a minoron sincs olyan szigorlat, amely miatt egy szakdolgozatát már megírt hallgató (major
szakon megírt dolgozatról van szó) egy egészen más tárgy, jelen esetben a szakfordítói
követelmények nem teljesítése miatt ne tudna diplomát védeni.
Egyébként én X kollégával javítottam, szerintem mi tartottuk magunkat az
útmutatóhoz, soha nem egyeztettünk előre. Utólag persze mindig megbeszéltük, hogy ki mit
adott, és maximum egy jegy eltérés, ha volt köztünk. Nagyjából ugyanazt adtuk, nálunk
tényleg működött a grid.
I: Miben különbözik a vizsgadolgozatok javítása az évközbeni javítástól?
R1: Évközben sokkal több feedback-et kapnak a hallgatók.
279
kompenzálás lehetne valamiért, vagy a csuda tudja. Csak hiba, hiba, hiba, kicsi, nagy, egyéb,
A kishibák összeadódnak nagyhibává, és mindenütt megjelennek… az viszont nem jellemző
az értékelési rendszerekre, hogy ugyanígy megjelenjenek a pozitívumok is, és lehetne őket
értékelni. Különösen az olyan jó megoldásokra vonatkozik ez, amelyeket nem is lehet előre
jelezni. Pedig ez motiváló lenne, de a vizsgák sajnos nem erről szólnak. Lehetne balanszba
hozni a pozitívumokat a negatívumokkal. Számomra újfajta megoldás lenne, ha egy olyan
skála készülne, ami a pozitívumokra koncentrál. Ez nyilván nem az alapmegoldásokra
vonatkozik, hanem a kifejezetten nehezebb szerkezetekre. Szerintem ez átgondolandó.
Egyébként nekem ez a skála is könnyen használható, jól követhető.
I: Szerinted van valami, amit nem érintettünk?
I: Akkor ennyi lett volna. Köszönöm szépen, hogy időt szántál rám, és elmondtad a
véleményedet.
Transcription of the interview with Rater 2
18/12/2019
Duration: 54 minutes
I: Elsőként azt kérdezném meg, hogy évközben milyen gyakorisággal, és milyen módszerekkel
értékeled a hallgatóidat a fordítástudomány órákon?
R2: Jellemzően szemináriumokat, gyakorlatokat tartunk, három olyan szeminárium van
félévekre elszórva, számítástechnikai szakszövegek fordítása, humán- és
társadalomtudományi szakszövegek fordítása, műfordítás – ezt a hármat szoktam csinálni. A
számítástechnikai szakszöveg fordítása talán a legegyszerűbb. Ezek javításához az értékelési
skálát alkalmazom, hogy szokják, hogy lássák azt, hogy mi az a szint, amit fordítóként hozni
kell ahhoz, hogy valaki munkát is kapjon, ne csak papírja legyen. Hétről hétre fordítunk,
hasonló hosszúságú, 2000 – 4000 n hosszúságú, 1 hetes munka, megbeszélem velük, hogy
időre töltik fel, 2 nap után kell visszatölteni a kész fordítást, majd változó rendszerben
egymást lektorálják. Órán véletlenszám generátor segítségével kisorsoljuk, hogy kinek a
fordítását nézzük meg úgy, hogy mindenkinek nyitva van közben a sajátja, és
280
összehasonlítjuk a megoldásokat. Nézünk egy konkrét szöveget, abban megkeressük a
félrecsúszásokat. Én olvasom a forrásnyelvet, hangosan, és mindenki nézi hozzá a saját
fordítását, mert nagyon gyakran, ha hangosan elhangzik egy mondat, akkor derül ki, hogy
valami elcsúszott. Ezt muszáj. Számítástechnikai szakszövegeknél nyilván az a legnagyobb
kérdés, hogy lehet-e úgy fordítani, hogy valaki nem ért a számítástechnikához, nem ismeri azt
a területet, amiről fordít. Nagyon egyszerű nyelvvel indulunk.
Egy fordítást javítunk ki együtt, van egy mid-term, és van egy final test, és a kettőből
egyszer kell átmenni. A heti feladatokra nem kapnak jegyet, az gyakorlás, viszont, ha valakit
kisorsol a számítógép, és az a szövege olyan, hogy adható rá jegy, azt megtarthatja. 3
lehetőségük van jegyet szerezni, a 3 jegyből a legjobbat kapja a diák.
I: Mi a célod az értékeléssel a fordítás órákon?
R2: Egyrészt az, hogy a diákok lássák, szokják a szintet, amit fordítóként teljesíteni kell. A
másik, hogy szakmai visszajelzést kapnak arról, hogy ők hogyan látják önmagukat, a társaik
hogyan látják őket, és hogy én hogy ítélem meg a munkájukat. És ebből áll össze a végső
jegyük, amit akkor is megkapnak, ha nem feltétlenül reális.
I: Milyen kritériumok szerint értékeled a fordításokat? Mi az, ami neked, javítónak különösen
fontos egy fordításban?
R2: Elolvasom a szöveget. Az olvashatóság az utolsó szint. Ezek a hallgatók bajban vannak
az angollal, de leginkább a saját anyanyelvükkel. Más szemináriumokon egy olyan értékelési
rendszert léptettem életbe eltérő hangsúlyokkal, ami 3 komponensű. Az egyik komponens az
önértékelés, ami max. 30%-a az értékelésnek, ami arról szólt, mennyit készült az órára, és
hogy az olvasmányokból mennyit olvasott el a félév folyamán, és ez alapján készít egy rövid
szöveges önértékelést, amit aztán egy 0-30-ig terjedő skálán egy egész számmal kifejez. A
második láb a társak értékelése, akik az adott hallgató kiselőadását értékelik. A harmadik az
írásbeli munka értékelése a tanár által. Ebből a 3 komponensből áll össze egy jegy úgy, hogy
átlagot számolok a társak értékeléséből, az önértékelés %-át úgy, ahogy az önértékelő diák
megadta, beírom, és a szakmai rész értékelését is megkapják, ebből összeáll a jegy. Végül
megkérem őket, nézzék meg, hogy a 3 elem mennyire korrelál egymással. Érdekes, ahogy az
egymáshoz való viszony alakul. Ezt a módszert azonban fordításnál nem alkalmazom.
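As a rough illustration of the three-component seminar grade R2 describes above (a self-assessment given as an integer on a 0-30 scale and worth at most 30% of the grade, the average of the peers' ratings of the student's presentation, and the teacher's evaluation of the written work), the sketch below combines the three parts into a percentage and a 1-5 mark. Only the 30-point self-assessment cap and the averaging of peer ratings come from the interview; the 0-100 rating scale, the 30%/40% weights assumed for the peer and teacher components, and the grade boundaries are illustrative assumptions, not R2's actual formula.

```python
# Sketch of the three-component seminar grade R2 outlines.
# From the interview: the self-assessment is an integer on a 0-30 scale (at most
# 30% of the grade) and the peer component is the average of the peers' ratings.
# Assumptions for illustration only: peer ratings and the teacher's score are on
# a 0-100 scale, weighted 30% and 40%; the grade boundaries are invented.

from statistics import mean


def seminar_grade(self_assessment: int, peer_ratings: list[float],
                  teacher_score: float) -> tuple[float, int]:
    """Return (total percentage, 1-5 mark) from the three components."""
    self_part = min(max(self_assessment, 0), 30)     # 0-30 points, as stated by R2
    peer_part = mean(peer_ratings) / 100 * 30        # assumed 30% weight
    teacher_part = teacher_score / 100 * 40          # assumed 40% weight
    total = self_part + peer_part + teacher_part
    for bound, mark in ((85, 5), (70, 4), (55, 3), (40, 2)):  # assumed boundaries
        if total >= bound:
            return total, mark
    return total, 1


if __name__ == "__main__":
    # A student who rated themselves 24/30, received peer ratings of 80 and 90
    # for the presentation, and 75/100 from the teacher for the written work.
    print(seminar_grade(24, [80, 90], 75))   # (79.5, 4)
```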
I: Mi az akkor, amit a fordítások értékelésénél különösen fontosnak tartasz?
R2: A szakfordításról már volt szó. A humán-társadalmi szövegek fordításakor azt igyekszem
tanítani nekik, hogy jól megírt terminológia, fogalmi háttér van, amit ismerni kell, és a
legfontosabb kritérium az a gondolati pontosság. Ismerniük kell az adott tudományterületet,
annak az alapjait legalább, és itt olyan szövegrészletek kerülnek elő, ahol szükséges háttér
281
információnak a begyűjtése. Meg kell tanulniuk észrevenni, hogy ott valami van, például egy
szakkifejezés, aminek utána kell járni. Tanulnak természetesen terminológiát, és mégiscsak
anglisztika képzésről van szó, és az egyéb tárgyaikon, kultúratudomány, irodalomtudomány,
nyelvészet, tanulnak ilyen dolgokat. Az a modern kultúratudományos gondolkodás, ami az
angolszász világot 40-50 éve jellemzi, ami az irodalmat pl. a kultúra igen kicsi részének
tekinti, az egyáltalán nem jött át a tantárgystruktúrába. A bevezető kurzusok keretében
nyelvészetből, alkalmazott nyelvészetből, interkulturális kommunikációból, irodalom és
kultúra elméletből kapnak ilyen tudást, de az jellemzően elméleti szintű.
Értékelésben itt nagyobb hangsúlyt kap a gondolatiság, az érvelés, amit nehezebb
értékelni, itt a H-k kerülnek előtérbe. Itt hatalmas hangsúlyt fektetünk például a
félrefordításra, kihagyásra, betoldásra. Mindkét kurzuson csak az kap jegyet, aki minden
fordítását időben beadta. Ez azért fontos, mert a fordítók határidőre dolgoznak. Ez rászoktatás
a rendszeres munkára, egy fordító attól fordító, hogy folyamatosan munkában van, és a
készségeit karbantartja. Számítástechnikai szövegeknél a terminológia a hangsúlyosabb, itt
talán a háttértudás. Előbbi pontosabb, precízebb nyelv kellene, hogy legyen, azt könnyebben
is veszik – ha értenek a számítástechnikához.
A harmadik kurzus a műfordítás. Én azt gyakorlatilag nem osztályozom. Ez nem is
vizsgaanyag. Ez inkább jutalomjáték a képzés végén, nemcsak fordítói, hanem kreatív
feladat is, mert a jellegéből adódóan rövid szövegeket – novella hosszúság – fordítanak, 3 – 8
óra alatt. Ez félig szakma, félig művészet.
I: Használsz ezekhez javítási útmutatót? Ha igen, milyet?
R2: A műfordítás kivételével a közös skálát alkalmazom, más hangsúlyokkal, hozzáigazítva a
gyakorlati óra céljához.
I: Bár tudom, hogy a fordítás záróvizsgát kivették a szakfordítói kurzusból, érdekelne a
véleményed a vizsgadolgozatok javításához használt javítási útmutatóról.
R2: A záróvizsga azért került ki a programból, mert ez egy nagyon rövid, és külön képesítést
nem adó program, és ezért az értékelés hangsúlyait inkább a szemináriumokon és a
gyakorlatokon kellene érvényesíteni. Ez a szakfordítói értékelési skála az ELTE hasonló
értékelési skálája alapján készült, kis könnyítéssel, a hibák számában ide-oda csúsztattuk a
határokat, és kicsit más kritériumrendszerrel a Hu-En és az En-Hu fordítás esetében. Ez
gyakorlatilag megszűnt, az utolsó hallgatók esnek át ezen a szakfordítói vizsgán. Most, és
tavasszal lesz még néhány hallgató, aki vizsgát akar tenni.
Nagyon sok kritika érte kívülről-belülről ezt az értékelési rendszert, én egyetlen egy
kritikát tudok vele szemben fenntartani: túlságosan megengedő. Nemcsak az, hogy
282
megengedő, tehát nincsenek minden esetben világosan meghatározva a kritériumai.
Szögezzük le, ez egy hiba alapú értékelés. Egy nagyhiba – kishiba alapú rendszer, ezenfelül 6
kishiba egyenlő egy nagyhibával, és ez alapján van skálázva egy jegy, ami Hun-E esetében 6
nagyhibát enged meg 2000 n terjedelmű fordításban.
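The error-counting rule R2 sets out here (a major-minor system in which six minor errors equal one major error, and a 2000-character Hu-En exam translation may contain at most six major errors) can be sketched as follows. Only the 6-to-1 conversion and the six-major-error ceiling come from the interview; the mapping of the effective error total onto 1-5 grade bands is an assumption added purely for illustration.

```python
# Sketch of the major/minor error counting rule R2 describes: six minor errors
# count as one major error, and a 2000-character Hu-En exam translation may
# contain at most six major errors. The mapping of the effective error total
# onto 1-5 grade bands below is an illustrative assumption, not the official scale.

def effective_major_errors(major: int, minor: int) -> float:
    """Convert raw error counts into an effective major-error total (6 minor = 1 major)."""
    return major + minor / 6.0


def grade(major: int, minor: int, max_major_allowed: int = 6) -> int:
    """Map the effective error total onto a 1-5 grade (assumed, evenly spaced bands)."""
    total = effective_major_errors(major, minor)
    if total > max_major_allowed:
        return 1  # fail: more errors than the scale permits
    band = max_major_allowed / 4.0        # assumed: four equal bands for grades 5..2
    return max(2, 5 - int(total // band))


if __name__ == "__main__":
    # Example: 3 major and 7 minor errors -> 3 + 7/6 = approx. 4.17 effective major errors.
    print(effective_major_errors(3, 7))   # 4.166...
    print(grade(3, 7))                    # 3 under the assumed bands
```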
I: A nagyhibák elég jól meg vannak határozva.
R2: Igen. Viszont azt gondolom, hogy 2000 leütésnyi szövegben, amire a hallgatónak 3 órája
van bármely irányban, az 5 nagyhiba rengeteg. Ilyen terjedelmű szövegben megengedhetetlen
egy fordító számára. Maguk a kritériumok is pontosításra szorulnak, ezzel az értékeléssel
gyakorlatilag olyan embereket enged ki a program, akiknek esélyük sincs arra, hogy
fordítóként munkálkodjanak.
I: Miben látod az okát annak, hogy sok esetben nagy különbségek vannak az értékelők által
adott jegyek között?
R2: Egyrészt különböző elvárásokkal állunk neki ennek az egésznek. Mindenkinek más és
más fordítói tapasztalata van, fordítói gyakorlata, kérdés az, hogy ezt a gyakorlatot milyen
területen szerezte meg, milyen rendszerességgel fordít. Szerintem az a fordító, aki könyveket
fordított le… Az oktatóknál is nagyon megoszlik, hogy ki milyen területen fordított és
mennyit. Csak egy példát mondok. Visszatérő probléma volt a kollégák között, hogy egy
bonyolult mondatszerkezetnek a felbontását és feldarabolását több mondatba az egyik kolléga
szerint minden esetben meg kell tennünk, mert az a feladatunk, hogy egy felhasználóbarát
szöveget állítsunk elő. Egy más területen ez a hozzáállás megengedhetetlen. Vajon meg kell-e
őriznünk irodalmi szöveg esetében a szövegben előforduló szándékos hibákat vagy jobbra
kell fordítanunk, stilisztikailag felfelé kell-e fordítanunk, vagy meg kell hagynunk az eredeti
darabosságát, figyelembe kell-e vennünk a szövegromlást, amikor az eredeti forrásszöveg is
olyan minőségű, amilyen. Ezek mind olyan kérdések, amelyekben még egy ilyen kis oktatói
közösségen belül sincs egyetértés. Nem vagyok benne biztos, hogy gyakorló fordítóként
mindenki észreveszi például a hogy-ok használatát, azt hogy ott vesszőre van szükség. A
fáradtság, a figyelmetlenség, adott esetben más nyelvi igény is közrejátszhat abban, hogy
nagy eltérés van két jegy között.
Eltérések mindig lesznek, kérdés, hogy ezek… szerintem abban, hogy mi a H, abban
nem kellene ekkora szórást mutatni az értékelésnek, mert az jól definiált. A kishibákkal
nehezebb, sokszor az van, hogy van egy végzős hallgató, akinek a vizsga technikai
lebonyolítására két hét, a vizsgaidőszak áll rendelkezésére, és ebben az időszakban kellene
megszereznie az összes, félévre esedékes fordítói jegyet. És nekünk úgy kellene 2 vizsgaidőpontot hirdetni,
hogy az beleférjen ebbe a két hétbe, hogy ki tudjuk rendesen javítani, miközben még zajlik az
283
oktatás. Kell időt hagyni a fellebbezésre – a szórás miatt. Mindig van egy külsős, aki arra
szorítkozik, hogy a jobb eredményt adja meg a diáknak.
I: Milyen módosításokat javasolnál ahhoz, hogy a javítási útmutató jobb, objektívebb legyen?
R2: A briliáns megoldásokat ez a skála semmilyen formában nem értékeli, az ismétlődő
hibákra sem tér ki. A gyakorlatban, ha ismétlődik egy hiba, akkor azt egyszer számoljuk, mert
az a javítás szempontjából is egyszerűbb, és ez a skálában nincs benne, csak mi csináljuk így
egyezményesen. Pl. 5-ször rosszul írt terminológiát egyszer pontozunk. Hiányos szövegre
sem tér ki kellő részletességgel, mondatkihagyásra. Ez a skála gyakorlatilag egy formai leírás.
Ha specifikusan, tárgyterülethez készülne a skála, jobban bele lehetne venni olyan elemeket,
mint a faithfulness, mert az teljesen más dolgot jelent számítástechnikai és humán szövegek
fordításánál. Ez a skála javarészt formai szempontok alapján írja le, hogy mi nagyhiba és
kishiba, nagyon nehéz egy viszonylag elvont fogalmi értékelést végezni vele. Nem veszi
figyelembe a szöveg szak-specifikus tulajdonságait. Nem igazítja rá arra az értékelést.
Lehetne egy ún. alapskála, amit szak-specifikusan kellene súlyozni. A tartalmi szempontok
ebben a skálában nem jelennek meg. Az első három szöveget még nagyon szorosan javítja az
ember, aztán a későbbiekben már szinte tudja izolálni a hibákat, különösen a típushibákat,
amelyek kétféleképpen égnek be a két javító memóriájába, attól kezdve másra koncentrálnak,
ami különbségekhez vezet az értékelésben. Ha én kardinális pontokat azonosítok a szövegben,
amelyekről azt gondolom, hogy meg kell lenniük, lehet, hogy a kolléga nem ugyanazokat
azonosítja, vagy másképp ítéli meg őket. Régebben csináltunk olyat, hogy az elsőt együtt
javítottuk, és azonosítottuk azokat az elemeket, amire mindketten odafigyelünk a
továbbiakban. Jó, ha van skála. Ez a kályha, amitől elindulunk. Szemináriumon visszajelzést
ad arra, hogy a hallgatónak mire kell figyelnie. Ez egészülhetne ki szöveg specifikus
elemekkel. És ez talán végszó is lehetne.
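One convention R2 mentions that the printed scale leaves out is that a repeated error (for instance, the same mistranslated term marked five times) is penalised only once. A minimal sketch of that deduplication step, which could feed the error counts used in the previous sketch, might look like this; the (severity, item) record format is an assumption made up for the illustration.

```python
# Sketch of the "count a repeated error only once" convention R2 mentions.
# Each marked error is recorded as a (severity, item) pair; this record format is
# an assumption made for the illustration, not part of the marking guide.

from collections import Counter


def count_unique_errors(errors: list[tuple[str, str]]) -> Counter:
    """Count errors per severity, counting repeated identical errors only once."""
    unique = set(errors)                  # identical repeats collapse to one entry
    return Counter(severity for severity, _ in unique)


if __name__ == "__main__":
    marked = [
        ("major", "equity"),              # the same mistranslated term, marked five times
        ("major", "equity"),
        ("major", "equity"),
        ("major", "equity"),
        ("major", "equity"),
        ("minor", "missing comma after 'hogy'"),
    ]
    print(count_unique_errors(marked))    # e.g. Counter({'major': 1, 'minor': 1})
```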
I: Azt gondolom, minden lényeges dologról beszéltünk. Elnézést kérek, hogy túl sok idődet
elraboltam. Köszönöm, hogy a rendelkezésemre álltál.
R2: Én köszönöm, hogy megkerestél, és beszélgettünk.
284
Transcription of the interview with Rater 3
24/01/2020
Duration: 45 minutes
285
I: Mire fordítottál még hangsúlyt javítóként?
R3: Fontos volt a precízség, még érzelmiség visszaadása szintjén is. Főleg szakszövegeknél
nagyon fontos a maximális pontosság, még a központozás vonatkozásában is. Egy rosszul
kitett vessző megváltoztathatja a szó jelentését. Ami benne van az eredetiben, az benne kell,
hogy legyen a végső változatban. Ha nincs, akkor hanyag fordításról beszélünk. Nem
megengedhető a könnyebb megoldás. A változatosságot vissza kell adni, a szó hangulatát is.
Ha lehetőség van a tükörfordításra, mindig azt kell alkalmazni. Bűn, ha a fordító a nyelvi
játékokat is így fordítja, mindig meg kell keresni a megfelelőt. Például: pertut inni. Hogy
lehet visszaadni angolul?
I: Tényleg, hogyan?
R3: Ezt nem fogom megmondani. Neked kell kitalálnod.
I: Amennyire tudom, már nincs fordítói záróvizsga, de amíg volt sem vettél részt benne
javítóként, így nem is használtad a javítási útmutatót.
R3: Nem, de azért ismerem.
I: Jó. De meg is tudom mutatni. Elmondanád róla a véleményedet? Például, mi lehet az oka
annak, hogy az azonos útmutató ellenére két javító jelentősen eltérő eredményre jut?
R3: We are talking about specialised translations: literary criticism and economics texts. In which of these is there a greater difference in the assessment? There should be less variation for the economics text, I think. Everyone has their own idea of what is good. There is a tendency, when translating from Hungarian into English, that after a while one feels that a given sentence cannot continue like that in English, and splits it. It still conveys the same message. One rater deducts points because the translator has changed the original structure; the other rater considers the same thing creative. Or the same error occurs several times. In my view, a candidate should not be penalised more than once for the same error within a single text. One rater may penalise it only once, or may classify it as a major error the first time and only as a minor error later on, while the other deducts the same points for it every single time. Among the raters… I am not even sure whether there is anyone among my colleagues who can use punctuation well. This may also lead to differences between the two raters. Or sometimes an explanatory insertion is needed in the target language, because without it the target-language reader would not understand; some raters reward such an insertion, others penalise it.
Major and minor errors have to be watched. The rater sometimes overrules the description in the key, because they learned it differently. In any case, it is definitely good if an exam paper is marked by two raters and on the basis of a scale.
I: Now that you have looked at the marking key thoroughly, what do you think is missing from it? What modifications would you suggest to make it even better and more usable?
R3: In the case of a large discrepancy, the only way to bring the differing scores closer is for the raters to sit down together and, in a thinking-aloud process, discuss why they gave the assessments they did and try to bring their positions closer, possibly involving a third person. And the guide should definitely state how repeated errors are to be counted.
I: Is there anything else you think we have not talked about?
R3: Only that, although I did not use a grid myself and marked in a self-taught way, relying on my experience, it is good if an exam paper is marked by two raters and on the basis of a scale.
I: Thank you for answering my questions.
R3: You are welcome, any time. Even if I did not say much, I hope it will help your work.
I: It certainly will.
Q: How do you assess your students during the term in your translation studies classes? How
often do you assess them?
A: The students have to submit 7-8 translations on Neptun during the course. In each class a student gives a presentation of their translation and discusses the difficulties encountered, and the class collectively comes up with better solutions. The presentation is assessed on a 0-20 scale, based on the quality of the presentation, the translation and the effort. The students' grades are based on two major translations and a revision of a chosen translation. Students work in pairs: each has to revise a peer's translation, and to translate and have their own work revised by a peer.
Q: What is your aim with assessment in translation classes?
A: Primarily to give grades, and to show whether the students have difficulties with certain skills.
Q: What criteria do you follow when you evaluate the written assignments?
A: The criteria are set down in the exam grading sheet, which is available throughout the course.
Q: What do you focus on/consider important in your assessment?
A: Content and language use.
Q: Do you use any evaluation grids? If yes, what is it like?
A: The exam grid is used.
Q: How do you grade your students’ work?
A: See question one. Based on expertise and the sheet.
Q: Although I understand that the final translation exam was cancelled a few semesters ago, I
am interested in your opinion on the assessment scale that was used for exam translations.
A: It is still in use; it is only the revised program that cancelled the sheet. The scale concentrates on mistakes, which is demotivating.
Q: How is this assessment different from the during-the-term evaluations?
A: Entirely the same.
Q: How does it meet your expectations? Is it appropriate in every respect in your opinion for
assessing BA translations?
A: It is good at BA level. Although ideal testing of translations is impossible, the sheet provides a sufficient tool. It is detailed enough to determine whether or not a translation reaches the required level.
Q: What are the difficulties/challenges of using it?
A: It does not stipulate how repeated errors should be treated, that is, whether a mistake should be counted as many times as it occurs. It does not leave room for awarding extra points for creative or better-than-average solutions.
Q: What modifications would you recommend and why to make it more suitable/appropriate
for your purposes?
A: Extra points should be given for outstanding solutions, which could lower the final number of errors. Counting only the errors demotivates the students in the long run.
Appendix E: Rater questionnaire on the two (PIER vs. UP) assessment scales
ER1
1. Which of the two scales do you find better for assessing translations? Please, explain
your choice.
PIER UP Explanation
X PIER is much more reflective and surprisingly easier to evaluate.
Advantages and disadvantages:
PIER: Advantages: fair, easy to use, good options. Disadvantages: difficult to evaluate if the options offered are not in the key.
UP: Advantages: none. Disadvantages: too rigid, does not evaluate the standard of English as well as PIER.
3. Which of the two scales offers a fairer assessment? Please, explain your choice.
PIER UP Explanation
X I regard PIER as a fair and apt method; however, there are options not presented
in the key. Yet the latter offered a wide variety of options ranging from
moderate to advanced.
4. Is there anything you would change in the scale you marked better in the first
question? If yes, specify it, please.
No, I have found it fair, and since I have used this method with PROFEX (PROficiency
EXamination), an English for Legal and Administrative Purposes examination
(http://profex.aok.pte.hu/en), for seven years as an interrogator, we have seen from the
results a steady growth in good solutions.
5. Any other remark: Looking forward to having the results of the empirical research.
ER2
1. Which of the two scales do you find better for assessing translations? Please, explain
your choice.
PIER UP Explanation
x - It is easier to use because of the pre-selected items.
- It aims to look for what the candidate knows.
3. Which of the two scales offers a fairer assessment? Please, explain your choice.
PIER UP Explanation
Both scales can offer fair assessment if they are used in a responsible way;
however, the PIER tool offers a more objective assessment, as it does not involve
the rater's subjective choice in error treatment.
4. Is there anything you would change in the scale you marked better in the first
question? If yes, specify it, please. ---
ER3
1. Which of the two scales do you find better for assessing translations? Please, explain
your choice.
PIER UP Explanation
X PIE does not assess the whole text; parts of it that contain mistakes remain unchecked;
therefore, it involves an element of luck. The old scale assesses the whole text;
therefore, both major and minor problems in the translation are considered in
the process of assessing candidates. All in all, the old scale assesses the
translation performance of examinees better than the PIE scale.
3. Which of the two scales offers a fairer assessment? Please, explain your choice.
PIER UP Explanation
X It reflects real life and takes the whole text into consideration when assessing
students’ performance. It better reflects students’ translation skills regarding the
use of terminology, their understanding of structures, and their overall mastery
of the second language.
4. Is there anything you would change in the scale you marked better in the first
question? If yes, specify it, please.
More detailed instructions for assessors would be appreciated regarding the type of
mistakes (e.g. the same grammar or vocabulary problem occurring in the text multiple
times, such as the use of tenses, articles, or punctuation).
ER4
1. Which of the two scales do you find better for assessing translations? Please, explain
your choice.
PIER UP Explanation
Once the items are selected, it is easy to use.
X Using the selected items for assessment offers a higher degree of objectivity,
and a better agreement between the raters.
3. Which of the two scales offers a fairer assessment? Please, explain your choice.
PIER UP Explanation
X Although the old scale processes the whole text, it is difficult to decide on the
type of the identified errors, so the rater's decision is often subjective. Using the
PIE list, the raters check only the pre-selected elements; however, it is more
objective, as every rater must accept what is offered in the list; there is no
weighting of mistakes, which makes assessment easier.
4. Is there anything you would change in the scale you marked better in the first
question? If yes, specify it, please.
As the PIER scale neglects big chunks of the text, I would definitely include a holistic
part to decide how the translated text reads in the target language, and how true it is to
the source text.