When international students seek to study at English-speaking universities, they are often required to take a test that measures their mastery of English. Among the most common tests used for this purpose is the IELTS. However, there is some doubt regarding the effectiveness of these tests. As such, the primary research question of interest will be: In what ways are the IELTS tests inaccurate in assessing the verbal and written performance levels of ESL learners? Supporting sub-questions include:
How does the IELTS test learners?
Why is IELTS used?
In what ways is it inaccurate?
Can its accuracy or usefulness be improved?
This topic matters for a number of reasons, all directly related to the educational opportunities and outcomes of international students. Student placement in coursework, and entrance into university programs, is in many cases determined solely by the outcome of the IELTS exam; it is therefore important that the exam accurately predict academic performance.
The research will take a primarily exploratory approach, working to define qualitatively the effectiveness of the IELTS test, using self-reported data from students and considering previous research. It will also include a case study of student outcomes within a single university. This case study will allow us to break down the specific areas in which the IELTS is failing students, and to consider the impacts and implications of those failings. This will in turn provide more specific research questions and problems to be pursued in future research from a more quantitative or descriptive approach.
Hypothesis: There are flaws in the IELTS testing system which not only disadvantage ESL students, but which also inadequately measure their speaking, writing, and listening ability, because the test, as a result of poor design, measures skills which are not inherent to the objective of each specific test section.
First, it is important to define what the IELTS is and how it is currently being used. IELTS stands for International English Language Testing System. It is not truly a single test, but rather a series of testing events that measure reading, writing, listening, and speaking skills (IELTS, 2016). More than 2 million IELTS tests are administered each year to those seeking international employment, visa and immigration status, entrance into international universities, and more (IELTS, 2016). The test results are thus extremely high-stakes for those undergoing the testing, and must be accurate in order to be useful. The testing company asserts that it has gone to great lengths to ensure that the current test is as accurate as possible, by making testing activities resemble genuine language exchanges, or communication events (IELTS, 2016). However, the nature of such testing, specifically testing that is given or scored by standardized means, is that it is necessarily artificial, and there is currently no meaningful way to avoid a certain degree of bias or inaccuracy inherent in the testing process.
There is a significant body of literature supporting the claim that the IELTS is flawed. From a general perspective, this can be considered with regard to the ability of standardized tests, as a whole, to measure language mastery. For any standardized test to accurately measure a non-native speaker's level of mastery, the test itself must be culturally relevant: it must measure the technical elements that denote mastery using subject matter that is directly related to the test-taker's personal experiences (Samuda, 1998). This would require the IELTS to have many testing versions, so that an appropriate version could be given to each test-taker. The IELTS is not varied in this way. Rather, the IELTS, like most other standardized tests, assumes that the test-taker is a member of traditional Western culture and can respond to questions that relate to Western life experiences. As a result, the test inadvertently measures the test-taker's knowledge of Western culture, rather than mastery of the English language (Samuda, 1998).
Further, the tests are graded by individual examiners, and as such personal preference and bias are integrated into the score. While IELTS maintains that its training process removes human error and opinion from the grading process, there is evidence that scoring between examiners often varies by 10 percent or more (Shaw, 2004; Blackhurst, 2004). While such variance may be acceptable for a low-stakes test handled by a variety of scorers, it represents a significant inaccuracy when the outcomes are used in a high-stakes way.
Generally, this question will be studied qualitatively, by examining existing literature and by performing a case study of students in a single university, in order to determine how their test scores compare with their university performance, and how the self-reported accuracy of class placement and program acceptance relates to IELTS scores and ability level. This is the most suitable method for pursuing the current question because the topic is relatively under-researched and exploratory in nature. It is therefore key that the case study help to clarify the issue and to develop specific points for future research, by creating a more solid understanding of the shortcomings of the IELTS test as they relate to student performance, and of how the test can be improved in order to provide students with more accurate results for program acceptance and course placement.
The greater relevance, or application, of this research is to identify a set of more significant problems that need descriptive or quantitative research within the topic of IELTS efficacy and applicability. I also believe that it can inspire change in the current testing environment, improving the testing models and procedures so that scores more accurately reflect student fluency and language use. This will in turn improve the educational experience of the affected international students.
Annotated Bibliography:
Cooper, Trish. "Can IELTS Writing Scores Predict University Performance? Comparing the Use of Lexical Bundles in IELTS Writing Tests and First-year Academic Writing." Stellenbosch Papers in Linguistics Plus 42.0 (2014): 63. Web.
This study asks whether there is a strong connection between IELTS writing scores and classroom performance. It aims at creating meaningful data about how IELTS scores can be used in the university setting to determine what a student is actually capable of, and to what degree educators can expect them to engage in academic-level writing.
Feast, Vicki. "The Impact of IELTS Scores on Performance at University." International Education Journal 3.4 (2002). Educational Research Conference 2002 Special Issue. Print.
Rather than simply establishing correlation, this study works to establish a cause-and-effect relationship between IELTS testing and university performance. This means that mis-scoring could actually lead to student failure.
Heitner, Reese, Barbara J. Hoekje, and Patrick Braciszewski. "Tracking English Language Proficiency and IELTS Test Scores in an International Undergraduate Conditional Admission Program in the United States." Measured Language: Quantitative Studies of Acquisition, Assessment, and Variation. Ed. Jeffrey Connor-Linton and Luke Wander Amoroso. Washington, DC: Georgetown UP, 2014. 183-198. Print.
This is useful in demonstrating why placement accuracy, as it relates to IELTS scores, is significant. If the test does not score students' fluency and mastery correctly, they may not be admitted into programs for which they are prepared, or may be placed in an inappropriate level of courses. As such, the integrity of university-level education depends on the test being accurate.
Moore, Tim, and Janne Morton. "Dimensions of Difference: A Comparison of University Writing and IELTS Writing." Journal of English for Academic Purposes 4.1 (2005): 43-66. Web.
This study is in some ways similar to Müller's study of the gap between IELTS scores and performance, but it considers instead the gap between IELTS testing strategy and classroom tasks. In other words, it considers the way in which the IELTS measures a different set of skills than academic work requires. This is very significant to our study.
Müller, Amanda. "The Differences In Error Rate And Type Between IELTS Writing Bands And Their Impact On Academic Workload." Higher Education Research & Development 34.6 (2015): 1207-1219. Academic Search Complete. Web. 9 Mar. 2016.
This study meaningfully demonstrates that IELTS scores do not always accurately reflect students' academic performance levels. It quantitatively considers the gap between what is demonstrated in standardized testing and what students are actually capable of doing in the academic classroom.
Uysal, Hacer Hande. "A Critical Review Of The IELTS Writing Test." ELT Journal: English Language Teaching Journal 64.3 (2010): 314-320. Academic Search Complete. Web. 9 Mar. 2016.
Uysal critically reviews the reliability of the IELTS test, giving special attention to various flaws in the testing procedure that could lead to a lack of reliability, and trying to determine whether they have created unreliability in practice. Uysal also provides suggestions for changes that should be made in order to increase the reliability and usability of the test. This is similar to our own purpose, but from a slightly different perspective, and so it is useful for the literature review and for homing in on exactly what we hope to determine in the course of the study.
References:
Blackhurst, A. "IELTS Test Performance Data 2003." Research Notes 18.1 (2004): 18-20. Print.
Ferlazzo, L., and K. H. Sypnieski. The ESL/ELL Teacher's Survival Guide. Hoboken, NJ: John Wiley and Sons, 2012. Print.
IELTS. "What Is IELTS?" 2016. Web.
Samuda, R. Advances in Cross-Cultural Assessment. New York: SAGE, 1998. Print.
Shaw, S. "IELTS Writing: Revising Assessment Criteria and Scales (Phase 3)." Research Notes 16 (2004): 3-7. Print.