Background
The IELTS test is one of the most significant single measures of student ESL skills, as it relates to class placement, program acceptance, and more. As such, I feel it is imperative that these tests accurately measure skill and predict performance. Members of the educational community should be concerned with this question because it directly impacts international enrollment, student success, graduation rates, and program acceptance. The findings derived from the study could be used to improve program criteria or testing approach in order to increase the accuracy of placement and improve performance among non-native speakers of English who are applying to English-speaking university programs.
The question has been posited: Is the IELTS written test a valid predictor of student performance, or a meaningful guide for student acceptance and placement, within the university setting? This question will focus specifically on the relationship between "band scores," the score range a student is awarded on the writing portion of the IELTS, and student performance and outcomes, both with regard to university admission and with regard to in-class performance. I predict that the study will uncover significant flaws in the IELTS and will demonstrate a need for more accurate assessment in order to appropriately place students and improve their success at the academic level. This prediction is based both on evidence uncovered in the literature review and on my casual observation of students who have taken the IELTS and been incorrectly placed as a result of their scores.
Literature Review
There has been an active field of research surrounding the accuracy and efficacy of the IELTS writing test. One significant area of research concerns the IELTS writing test's ability to predict university performance. One such study, by Trish Cooper (2014), asks whether there is a correlative relationship between IELTS writing scores and classroom performance. Cooper's study aims to create meaningful data around the way that IELTS scores are used to make high-stakes decisions in the university setting, including being used to determine what a student is actually capable of, and to what degree educators can expect them to engage in academic-level writing. The study found that the IELTS writing task had far more in common with discourse skills, or oral language patterns, than with the language patterns of typical written discourse. These findings indicate that the IELTS Task 2 writing assessment is not a valid predictor of a student's university performance as it relates to first-year writing tasks (Cooper, 2014). Similarly, Vicki Feast (2002) studied the relationship between university performance and IELTS scores, going a step beyond Cooper in order to establish a causative, rather than correlative, relationship. In Feast's (2002) study, a strong positive relationship was established between IELTS test score and university performance as measured by GPA. As such, the study recommended, in direct opposition to Cooper's findings, that an increase in minimum English proficiency as measured by the IELTS could actually improve student performance at the graduate and post-graduate level by correctly predicting student success and readiness. These studies can be more completely understood through the theoretical lens provided by Cyril Weir (2005), who stated that validity should be considered both with regard to context and with regard to known theory.
The test's context must be statistically considered as it relates not only to performance conditions but also to operations (p. 56). Theory, in contrast, looks at construct validity and uses relationships to address the construct's ability to measure those relationships (Weir, 2005).
Bearing these things in mind, and given the difference of opinion based on basic correlation, it is significant to consider the work that has been done to determine the core relationships between IELTS writing tests and student writing performance, or academic ability. One study, by Amanda Muller (2015), meaningfully demonstrates that IELTS score does not always accurately reflect students' academic performance level. It quantitatively considers the gap between what is demonstrated in standardized testing and what students are actually capable of doing within the academic classroom. A study by Moore and Morton (2005), like Muller's, considered a gap, but in this case the gap between IELTS testing tasks and classroom tasks; in other words, it considers the way in which IELTS measures a different set of skills than academic work does. This is very significant to our study: if the tasks are comparable, then we should see a strong correlation in the data that we collect in our research process, but if they are not comparable, then we will see variance, which is indicative of a major flaw in the testing system.
Relating more specifically to both the correlation and the performance findings, it could be said that the testing system is flawed and in need of redesign. Uysal (2010) completed a critical review of the IELTS test, giving special attention to various flaws in the testing procedure that could lead to a lack of reliability, and trying to determine whether they have created unreliability in practice. Uysal also provides suggestions for changes that should be made in order to increase the reliability and usability of the test. This is strong evidence, as our hypothesis suggests, that changes need to be made in order to improve the effectiveness of the test overall.
Proposed Methodology:
My study will be a quantitative study, because I am interested, from a descriptive-statistics standpoint, in the relationship between students' performance in English and major-related coursework and their performance on the IELTS test. Qualitative data might tell me more about how students feel about their performance in each scenario, but would not provide a numeric comparison of measurable performance. I think that quantitative data will show whether or not the IELTS is serving its purpose and correctly predicting student success in two main areas: English-related coursework and field of interest.
Data:
Written IELTS test score; performance in English 101 or an equivalent entry-level English class, based on percentage grade in the course; and performance in the entry-level class in the student's degree area, based on percentage grade in the course.
The IELTS score will be measured by band score. The bands of interest are likely to be between 5.0 and 8.0, with most students falling in the 6.0 to 7.5 range.
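The planned analysis can be sketched as a simple correlation between band scores and course grades. The sketch below uses invented placeholder values (not data from this study) to illustrate how a Pearson correlation coefficient would be computed for each pairing of band score with course grade.

```python
# Sketch of the planned analysis: Pearson correlation between IELTS writing
# band scores and course percentage grades. All values are hypothetical
# placeholders for illustration, not real study results.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical sample: band scores (5.0-8.0 range) and percentage grades
# in the entry-level English course and the entry-level major course.
bands = [5.5, 6.0, 6.0, 6.5, 7.0, 7.0, 7.5, 8.0]
english_grades = [68, 72, 65, 78, 81, 74, 85, 90]
major_grades = [70, 69, 74, 80, 77, 79, 84, 88]

print(f"band vs. English course: r = {pearson(bands, english_grades):.2f}")
print(f"band vs. major course:   r = {pearson(bands, major_grades):.2f}")
```

Under the study's hypothesis, a coefficient near 1 would suggest the writing band predicts course performance, while a coefficient near 0 would indicate the variance that points to a flaw in the testing system.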
Discussion of Possible Outcome:
While it is possible that we will find that the test is accurate and meets the needs of students, universities, and professors, I think it more likely that we will discover critical flaws, with regard to both accuracy and use, that need to be addressed.
In either case, the findings will be significant in determining the future of collegiate programs. For example, the validity of the test is integral in setting admission requirements, establishing the course level into which a student can be admitted, and establishing remediation for certain low-performing students.
If it is discovered that the test is accurate and meets students' needs, then there is no need to revamp current testing and admission strategies; current performance markers will be fully supported and can be considered best practice, with supporting evidence of efficacy.
However, if the findings indicate that the testing program is not appropriately measuring student performance and is contributing to the misplacement of students within the academic setting, then the testing, or the way the tests are used, needs to be improved in such a way that it increases the level of student success overall.
Sample Survey:
Age: _________________________________________________________________________
Country of Citizenship: __________________________________________________________
IELTS Score: _________________________________________________________________
Current GPA: __________________________________________________________________
Average grade in majors area courses: ______________________________________________
Average grade in general requirements and electives: __________________________________
How did you prepare for the IELTS Test: ___________________________________________
How do you feel that the test relates to coursework: ____________________________________
What would you like to see change? _________________________________________________
Do you feel your IELTS score correctly placed you in the program? ______________________
IQ Test Results: ____________________________________________
Other placement tests results:______________________________________________________
______________________________________________________________________________
References:
Cooper, Trish. "Can IELTS Writing Scores Predict University Performance? Comparing the Use of Lexical Bundles in IELTS Writing Tests and First-year Academic Writing." Stellenbosch Papers in Linguistics Plus 42.0 (2014): 63. Web.
Feast, Vicki. "The Impact of IELTS Scores on Performance at University." International Education Journal 3.4 (2002). Print.
Moore, Tim, and Janne Morton. "Dimensions of Difference: A Comparison of University Writing and IELTS Writing." Journal of English for Academic Purposes 4.1 (2005): 43-66. Web.
Müller, Amanda. "The Differences In Error Rate And Type Between IELTS Writing Bands And Their Impact On Academic Workload." Higher Education Research & Development 34.6 (2015): 1207-1219. Academic Search Complete. Web. 9 Mar. 2016.
Uysal, Hacer Hande. "A Critical Review Of The IELTS Writing Test." ELT Journal: English Language Teaching Journal 64.3 (2010): 314-320. Academic Search Complete. Web. 9 Mar. 2016.
Weir, Cyril. Language Testing and Validation. New York: Springer, 2005. Print.