Clinical practice evaluation
Despite a plethora of literature on core nursing competencies, there is no consensus on the definition of competence or on the indicators that should be used to assess it. A number of methods and tools are used to assess competence even though the evidence base supporting their efficacy is weak. In this paper, three assessment methods, portfolios, direct observation, and objective structured clinical examinations (OSCEs), will be explored and recommendations regarding their use made.
Portfolios
Portfolios are a constellation of materials collated by individuals as evidence of knowledge, skills, attitudes, and achievements. They reflect an individual’s current development and activity. Although portfolios are used as assessment tools for nurses in clinical settings, there is limited evidence to suggest that they are effective tools for measuring nursing competencies. They are based on the premise that they provide an opportunity for students to document evidence of personal and professional development, learning outcomes and processes, and areas that require further development. Findings from studies suggest that portfolios have the following benefits: they provide information on the number of interventions accomplished by a student and insight into a student’s level of competence and areas that need further development. They also demonstrate learning over time and form a focus for discussion between assessors and students. They additionally prompt students to take more responsibility for their own learning, reflect on their practice, and think critically (Evans, 2008; Buckley et al., 2009).
The demerits of using portfolios as assessment tools include the following: they are time consuming, students may tailor their contents to meet the assessor’s expectations rather than reporting their true experiences, they are subject to poor inter-rater reliability, and they may be confusing for both students and assessors, particularly if expectations and outcomes are not clearly stated (Buckley et al., 2009). Studies that have explored their use as tools for measuring competence recommend that expectations for both students and assessors be clearly stated and communicated; that a template and prompts be used to support documentation; that both summative and formative assessment approaches be employed; that assessors be trained and certified to ensure consistency; that assessors network with one another to reduce inter-rater subjectivity; and that the structure and style of documented outcomes and evidence be commensurate with the student’s level and professional qualifications (Evans, 2008).
Objective structured clinical examinations
An objective structured clinical examination (OSCE) consists of a series of stations and/or exercises through which individual students rotate to demonstrate a range of skills and knowledge (Baid, 2011). Each station tests one or more practical or theoretical skill areas, and the assessor signals students when to move to the next station. The evidence base on whether OSCEs are valid and reliable measures of competence is thin. They do, however, provide a safe and controlled environment for assessing student skills. Other benefits of OSCEs as assessment tools include that they provide a satisfactory way of evaluating the clinical skills, knowledge, communication skills, and intent of students. In particular, they enable assessors to assess the competence of individual students in specific situations (Evans, 2008; Baid, 2011).
The demerits of OSCEs include that they can be stressful and intimidating for students, they are costly and time consuming, and students may be unable to reflect because of time constraints at each station and noise from other stations. Other drawbacks include a lack of realism in scenarios and equipment, and long waiting times when student numbers are large or when equipment must be set up. Various suggestions have been advanced to address these issues and, in effect, improve the effectiveness of OSCEs. They include adequate preparation of assessors and actors to ensure inter-rater reliability and a consistent approach, as well as adequate preparation of students by providing them with written information early and advising them if they will be tested on scenarios with which they are not well versed. Feedback should always be provided and students given time for reflection. To enhance the realism and relevance of the scenarios, relevant clinical staff and equipment should be used. To optimize outcomes, the number of stations should be limited. An independent assessor can be used to reduce assessor bias (Evans, 2008; Baid, 2011).
Direct observation
Direct observation is another form of clinical assessment that entails observing students as they perform nursing activities in a clinical setting. It is also a tool for assessing prior learning. Students are rated against a scale or checklist for consistency. Notably, even though a myriad of observational checklists are available, the validity of most of them is yet to be determined (Kogan, Holmboe, & Hauer, 2009). Observational assessments may be biased by student nervousness, prejudice on the assessor’s part, the socialization process, familiarity, workplace assessors’ lack of knowledge about what constitutes acceptable competence, resource limitations unknown to the assessor, and educators’ lack of practical knowledge. Given the potential impact of these factors on the outcome of direct observation, it is suggested that assessors be aware of the potential influence of personality characteristics, as well as the other variables, on practice in clinical settings (Evans, 2008; Fromme, Karani, & Downing, 2009).
Recommendations
Based on the suggestions made in the research articles, I recommend using a combination of methods to assess students in clinical settings, because each method has its strengths and flaws and because the evidence base supporting the validity and reliability of each of the three methods discussed is inadequate. I further recommend clear communication between students and assessors during assessments, and that assessors be trained in the use of these methods and tools to reduce inter-rater differences. The outcomes to be assessed should also be explicitly stated and, where possible, delineated in a skills list to help reduce the subjectivity of clinical assessments. Lastly, assessments should utilize both summative and formative approaches (Evans, 2008).
Assessors in clinical settings
With regard to who should assess students in clinical settings, a number of staff can comfortably undertake the task, including clinical staff, educators, mentors, and independent assessors. Clinical staff can conduct direct observations of students because they spend a lot of time with them. Independent assessors are recommended for the purpose of eliminating the influence of bias on assessments. Educators and mentors are well suited to evaluating portfolios because they are in a position to follow students’ progress (Evans, 2008).
Role of students during assessments
The roles of students during assessments include ensuring that they are adequately prepared by reviewing the necessary materials, carrying out what is expected of them in terms of performing allocated procedures and preparing written submissions, reflecting on their own practice, participating in their evaluation by identifying their areas of strength and weakness, and demonstrating their level of knowledge and skills through appropriate activities (Evans, 2008).
In conclusion, portfolios, OSCEs, and direct observation are three methods that can be used to assess the competence of nursing students. There is limited evidence to support their use as measures of nursing competence, and each method has its strengths and drawbacks. To capitalize on their respective strengths, these methods should be used in combination. A number of staff, including clinical staff, educators, mentors, and independent assessors, can assess students in clinical settings. The major role of students during clinical assessments is to demonstrate their level of skills and knowledge through appropriate activities.
References
Baid, H. (2011). The objective structured clinical examination within intensive care nursing
education. Nursing in Critical Care, 16(2), 99-105.
Buckley, S., Coleman, J., Davison, I., Khan, K.S., Zamora, J., Malick, S., Morley, D., Pollard,
D., Ashcroft, T., Popovic, C., & Sayers, J. (2009). The educational benefits of portfolios
on undergraduate student learning: A best evidence medical education (BEME)
systematic review. BEME guide no. 11. Medical Teacher, 31(4), 282-292.
Evans, A. (2008). Competency assessment in nursing: A summary of literature published since
2000. Retrieved from http://www.edcan.org/pdf/EdCancompetenciesliteraturereviewFINAL.pdf
Fromme, H.B., Karani, R., & Downing, S. M. (2009). Direct observation in medical education:
A review of the literature and evidence for validity. Mt Sinai Journal of Medicine, 76(4),
365-371.
Kogan, J. R., Holmboe, E. S., & Hauer, K. E. (2009). Tools for direct observation and
assessment of clinical skills of medical trainees: A systematic review. The Journal of the
American Medical Association, 302(12), 1316-1326.