Developing culture-appropriate student assessments
The Objective Structured Clinical Examination (OSCE) was developed in the UK in the mid-1970s as an evaluation tool for medical students and was subsequently adopted in the US. Over a set period, usually around two hours, students rotate through one to two dozen stations where they evaluate and diagnose ‘mock’ patients.
This examination method was devised mainly to assess how well students apply their knowledge rather than how much they can recall. Traditional evaluation methods, such as multiple-choice and essay questions, may not adequately evaluate clinical competence, professional skills and cognitive learning abilities.
Faculty at Qatar University’s College of Pharmacy (QU CPH) have developed a modified version of the OSCE, called the Structured Multi-Skill Assessment (SMSA), as an evaluation and, perhaps more importantly, learning tool for their students.
“The OSCE that was originally adapted from the medical profession [for use with pharmacy students] focuses on assessing clinical skills, but there is limited literature providing guidance for contextual adaptation in diverse cultural settings,” write researchers from QU CPH in a paper published in the journal Avicenna.
The OSCE assesses, among other things, students’ communication skills. But interpretations of effective communication can vary widely from one culture to another, the team explains.
The SMSA, first implemented at QU CPH in 2008, “takes into account the cultural context where it is applied, utilization of available resources, and the need for a continuum of patient care activities as they occur in real life,” says associate professor of pharmacy practice Nadir Kheir, who developed the tool.
The SMSA differs from the OSCE in several ways. It is used to evaluate undergraduate students rather than for professional licensing. It comprises a smaller number of stations (three to four), and its scores count as only one component of a total course score. Also, “while the OSCE gives the grading staff only two options, activity done or activity not done, the SMSA allows the grading staff to award full or partial points depending on how the specific action was done,” explains Kheir. “This is so important when it comes to communication skills, for example.”
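To make the difference between the two scoring schemes concrete, here is a minimal sketch in Python. The checklist items, weights and scores are purely illustrative assumptions, not taken from the QU CPH rubric described in the article:

```python
# Hypothetical grading sketch: all checklist items and point values are
# illustrative assumptions, not the actual QU CPH rubric.

def osce_score(checklist):
    """OSCE-style grading: each activity is either done (True) or not done (False)."""
    return sum(1 for done in checklist.values() if done)

def smsa_score(checklist):
    """SMSA-style grading: each activity earns full or partial points (0.0 to 1.0)
    depending on how well it was performed."""
    return sum(checklist.values())

# Example station: the student explains the dosage adequately but does not
# fully check the patient's understanding.
osce = {"greets_patient": True, "explains_dosage": True, "checks_understanding": False}
smsa = {"greets_patient": 1.0, "explains_dosage": 0.75, "checks_understanding": 0.5}

print(osce_score(osce))  # 2 of 3: the binary scheme gives no credit for partial work
print(smsa_score(smsa))  # 2.25 of 3: partial credit rewards how the action was done
```

The point of the sketch is the grading granularity: a binary checklist discards information about performance quality, which is exactly what the SMSA’s partial-credit scheme is intended to capture for skills such as communication.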
Another important adaptation of the tool is that mock patients provide feedback to students on communication effectiveness in addition to the feedback provided by supervising faculty members. The patients are also chosen to reflect the diverse resident population of Qatar.
“While no formal evaluation of the SMSA has been conducted so far, the general feeling of students and faculty is that it provides a platform for hands-on application and evaluation of skills,” says Kheir. The assessment evaluates student communication, consultation, interviewing, problem-solving, analytical, drug information provision and care planning skills.
“The routine feedback we provide to students during mock and midterm SMSAs serves as learning opportunities and teachable moments for reflection on learning and for performance improvement,” adds Kheir.
Student perceptions and attitudes toward the SMSA still need to be qualitatively evaluated, the researchers say. Mock patients also need to be trained to ensure consistency in their feedback. Finally, the effectiveness of patient and faculty feedback in improving student performance in subsequent SMSA activities should be assessed, they add.
Other Information
Published in: QScience.com Highlights, published by Nature Research for Hamad Bin Khalifa University Press (HBKU Press)
Publication year: 2016
Language: English
Publisher: Nature Research
License: Creative Commons Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0)
Affiliated institution: Hamad Bin Khalifa University