Manara - Qatar Research Repository

ChatGPT—A double‐edged sword for healthcare education? Implications for assessments of dental students

Journal contribution, submitted on 2024-02-20, 10:55 and posted on 2024-02-20, 10:56. Authored by Kamran Ali, Noha Barhom, Faleh Tamimi, Monty Duggal.

Introduction

Open‐source generative artificial intelligence (AI) applications are rapidly transforming access to information. They allow students to prepare assignments and can provide fairly accurate responses to a wide range of examination questions routinely used in student assessments across the board, including those of undergraduate dental students. This study aims to evaluate the performance of Chat Generative Pre‐trained Transformer (ChatGPT), a generative AI‐based application, on a wide range of assessments used in contemporary healthcare education, and discusses the implications for undergraduate dental education.

Materials and Methods

This was an exploratory study investigating the accuracy of ChatGPT in attempting a range of recognised assessments used in healthcare education curricula. A total of 50 independent items encompassing 50 different learning outcomes (n = 10 per question format) were developed by the research team. These comprised 10 items in each of five commonly used question formats: multiple‐choice questions (MCQs); short‐answer questions (SAQs); short essay questions (SEQs); single true/false questions; and fill‐in‐the‐blanks items. ChatGPT was used to attempt each of these 50 questions. In addition, ChatGPT was used to generate a reflective report based on multisource feedback, a research methodology report, and a critical appraisal of the literature.

Results

The ChatGPT application provided accurate responses to the majority of knowledge‐based assessments based on MCQs, SAQs, SEQs, true/false and fill‐in‐the‐blanks items. However, it could only answer text‐based questions and could not process questions based on images. Responses generated for written assignments were also satisfactory, apart from those for critical appraisal of the literature. Word count was the key limitation observed in outputs generated by the free version of ChatGPT.

Conclusion

Notwithstanding their current limitations, generative AI‐based applications have the potential to revolutionise virtual learning. Rather than treating these tools as a threat, healthcare educators need to adapt teaching and assessment in medical and dental education to benefit learners while mitigating dishonest use of AI‐based technology.

Other Information

Published in: European Journal of Dental Education
License: http://creativecommons.org/licenses/by/4.0/
See article on publisher's website: https://dx.doi.org/10.1111/eje.12937

Funding

Open Access funding provided by the Qatar National Library.

History

Language

  • English

Publisher

Wiley

Publication Year

  • 2023

License statement

This Item is licensed under the Creative Commons Attribution 4.0 International License.

Institution affiliated with

  • Qatar University
  • Qatar University Health - QU
  • College of Dental Medicine - QU HEALTH

