Open Forum

Generative AI tools in reflective essays: Moderating moral injuries and epistemic injustices

Nontsikelelo O. Mapukata
South African Family Practice | Vol 67, No 1, Part 3 | a6123 | DOI: https://doi.org/10.4102/safp.v67i1.6123 | © 2025 Nontsikelelo O. Mapukata | This work is licensed under CC Attribution 4.0
Submitted: 27 January 2025 | Published: 29 August 2025

About the author(s)

Nontsikelelo O. Mapukata, School of Public Health, Faculty of Health Sciences, University of Cape Town, Cape Town, South Africa

Abstract

The emergence of large language models such as ChatGPT is already influencing health care delivery, research and the training of the next cohort of health care professionals. In a consumer-driven market, their capacity to generate new forms of knowing and doing for experts and novices alike presents both promises and threats to the well-being of patients. This article explores the burdens imposed by the use of generative artificial intelligence tools in reflective essays submitted by a fifth of first-year health sciences students. In a curriculum centred on Vision 2030 at a South African university, deviations from the prescribed guidelines in an essay requiring students to demonstrate an understanding of the models of disability are presented as moral injuries and epistemic injustices. Considering our obligations as educators to contribute to a humanising praxis, the author examines the eroded trust between educators and students and offers an interim solution for attaining academic literacy skills in a developing country.
Contribution: This article provides health sciences educators with an opportunity to pause and reflect on how they would like to integrate generative AI tools into their assessments.


Keywords

artificial intelligence; academic literacy skills; epistemic injustice; health sciences students; moral injury; reflective essays

