Abstract
Aim
The length and sequence of clinical clerkship rotations are essential factors that influence student learning outcomes and clinical performance. This study aimed to investigate whether clerkship sequences influence the performance of sixth-year medical students in emergency simulations of life-threatening scenarios.
Materials and Methods
This retrospective observational cohort study, conducted at our academic institution during the 2024-2025 academic year, involved 334 sixth-year medical students divided into two groups according to their initial clerkship, general surgery (n=169) or general medicine (n=165); the groups switched at mid-year. Student performance was assessed by emergency medicine faculty across five competency domains on a 0-5 scale during two high-fidelity emergency simulation scenarios: major trauma and acute myocardial infarction (AMI). The Mann-Whitney U test was used to compare performance.
Results
Median scores across the performance domains (clinical assessment, critical thinking, patient care, communication, and professional behavior) were similar between the student groups in both simulations, ranging from 1 to 4. In the AMI scenario, surgery-first students had slightly higher overall median scores [13, interquartile range (IQR): 11-15] than medicine-first students (12, IQR: 10-15). However, the Mann-Whitney U test showed no statistically significant differences for either the AMI (p=0.120-0.862) or the trauma simulations (p=0.542-0.742).
Conclusion
Our study indicates that the order of clinical clerkship rotations did not significantly affect sixth-year medical students’ performance in emergency simulations involving life-threatening cases. These findings inform ongoing discussions on optimizing medical education curricula. We recommend further research across multiple institutions to explore the effects of clerkship sequencing in greater detail.
Introduction
During undergraduate medical education, clinical clerkships play a critical role in transforming medical students into skilled practitioners. These clerkships are complex curricular components shaped by numerous factors beyond the control of clerkship directors and medical education leadership, including patient demographics, healthcare team dynamics, curriculum duration, and evolving healthcare system policies (1). This complexity makes standardizing clerkship experiences and accurately assessing the impact of curriculum changes on student performance challenging (2). Throughout their medical education, from the early undergraduate years through the final clinical years (fifth and sixth years), students rotate through various clerkships, each providing diverse clinical exposure and skills. The sequence and duration of clerkships may enhance students’ preparedness for clinical practice by influencing their clinical reasoning, procedural skills, and interdisciplinary collaborations. Additionally, incorporating simulation-based education during rotations can be an effective teaching and assessment method to enhance students’ competencies, knowledge, and clinical abilities (3-5). From an educational perspective, clerkship order may shape clinical competencies relevant to emergency care. Early exposure to general surgery can enhance procedural skills, rapid decision-making, and confidence in managing unstable or trauma patients (4). Conversely, early general medical training may strengthen diagnostic reasoning, clinical assessment, and systematic management of acute medical conditions. Differences in team structure, leadership roles, and levels of responsibility between the surgical and medical services may also influence students’ communication and teamwork skills, which are key competencies assessed during emergency simulations.
Both clerkship length and sequence are considered critical factors influencing learning outcomes. The literature identifies various elements associated with student performance across clinical clerkships from the third year to the sixth year, including rotation order, student-reported stress, academic performance, and clinical evaluations (6, 7). Indeed, the order of clerkship rotations has been shown to affect students’ performance on standardized examinations, including those administered by the National Board of Medical Examiners (NBME) and the Clinical Subject Examinations of the United States Medical Licensing Examination (USMLE) (8). Despite this, the literature presents conflicting findings regarding the impact of clerkship length on examination performance. Several studies examining NBME scores across shortened clinical rotations in various specialties have reported either no effect or, in some cases, positive outcomes (9). One study identified the USMLE Step 1 scores as predictive of which students were at risk of poor performance on NBME subject exams (6). Strowd et al. (2) found no significant differences in student performance or satisfaction following curriculum shortening, suggesting that clerkship length may not be as critical as once assumed.
Although previous studies have addressed the importance of clerkship length, the influence of clerkship sequence on simulation practice performance, particularly during clinical years, remains underexplored. Research indicates that the timing and order of specific clerkships can influence academic outcomes, highlighting the importance of clerkship sequencing (10). Similarly, Hampton et al. (11) reported that rotation sequence could influence educational outcomes, supporting the idea that clerkship order may contribute to the development of clinical skills. Nevertheless, limited research has examined how the sequence of clinical rotations affects clinical performance during emergency simulations, a critical area of assessment for preparing future physicians.
This study investigated whether the order of clerkships for sixth-year medical students during their academic year affects their simulation performance, particularly in managing life-threatening scenarios. The findings may inform strategies for improving clinical competencies, medical education, and emergency readiness.
Materials and Methods
This retrospective observational study was conducted among sixth-year medical students at our academic institution during their emergency medicine (EM) rotation. The EM rotation, integrated into students’ general surgery clerkship, was scheduled throughout the academic year. The study was approved by the Biomedical Ethics Committee at King Abdulaziz University, Jeddah, Saudi Arabia (reference no: 395-24, date: 05.12.2024). A total of 334 students were divided into two groups: 169 who began with the general surgery rotation and 165 who began with the general medicine rotation. The groups switched midway through the academic year.
During the EM block, all students underwent routine assessments, including two high-fidelity simulation scenarios: a major trauma case and an acute myocardial infarction (AMI). These simulations were conducted at the University’s Clinical Skills and Simulation Center using high-fidelity manikins. The standardized simulation process involved prebriefing, hands-on simulation, and debriefing sessions. Student performance was evaluated across five domains: clinical assessment and data gathering; critical thinking and decision-making; patient care; communication and teamwork; and professional behavior. Each domain was scored from 0 to 5, with higher scores indicating better performance (Table 1).
Assessments were conducted by EM faculty members who were specifically trained in simulation assessments. They participated in standardization sessions to ensure consistent scoring. Data from students who completed both simulation scenarios with complete performance records were anonymized for analysis. This study aimed to compare students’ performance based on their rotation preceding the EM block.
Statistical Analysis
Data analysis was performed using the Statistical Package for the Social Sciences, version 28, employing non-parametric descriptive and analytical methods. Medians and interquartile ranges (IQRs) were calculated as descriptive statistics, and the Mann-Whitney U test was used for between-group comparisons. Statistical significance was defined as a p-value of less than 0.05.
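For readers unfamiliar with the rank-based comparison used here, the Mann-Whitney U statistic and the mean ranks reported in the Results can be sketched in plain Python. This is a minimal illustration with invented scores; the helper function below is hypothetical and is not the software used in the study (analyses were performed in SPSS).

```python
# Hypothetical sketch of the Mann-Whitney U computation: pool the two
# samples, rank them (averaging ranks for ties), then derive U and the
# mean rank of each group. Scores here are invented, not study data.

def mann_whitney_u(group_a, group_b):
    """Return (U_a, U_b, mean_rank_a, mean_rank_b) for two samples."""
    # Tag each score with its group (0 or 1) and sort the pooled values.
    pooled = sorted((x, g) for g, grp in enumerate((group_a, group_b)) for x in grp)
    values = [v for v, _ in pooled]
    n = len(values)
    rank_sums = [0.0, 0.0]
    i = 0
    while i < n:
        # Find the run of tied values starting at position i.
        j = i
        while j < n and values[j] == values[i]:
            j += 1
        # Tied observations all receive the average of ranks i+1 .. j.
        avg_rank = (i + 1 + j) / 2
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    n_a, n_b = len(group_a), len(group_b)
    # U = rank sum minus the minimum possible rank sum for that group.
    u_a = rank_sums[0] - n_a * (n_a + 1) / 2
    u_b = rank_sums[1] - n_b * (n_b + 1) / 2
    return u_a, u_b, rank_sums[0] / n_a, rank_sums[1] / n_b


# Example with invented total simulation scores for two small groups:
surgery_first = [13, 11, 15, 12, 14]
medicine_first = [12, 10, 15, 11, 13]
u_a, u_b, mean_rank_a, mean_rank_b = mann_whitney_u(surgery_first, medicine_first)
```

A useful sanity check on any such implementation is that U_a + U_b always equals n_a * n_b; significance is then assessed by comparing the smaller U against its null distribution (or a normal approximation for large samples), which statistical packages handle internally.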
Results
Our study compared the clinical performance of two groups of sixth-year medical students based on their rotation sequences during two high-fidelity simulation cases: AMI and major trauma. One group started with a general surgery clerkship (n=169), whereas the other began with a general medicine clerkship (n=165). Median scores across the performance domains (clinical assessment, critical thinking, patient care, communication, and professional behavior) were generally similar between groups in both simulation cases, ranging from 1 to 4 (Tables 1, 2).
For the AMI simulation, the group that started with surgery demonstrated slightly higher overall performance, with a median total score of 13 (IQR: 11-15) versus 12 (IQR: 10-15) for the general medicine group; mean ranks were 175.59 and 159.22, respectively (Table 1). Despite these differences, the Mann-Whitney U test showed no statistically significant difference between the two groups across all domains (p=0.120-0.862). Similarly, in the major trauma simulation, performance scores were closely matched between groups, with a median score of 3 (IQR: 1-4) across most domains. The overall median performance scores were 13 (IQR: 10-15) for both groups, and the mean ranks were nearly equivalent (170.67 vs. 164.25) (Table 2). The statistical analysis revealed no significant differences between the groups: all p-values exceeded 0.05 (range: 0.542-0.742).
These findings indicate that the sequence of initial clinical clerkship rotations did not significantly influence students’ clinical performance during simulated emergency cases, suggesting that both groups demonstrated comparable competence regardless of their clerkship sequence.
Discussion
Evaluating the impact of clerkship rotations on medical students’ learning and performance requires consistent assessment tools and procedures applied before and after the clerkship or through repeated measures to track progress over time (12). However, medical schools often vary in their structure, the sequencing of disciplinary clerkship rotations, and methods of student evaluation during clinically integrated years, making it difficult to standardize assessments (13). Clinical knowledge is inherently complex and multidimensional, presenting challenges in accurately measuring the learning growth derived from clerkship experiences. Additionally, it is logistically difficult to schedule all clerkship rotations simultaneously, complicating efforts to effectively assess the impact of clerkship programs on students’ clinical knowledge development.
In this study, we compared medical students who started their year with a general surgery clerkship to those who began with general medicine and who switched midway through the year. No significant differences were observed between these groups in performance on the high-fidelity simulation cases involving major trauma and AMI (Tables 1, 2). The literature review did not reveal any directly comparable studies that corroborated our findings. However, Phares et al. (10) examined the effect of the timing of surgery and internal medicine clerkships on surgery shelf examination scores in the United States. They concluded that students who completed an internal medicine clerkship before a surgery clerkship achieved higher scores on the NBME Surgery Shelf examination. Additionally, their findings suggest that controlling for the timing of rotations yielded no significant differences between the groups compared. Similar conclusions have been drawn by other researchers, including Adelsheimer et al. (14) and Dong et al. (15), who reported no clear advantage of one clerkship sequence over another. Dong et al. (15) further investigated the impact of clerkship sequence on performance and found that completing internal medicine before surgery led to higher surgical subject examination scores, which, in turn, predicted subsequent USMLE Step scores and explained 28% of Step 2 CK variance. However, these findings are not directly comparable to those of our study, as we focused specifically on clinical performance during emergency simulation cases, rather than on standardized exams.
In our study, a comparison of the clinical performance between the two groups revealed slightly higher mean rank scores for students who started their year with a general surgery clerkship. However, the differences were not statistically significant, suggesting that clerkship sequence may not substantially influence clinical performance during high-fidelity simulation assessments. Although the overall performance was summarized using total simulation scores, domain-specific analyses were performed across the five competency areas. Trauma and AMI simulations differ in their cognitive and procedural demands, and the clerkship sequence could plausibly influence specific competencies, such as clinical assessment or communication. However, no statistically significant differences were observed across individual domains in either simulation scenario, suggesting broadly comparable competency development across clerkship sequences within the limits of the assessment tools. Prospective studies incorporating baseline academic metrics, prior simulation exposure, and multivariate modeling are needed to evaluate the independent effects of clerkship sequencing with greater precision. Furthermore, Gao et al. (8) conducted a study including medical students from three medical schools and found that starting the clerkship year in family medicine, internal medicine, pediatrics, or surgery did not significantly affect subsequent performance on shelf examinations or on the USMLE Step 2 Clinical Knowledge. This suggests that the sequence of clerkship rotations is not associated with improved clinical performance, higher NBME shelf scores, or higher Step 2 Clinical Knowledge examination scores.
In the broader context of evaluating clinical knowledge development, the Comprehensive Clinical Science Examination (CCSE) is a valuable formative assessment tool designed to provide students with feedback on their progress across various clinical rotations, such as internal medicine, surgery, and pediatrics (16). Administered by the NBME, the CCSE serves as a reliable and standardized tool for assessing competency-based medical education progress (17). By analyzing CCSE scores at different stages (before, during, and after clinical clerkships), researchers can gain valuable insights into how various rotations contribute to the development of students’ disciplinary knowledge and clinical skills. Experiential learning theory, which emphasizes learning through experience and reflection, has been widely applied in clinical rotations and clerkships to enhance students’ practical skills and knowledge acquisition (18). Chang et al. (16) used the CCSE and segmented regression analysis to assess the impact of clinical clerkships on medical students’ disciplinary knowledge, revealing significant improvements, particularly in obstetrics and gynecology and in psychiatry, with smaller gains in surgery. Their findings provided meaningful insights into curriculum design and assessment.
Study Limitations
This study had several limitations. First, the findings may not be generalizable because the study was conducted at a single college of medicine with a specific curriculum structure. Variations in clerkship design across institutions may influence performance. Second, the analyses were limited to univariate comparisons without adjusting for potential confounders. Therefore, unmeasured differences between the groups may account for the lack of statistically significant effects. Third, although a standardized 0-5 scale was used across the performance domains, the simulation tool was designed to assess competency attainment rather than detect subtle group differences. Despite a full score range and the absence of floor or ceiling effects, subtle educational influences of the clerkship sequence may not have been detected. Furthermore, our study lacked an equivalence framework and an a priori power calculation; therefore, non-significant findings reflect limited statistical power to detect differences and provide no evidence of equivalence between clerkship sequences.
Conclusion
Our study found no significant differences in emergency simulation performance between groups based on the clerkship rotation sequence. While existing literature suggests that clerkship timing can influence performance on standardized examinations, our findings do not support the notion that completing internal medicine before surgery enhances clinical performance in emergency cases. These results contribute to a broader discussion of optimizing medical education and may inform future research and curriculum development aimed at improving clinical training and assessments. Further studies at other universities are recommended to validate our findings and to provide deeper insights into the impact of clerkship rotation sequences.