Volume 39 Issue 3, Fall 2012, pp. 276-282

ABSTRACT

With medical education transitioning from knowledge-based curricula to competency-based curricula, critical thinking skills have emerged as a major competency. While there are validated external instruments for assessing critical thinking, many educators have created their own custom assessments of critical thinking; however, the face validity of these assessments has not been challenged. The purpose of this study was to compare results from a custom assessment of critical thinking with the results from a validated external instrument of critical thinking. Students from the College of Veterinary Medicine at Western University of Health Sciences were administered a custom assessment of critical thinking (ACT) examination and an externally validated instrument, the California Critical Thinking Skills Test (CCTST), in the spring of 2011. Total scores and sub-scores from each exam were analyzed for significant correlations using Pearson correlation coefficients. Significant correlations were demonstrated between the ACT Bloom's level 2 sub-score and deductive reasoning and between the total ACT score and deductive reasoning, with correlation coefficients of 0.24 and 0.22, respectively. No other statistically significant correlations were found. The lack of significant correlation between the two examinations illustrates the need in medical education to externally validate internal custom assessments. Ultimately, the development and validation of custom assessments of non-knowledge-based competencies will produce higher-quality medical professionals.

Critical thinking in the health professions has been identified as an essential non-technical competency.1–3 According to the National Council for Excellence in Critical Thinking Instruction, “critical thinking is the intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action.”4

Currently, there are only a few fully validated external instruments for assessing critical thinking, including the California Critical Thinking Skills Test (CCTST) and the Watson-Glaser Critical Thinking Appraisal.5,6 These validated external instruments have been used to track the development of critical thinking over time in many health professions. Because critical thinking is considered a transferable skill, medical professional programs have developed a variety of custom assessments of critical thinking, including written assignments, extended-match multiple choice questions, case vignettes, portfolios, and oral examinations.2,7,8 However, the face validity of these custom assessments has not been challenged: to our knowledge, no study comparing a custom assessment of critical thinking against a validated external instrument has been published to date.

Another way of evaluating critical thinking has been to map outcomes onto Bloom's taxonomy of learning. The taxonomy describes a hierarchy in which learners build on foundations to reach higher levels of thinking, starting with knowledge and proceeding through comprehension, application, analysis, and synthesis before ending in evaluation. A complete review of Bloom's taxonomy is beyond the scope of this manuscript; interested readers are referred to the literature.9,10

The purpose of this study was to compare student performance on a custom assessment (assessment of critical thinking exam) with a validated external instrument (CCTST). We hypothesized that there would be a direct correlation between the total score and sub-scores of the assessment of critical thinking (ACT) examination and the total score and sub-scores of the CCTST.

METHODS

Student Population

Of the Western University of Health Sciences College of Veterinary Medicine graduating class of 2013, 106 members were administered both the ACT exam and the CCTST during March 2011 as a normal part of the curriculum. Students who did not take both exams within 72 hours of each other were excluded from the study. The requirement for human-subjects approval was waived by the Western University of Health Sciences Human Subject Institutional Review Board.

ACT Exam

The ACT exam consists of a novel, paper-based clinical scenario, including physical examination findings and preliminary diagnostic test results, with an identified problem list. Students are asked to focus on a single identified problem and the corresponding list of differential diagnoses. Students must choose their top two differentials and explain the pathophysiology of each specific disease process and how it relates to the case, identify supporting and non-supporting information presented in the scenario, and then develop a diagnostic plan to confirm their choices. Using a problem-oriented medical approach,11 differential diagnoses are presented mechanistically, after localization to a major body system, using the DAMNIT-V (degenerative, anomaly, metabolic, neoplastic/nutritional, inflammatory/infectious/immune-mediated, toxic/traumatic, vascular) system of organization. Appendix 1 presents the ACT scenario used in this study as it was presented to students.

The rubric for this exam has nine sections (chosen diagnoses, pathophysiology, supporting history, supporting exam data, other supporting data, non-supporting data, further diagnostic plans, terminology, and legibility) that together contribute to the 100-point total (Appendix 2). All sections except diagnostic choice, plans, terminology, and legibility are composites of two separate scores, one for each diagnostic choice. The examination is administered four times throughout the first two years of the curriculum, each time using a different novel clinical scenario, and constitutes 10% of the student's final grade. Students in this study had taken this format of exam once during each of the previous three semesters. The key for each scenario is written at the level deemed appropriate for students at their current level of competency; for this study, the key was written for fourth-semester veterinary students, as determined by faculty content experts who hold board certification in a clinical specialty. Each grader is trained on how to use the rubric and key for the specific section they grade, including mock grading of student exams until sufficient inter-rater reliability is achieved. Faculty are assigned a single diagnosis to grade, and, to maintain consistency between students, a single faculty member grades every examination section relating to their assigned diagnosis. A single faculty member grades all plans.

For the study, ACT exam scores were composites of three numerical sub-scores that correspond to Bloom's levels of learning. Pathophysiology was considered Bloom's level 2 (comprehension/understanding) and was valued at 20 points. Identification of supporting and non-supporting data was considered Bloom's level 3 (application) and was valued at 40 points. Diagnostic choice and diagnostic plans were combined, considered Bloom's level 4 (analysis), and valued at 30 points. Terminology (5 points of the total) and legibility (5 points of the total) were not specifically used in the correlations, although they are retained as part of the total ACT score.
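As a minimal sketch of this scoring arithmetic (the function and field names below are ours for illustration, not part of the exam or the original analysis), the 100-point total decomposes as follows:

```python
# Illustrative sketch (not the authors' code) of how the 100-point ACT
# total decomposes into the Bloom's-level sub-scores used in this study.
def act_total(pathophysiology: float, data_identification: float,
              diagnosis_and_plan: float, terminology: float,
              legibility: float) -> float:
    """pathophysiology: Bloom's 2, max 20; data_identification: Bloom's 3,
    max 40; diagnosis_and_plan: Bloom's 4, max 30; terminology and
    legibility: max 5 each (kept in the total, excluded from correlations)."""
    assert 0 <= pathophysiology <= 20
    assert 0 <= data_identification <= 40
    assert 0 <= diagnosis_and_plan <= 30
    assert 0 <= terminology <= 5 and 0 <= legibility <= 5
    return (pathophysiology + data_identification + diagnosis_and_plan
            + terminology + legibility)

# Example: a hypothetical student scoring near the class average.
print(act_total(18, 35, 24, 5, 4))  # 86.0 out of 100
```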

California Critical Thinking Skills Test

This study used CCTST Form A, a standardized, 34-item multiple-choice test. The test provides a total critical thinking skills score and sub-scores for five core areas of critical thinking: analysis and interpretation, evaluation and explanation, inference, deductive reasoning, and inductive reasoning. The CCTST had been administered once previously, at matriculation (August 2009). The CCTST is an external assessment of critical thinking that was validated using the Delphi method; details of the examination and its validation process are beyond the scope of this article and can be found in the literature.6 All students at Western University of Health Sciences are required to take the CCTST as part of university outcome assessment.

Statistical Methods

The degree of linear association between the total scores and sub-score numerical values of the ACT and the CCTST was determined using the Pearson correlation coefficient. The significance level was set at p≤.05. Statistical analysis was performed using SAS 9.2 (SAS Institute Inc., Cary, NC, USA).
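For readers who wish to reproduce this style of analysis, the sketch below computes Pearson's r (the covariance of two score columns divided by the product of their standard deviations) with two-tailed p-values. It is written in Python rather than the SAS used in the study, and the toy data are simulated values loosely matched to Table 1, not the study data:

```python
# Hypothetical re-creation of the correlation analysis in Python;
# the study itself used SAS 9.2. Score columns here are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2011)
n = 98  # number of students included in the study

scores = {
    "ACT total": rng.normal(84.8, 5.8, n),
    "ACT B2":    rng.normal(20.6, 2.1, n),
    "CCTST DR":  rng.normal(9.4, 2.8, n),
}

# Pearson's r and two-tailed p-value for every pair of columns,
# flagging p <= .05 as in the study.
names = list(scores)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r, p = pearsonr(scores[a], scores[b])
        flag = " *" if p <= 0.05 else ""
        print(f"{a} vs {b}: r = {r:.2f}, p = {p:.3f}{flag}")
```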

RESULTS

Ninety-eight students were included in the study. Student performance on both the ACT and the CCTST is presented in Table 1. As seen in Table 2, ACT sub-scores correlated significantly with the total ACT score, and CCTST sub-scores correlated significantly with the total CCTST score. Significant correlations were found between the ACT Bloom's level 2 sub-score and deductive reasoning (correlation coefficient 0.24, p=.018) and between the total ACT score and deductive reasoning (correlation coefficient 0.22, p=.027). No other statistically significant correlations were found, although a slight trend was seen between the total ACT score and inference (correlation coefficient 0.18, p=.076).

Table 1: Descriptive statistics of second-year veterinary student performance on the ACT exam and the CCTST taken in March 2011

          ACT total  ACT B2  ACT B3  ACT B4  CCTST total  IR     DR    AI    IN     EE
Average   84.82      20.55   35.54   24.47   20.89        11.45  9.44  5.09  10.06  5.73
Median    84.25      20      35.5    24      21           11     9     5     10     6
SD        5.84       2.09    3.89    2.97    4.34         2.15   2.77  1.26   2.22  2.13

ACT=assessment of critical thinking; CCTST=California Critical Thinking Skills Test; B=Bloom's level; IR=inductive reasoning; DR=deductive reasoning; AI=analysis and interpretation; IN=inference; EE=evaluation and explanation; SD=standard deviation

Table 2: Pearson correlations between total and sub-scores of ACT exam and total and sub-scores of CCTST administered to second-year veterinary students in March 2011

             ACT total  ACT B2  ACT B3  ACT B4  IR     DR     AI    IN    EE     CCTST total
ACT total    1          0.69    0.45    0.74    0.00   0.22*  0.14  0.18  0.02   0.14
ACT B2                  1       0.24*   0.28    −0.03  0.24*  0.14  0.15  0.04   0.14
ACT B3                          1       0.14    −0.05  0.10   0.07  0.07  −0.04  0.04
ACT B4                                  1       0.04   0.11   0.04  0.15  −0.01  0.09
IR                                              1      0.54   0.43  0.59  0.85   0.84
DR                                                     1      0.51  0.89  0.63   0.91
AI                                                            1     0.27  0.22*  0.54
IN                                                                  1     0.53   0.85
EE                                                                        1      0.83
CCTST total                                                                      1

*p<.05

p<.01

ACT=assessment of critical thinking; CCTST=California Critical Thinking Skills Test; B=Bloom's level; IR=inductive reasoning; DR=deductive reasoning; AI=analysis and interpretation; IN=inference; EE=evaluation and explanation

DISCUSSION

Results from this study demonstrate weak correlations between student performance on the ACT exam and on the CCTST. If student performance on the ACT exam does not correlate strongly with an externally validated tool designed to test critical thinking, then what does the ACT exam actually test? It is possible that the ACT exam provides another format for testing specific veterinary knowledge. Within the class, there is a subset of individuals who earn high marks on the ACT exam while performing poorly on other knowledge-based assessments, including multiple-choice and laboratory-based assessments (data not shown). It is possible that these students recall information better on contextual examinations like the ACT than on abstract formats such as multiple-choice examinations. These students could also have well-developed writing skills that allow them to articulate their knowledge better on the ACT than on other assessments.

This version of the examination focused on small-animal medicine, and it is possible that different clinical scenarios used on the ACT exam could correlate more strongly with the CCTST. Given the trend in veterinary medicine toward small-animal practice,12 students may have used pattern recognition from clinical experience rather than critical thinking to complete the examination. Pattern recognition and critical thinking are both associated with clinical reasoning.13–15 Although it is tempting to postulate that the ACT exam actually assesses clinical reasoning rather than critical thinking, it is hard to see how the component skill of critical thinking would not be demonstrated in any test of clinical reasoning. Because there are currently no validated assessment tools for clinical reasoning, it is impossible to determine whether this is what the ACT is actually assessing. The CCTST has been criticized for not relating to clinically oriented critical thinking in nursing.16 Given that the ACT examination is a clinical case, it may measure a different application of critical thinking than what is tested on the CCTST. Further studies examining the correlation between performance on the ACT examination and the Watson-Glaser Critical Thinking Appraisal could help elucidate the origin of the poor correlations.

The only significant correlations found in this study were those between the ACT Bloom's level 2 sub-score and deductive reasoning on the CCTST and between the total ACT score and deductive reasoning. Bloom's level 2 is associated with comprehension and the ability to make use of specific knowledge. This level does not imply full and complete understanding of material and is thought to be equivalent to the thinking ability of advanced high-school students.17 Our results run counter to the published literature, which promotes Bloom's level 4 or higher as evidence of critical thinking. The CCTST describes deductive reasoning as moving “from the assumed truth of a set of beliefs or premises to a conclusion which follows of necessity.”18 Deductive reasoning based on pattern recognition from the provided list of diagnoses could explain the minor correlation found between the ACT total score and CCTST deductive reasoning. Alternatively, the assumption that a description of pathophysiology reflects Bloom's level 2 thinking may be inaccurate. While the rubric was designed to tease apart the effects of pattern recognition and critical thinking, the lack of correlation could also reflect insufficient resolution between these variables; in that case, the rubric would need further refinement. Finally, the assumption that higher Bloom's levels reflect critical thinking may itself be false. More research, including correlations with knowledge-based examinations, is required to truly understand what the ACT exam is really testing.

The lack of correlation in this study demonstrates a need in medical education to validate custom assessments against validated external instruments such as the CCTST. The pursuit of scholarly teaching requires that educators examine the validity and reliability of their assessment strategies to truly understand what assessments are needed to ensure well-trained graduates.19 The CCTST is the predominant validated external assessment of critical thinking in the medical professional literature and has been used to examine the development of critical thinking over time in a variety of health professions.20–22 However, to the authors' knowledge, this is the first publication using this tool to challenge the face validity of a custom assessment strategy.

In conclusion, the ACT exam does not correlate strongly with critical thinking as defined by the CCTST. While this custom assessment could be valid for testing contextual knowledge or clinical reasoning, we do not recommend implementing this examination to assess critical thinking. It does, however, provide a prime example of why medical educators need to subject their custom assessments to rigorous validation against validated external instruments. As medical education shifts from predominantly knowledge-based curricula to the development of specific skill sets like critical thinking, it is even more important to have confidence in our assessment techniques. There may also come a time when accreditation bodies require the use of validated external instruments. Ultimately, having confidence in our assessment strategies will result in better-trained graduates.

ACKNOWLEDGMENTS

The authors wish to thank the faculty of Western University of Health Sciences for their ongoing participation in the Veterinary Basic Medical Science course.

REFERENCES

1. Hagan BA. Tools for implementing a competency-based clinical curriculum: the dental school experience. J Vet Med Educ. 2008;35(3):369-74. http://dx.doi.org/10.3138/jvme.35.3.369. Medline:19066353
2. Berkow S, Virkstis K, Stewart J, et al. Assessing individual frontline nurse critical thinking. J Nurs Adm. 2011;41(4):168-71. http://dx.doi.org/10.1097/NNA.0b013e3182118528. Medline:21430465
3. North American Veterinary Medical Education Consortium. Roadmap for veterinary medical education in the 21st century: responsive, collaborative, flexible (NAVMEC report and recommendations). Washington, DC; 2011.
4. Paul R, Nosich GM. A model for the national assessment of higher order thinking [Internet]. Foundation for Critical Thinking; [cited 2011 Oct 5]. Available from: http://www.criticalthinking.org/pages/a-model-for-the-national-assessment-of-higher-order-thinking/591
5. Macpherson K, Owen C. Assessment of critical thinking ability in medical students. Assess Eval High Educ. 2010;35(1):45-58.
6. Facione NC, Facione PA, Sanchez CA. Critical thinking disposition as a measure of competent clinical judgment: the development of the California Critical Thinking Disposition Inventory. J Nurs Educ. 1994;33(8):345-50. Medline:7799093
7. Swinny B. Assessing and developing critical-thinking skills in the intensive care unit. Crit Care Nurs Q. 2010;33(1):2-9. Medline:20019504
8. Connors P. Assessing written evidence of critical thinking using an analytic rubric. J Nutr Educ Behav. 2008;40(3):193-4. http://dx.doi.org/10.1016/j.jneb.2008.01.014. Medline:18457791
9. Gilbert SW. Systematic questioning: taxonomies that develop critical thinking skills. Sci Teach. 1992;59(9):41-6.
10. Richlin L. Blueprint for learning. Sterling, VA: Stylus Publishing, LLC; 2006. p. 45-51.
11. Lorenz M. Small animal clinical diagnosis. 3rd ed. Ames, IA: Blackwell Publishing; 2009. p. 3-12.
12. Willis NG. The animal health foresight project. Vet Ital. 2007;43(2):247-56. Medline:20411514
13. Tomlin JL, Pead MJ, May SA. Veterinary students' attitudes toward the assessment of clinical reasoning using extended matching questions. J Vet Med Educ. 2008;35(4):612-21. http://dx.doi.org/10.3138/jvme.35.4.612. Medline:19228917
14. Tomlin JL, Pead MJ, May SA. Attitudes of veterinary faculty to the assessment of clinical reasoning using extended matching questions. J Vet Med Educ. 2008;35(4):622-30. http://dx.doi.org/10.3138/jvme.35.4.622. Medline:19228918
15. Carrière B, Gagnon R, Charlin B, et al. Assessing clinical reasoning in pediatric emergency medicine: validity evidence for a Script Concordance Test. Ann Emerg Med. 2009;53(5):647-52. http://dx.doi.org/10.1016/j.annemergmed.2008.07.024. Medline:18722694
16. Allen GD, Rubenfeld MG, Scheffer BK. Reliability of assessment of critical thinking. J Prof Nurs. 2004;20(1):15-22. http://dx.doi.org/10.1016/j.profnurs.2003.12.004. Medline:15011189
17. Vuchetich PJ, Hamilton WR, Ahmad SO, et al. Analyzing course objectives: assessing critical thinking in the pharmacy curriculum. J Allied Health. 2006;35(4):e253-75. Medline:19759975
18. Insight Assessment. Scales of the CCTST [Internet]. [cited 2011 Oct 5]. Available from: http://www.insightassessment.com/Products/Critical-Thinking-Skills-Tests/California-Critical-Thinking-Skills-Test-CCTST/Scales-of-the-CCTST
19. Downing SM, Haladyna TM. Validity threats: overcoming interference with proposed interpretations of assessment data. Med Educ. 2004;38(3):327-33. http://dx.doi.org/10.1046/j.1365-2923.2004.01777.x. Medline:14996342
20. Bartlett DJ, Cox PD. Measuring change in students' critical thinking ability: implications for health care education. J Allied Health. 2002;31(2):64-9. Medline:12040999
21. Wong MS. A prospective study on the development of critical thinking skills for student prosthetists and orthotists in Hong Kong. Prosthet Orthot Int. 2007;31(2):138-46. http://dx.doi.org/10.1080/03093640600983931. Medline:17520491
22. McCarthy P, Schuster P, Zehr P, et al. Evaluation of critical thinking in a baccalaureate nursing program. J Nurs Educ. 1999;38(3):142-4. Medline:10102514
APPENDIX 1

Assessment of critical thinking (ACT) examination scenario and format given March 2011 to second-year veterinary students

Please list your top two choices for differential diagnoses here before handing in the exam:

Differential Diagnosis #1: [Part A1]

Differential Diagnosis #2: [Part A2]

Presenting Scenario:

You are a partner in a busy small-animal practice in Southern California. As you leave one of the exam rooms, your veterinary technician hands you the record for your next patient. “Roscoe” is an 8-year-old male Rottweiler that you have seen regularly since he was a puppy for wellness exams and vaccinations. A quick check of his file reveals that he received his most recent DHP-P and rabies vaccines 18 months ago. Roscoe's owner, Ms. Taylor, is concerned because Roscoe's left eye and face “look odd.” The first thing you notice about Roscoe when you enter the exam room is a slightly lopsided (asymmetrical) appearance to his face and head. When Roscoe stands up and trots over to greet you, his gait appears normal. Ms. Taylor reports that Roscoe has not had any problems with vomiting or diarrhea, and he has not been coughing or sneezing. His activity level has decreased over the past few months, and Ms. Taylor believes that Roscoe has gained some weight. His normal diet is a commercial adult dog food (usually whatever is on sale) fed free choice. Although his appetite has been good, he seems to be eating more slowly in addition to dribbling small amounts of food from his mouth occasionally.

In response to your question about potential injuries, Ms. Taylor sheepishly admits that Roscoe was riding in the back of her pick-up truck a couple of weeks ago when they were involved in a minor rear-end collision. Roscoe slid forward in the truck and may have bumped his head, but he jumped up immediately and seemed to be okay afterward.

Physical Examination:

Temperature: 100.6°F

Pulse: 68/min

Respiration: panting

Weight: 97.5 lbs

Hydration Status: Normal
Attitude: BAR
Body Condition: 3.5/5

Mucous Membranes: Pink and moist

Capillary Refill Time: 1 sec

INDICATE: (N-Normal; A-Abnormal; NE-Not Examined)


01 HEAD A    05 ORAL CAVITY A    09 MAMMARY GLANDS N    13 RECTAL NE
02 EYES A    06 CERVICAL N       10 ABDOMEN N           14 SKIN N
03 EARS N    07 THORAX N         11 BACK/TAIL N         15 EXTREMITIES N
04 NOSE N    08 HEART N          12 GENITALIA N         16 NEUROLOGIC A

01 Temporalis and masseter muscle atrophy – left side, moderate

02 Diminished palpebral and corneal reflexes OS (menace and pupillary reflexes intact); slight corneal opacity OS

05 Small amount of food is present in left cheek

16 Diminished facial sensation (decreased response to pin pricks) on left side as compared to right side of face; swallow reflex is normal

Laboratory data:

CBC and Blood Chemistry Panel

TEST                Patient   Normal Range
Red blood cells     7.3       5.5–8.5 × 10⁶/µl
Hemoglobin          15        12–18 g/dL
PCV (hematocrit)    46        37–55%
White blood cells   11,500    6,000–17,000/µl
Neutrophils         7,475     3,000–11,400/µl
Lymphocytes         2,645     1,000–4,800/µl
Monocytes           460       150–1,350/µl
Eosinophils         920       100–750/µl
Platelets           350       200–900 × 10³/µl
BUN                 20.4      8.8–26 mg/dL
ALT                 45        8.2–57 IU/L
Alk Phos            15.5      10.6–101 IU/L
Creatine kinase     85        14–120 IU/L
Ca2+                10.2      8.7–11.8 mg/dL
Na+                 142       140–154 meq/L
K+                  4.63      3.8–5.6 meq/L
Total Protein       7.0       6.0–7.5 g/dL
Albumin             3.9       2.7–4.4 g/dL
Globulin            3.1       1.6–3.4 g/dL

Problem: Asymmetry of head

Based on your physical examination and laboratory data, choose your top two differential diagnoses from the list below and justify your choices on the following pages (note: this list is NOT comprehensive):

Degenerative

  • Degenerative myelopathy

Metabolic

  • Hypothyroidism-associated peripheral neuropathy

Neoplastic

  • Trigeminal nerve neoplasia (nerve-sheath tumor)

  • CNS neoplasia (meningioma, other)

Infectious/Inflammatory/Immune-mediated

  • Viral meningoencephalitis

  • Masticatory muscle myositis

  • Otitis media/interna-associated neuropathy

Traumatic

  • Horner's syndrome secondary to head trauma

  • Traumatic myopathy

Vascular

  • Cerebrovascular incident (ischemia or hemorrhage)

PARTS A1 and A2
Problem: Asymmetry of head

Differential Diagnosis #1 (from list given): [Differential 1 is Part A1. Differential 2 is Part A2. Students provide the pathophysiology and table information for each differential separately.]

How does the PATHOPHYSIOLOGY of this differential diagnosis explain the presenting clinical scenario?

[Bloom's 2, Rubric Row 2]


History Information Supporting: [Bloom's 3, Rubric Row 3]    Rationale: [Bloom's 3]
Physical Examination Information Supporting: [Bloom's 3, Rubric Row 4]    Rationale: [Bloom's 3]
Other Supporting Information: [Bloom's 3, Rubric Row 5]    Rationale: [Bloom's 3]
Non-supportive Information: [Bloom's 3, Rubric Row 6]    Rationale: [Bloom's 3]

PART B
Problem: Asymmetry of head

Differential Diagnosis #1 (from list given): [Bloom's 4, Rubric Row 1]

Differential Diagnosis #2 (from list given): [Bloom's 4, Rubric Row 1]

How will you determine which diagnosis is most likely? What can you do to help rule out the other potential diagnoses? Justify your answer.


Plan                         Rationale
[Bloom's 4, Rubric Row 7]    [Bloom's 4]

APPENDIX 2

Assessment of critical thinking (ACT) rubric used March 2011 for grading of the ACT exam delivered to second-year veterinary students

Student ID:___________  Grader ID:___________


Row 1: Chosen diagnoses [Bloom's 4]

  • Two appropriate choices from list of differential diagnoses consistent with data available: 10 pts
  • One appropriate choice from list of differential diagnoses consistent with data available: 8 pts
  • No appropriate choices from list of differential diagnoses: 4 pts

Row 2: Pathophysiology (scored for A1/A2) [Bloom's 2]

  • Pathophysiologic mechanism is described accurately for the chosen differential diagnosis; demonstrates insight and specific knowledge: 10/10 pts
  • Minor error in description of pathophysiologic mechanism for the differential diagnosis; moderate evidence of insight or specific knowledge: 8/8 pts
  • Multiple minor errors in description of pathophysiologic mechanism; restricted ability to describe pathophysiologic mechanism of differential diagnosis; some evidence of insight or specific knowledge: 7/7 pts
  • Major errors in description of mechanism for the differential diagnosis; little evidence of insight or specific knowledge: 5/5 pts
  • Pathophysiologic mechanism poorly described for the differential diagnosis; specific knowledge not demonstrated: 3/3 pts

Row 3: Supporting history (scored for A1/A2) [Bloom's 3]

  • Correct identification of all supporting history information; appropriate rationale for all information: 5/5 pts
  • Correct identification of most supporting history information; appropriate rationale for most information: 4.5/4.5 pts
  • Correct identification of some supporting history information; appropriate rationale for some information: 4/4 pts
  • Correct identification of some supporting history information; rationale not given or incorrect: 3/3 pts
  • Incorrect identification of supporting history information; rationale not given or incorrect: 1/1 pt

Row 4: Supporting physical exam data (scored for A1/A2) [Bloom's 3]

  • Correct identification of all supporting physical exam information; appropriate rationale for all information: 5/5 pts
  • Correct identification of most supporting physical exam information; appropriate rationale for most information: 4.5/4.5 pts
  • Correct identification of some supporting physical exam information; appropriate rationale for some information: 4/4 pts
  • Correct identification of some supporting physical exam information; rationale not given or incorrect: 3/3 pts
  • Incorrect identification of supporting physical exam information; rationale not given or incorrect: 1/1 pt

Row 5: Other supporting data (scored for A1/A2) [Bloom's 3]

  • Correct identification of all other supporting information; appropriate rationale for all information: 5/5 pts
  • Correct identification of most other supporting information; appropriate rationale for most information: 4.5/4.5 pts
  • Correct identification of some other supporting information; appropriate rationale for some information: 4/4 pts
  • Correct identification of some other supporting information; rationale not given or incorrect: 3/3 pts
  • Incorrect identification of other supporting information; rationale not given or incorrect: 1/1 pt

Row 6: Non-supporting data (scored for A1/A2) [Bloom's 3]

  • Correct identification of all nonsupportive information; appropriate rationale for all information: 5/5 pts
  • Correct identification of most nonsupportive information; appropriate rationale for most information: 4.5/4.5 pts
  • Correct identification of some nonsupportive information; appropriate rationale for some information: 4/4 pts
  • Correct identification of some nonsupportive information; rationale not given or incorrect: 3/3 pts
  • Incorrect identification of nonsupportive information; rationale not given or incorrect: 1/1 pt

Row 7: Further diagnostic plans [Bloom's 4]

  • Thorough, detailed investigative plan associated clearly with all defined differentials; appropriate rationale given for each plan: 20 pts
  • Adequate plan associated clearly with all defined differentials; appropriate rationale given for most plans: 16 pts
  • Basic plan; some stated in generalities not clearly associated with defined ideas; rationale not well defined for some plans: 14 pts
  • Generalities not specific and not clearly associated with defined ideas; incorrect rationale given or missing rationale for most plans: 10 pts
  • Major action plan of significance to the investigation not defined; little or no rationale given: 6 pts

Terminology [not included in Bloom's sub-scores]

  • Appropriate terminology demonstrating expert use of accurate scientific vocabulary and spelling: 5 pts
  • Appropriate terminology; usually adequate use of accurate scientific vocabulary: 4 pts
  • Some use of lay terminology where scientific terms were expected; some misuse of terms; minor misspellings: 3 pts
  • Inadequate use of precise terminology; embarrassing misuse of terms; significant misspellings: 2 pts
  • Failed to use appropriate terminology; vocabulary errors would result in miscommunication and injury to the patient: 1 pt

Legibility [not included in Bloom's sub-scores]

  • Notations were orderly, legible, and clearly demonstrated the student's thought process and logic: 5 pts
  • Notations were effective but presented a few problems with order, legibility, thought process, and/or logic: 4 pts
  • Notations were effective but presented multiple problems with order, legibility, thought process, and/or logic: 3 pts
  • Inadequate or illegible notations; multiple problems with order, legibility, thought process, and/or logic: 2 pts
  • Failed to communicate most thought process and/or logic: 1 pt
Scoring Code

Grade          A       B      C      D      U
% equivalent   90–100  80–89  70–79  65–69  0–64