Volume 39 Issue 4, Winter 2012, pp. 396-403

ABSTRACT

Our second-year core clinical pathology course uses free-response, case-based learning exercises within an otherwise traditional lecture and laboratory course format to augment the development of skills in the application of knowledge, critical thinking, and clinical reasoning. We previously reported increased learner confidence, perceived improvements in understanding and the ability to apply information, and enhanced feelings of preparedness for examinations that students attributed to the case-based exercises. The current study prospectively follows a cohort of students to determine the ability of traditional multiple-choice versus free-response case-based assessments to predict future academic performance and to determine whether the perceived value of the case-based exercises persists through the curriculum. Our data show that, after holding multiple-choice scores constant, better performance on case-based free-response exercises led to higher GPA and better class rank in the second and third years and better class rank in the fourth year. Students in clinical rotations reported that the case-based approach was superior to the traditional lecture and multiple-choice examination format for learning clinical reasoning, retaining factual information, organizing information, communicating medical information clearly to colleagues in clinical situations, and preparing high-quality medical records. In summary, this longitudinal study shows that case-based free-response writing assignments are efficacious above and beyond standard measures in predicting students' GPAs and class rank and in supporting students' acquisition of knowledge, skills, and clinical reasoning. Students value these assignments and overwhelmingly find them an efficient use of their time, and these opinions are maintained even two years following the course.

The report “Working to Develop a Roadmap for Veterinary Medical Education in the 21st Century,”1 prepared by the North American Veterinary Medical Education Consortium (NAVMEC), calls for a competency-driven curriculum that uses a spectrum of contemporary teaching and learning techniques to ensure proficiency in core competencies. The same report acknowledges that this must be accomplished in an increasingly resource-restricted environment, creating a need for objective evaluation of the effectiveness and efficiency of learning techniques and assessment methods in terms of student outcomes. Unfortunately, a recent systematic review of the literature revealed little published information evaluating educational assessment methods in veterinary medicine.2

To address these educational issues and the gap in the veterinary medical education literature, the current study builds on an original report describing the application of case-based free-response writing assignments in a core clinical pathology course designed to encourage the development of key competencies such as the application of factual knowledge, critical thinking, and clinical reasoning.3 Case-based teaching is similar to problem-based learning in that it encourages discussion and exploration of material; however, proponents argue that the additional structure provided in case-based learning improves focus and efficiency. This argument is supported by faculty and learner feedback in medical school curricula.4 In our class structure, we use case-based learning exercises to augment an otherwise traditional lecture/laboratory instructional format. In our original study, students reported increased learner confidence, improved understanding and ability to apply information, and improved feelings of preparedness for examinations that they attributed to the case-based exercises. Because many veterinary students are well adapted to multiple-choice assessments and are less accustomed to being evaluated on free-response items, we also evaluated student perceptions of the grading of the cases. Most students felt that grading was appropriate and that feedback contributed to improved performance on subsequent assignments; however, approximately 10% of the class disagreed. Regardless, a large majority of students recommended the continued use of these assignments in the course, and we have done so. At the conclusion of that study, however, we felt it was critical to characterize the impact of this learning technique relative to traditional methods as students progressed through the curriculum and to evaluate the functionality of case-based writing exercises as assessment tools. In particular, the goals of this longitudinal cohort study were to (1) compare free-response case-based writing and standard multiple-choice assessments in the prediction of educational outcomes, (2) determine whether the short-term positive impact on learning reported by students in previous years3 persists later in the curriculum, and (3) explore student perceptions of the grading process.

Experimental Design

The second-year, four-credit clinical pathology core course was delivered in 2009. Student performance on assessments was recorded and de-identified according to University of Minnesota Institutional Review Board guidelines. Specifically, we recorded the scores for the multiple-choice and free-response case-based components of the cumulative final examination, as well as the total course free-response score (homework case-based writing assignments + free-response case-based component of the final examination) and the total course multiple-choice score (quiz grades + midterm examination + multiple-choice component of the final examination). A course evaluation survey was distributed at the end of the semester; students included their student identification number and delivered the completed surveys to the Office of Academic Affairs, where the survey data were de-identified and recorded in a spreadsheet. The performance of this cohort of students was followed in the third and fourth years by recording and de-identifying the third-year course grade point average (GPA) and fourth-year clinical rotation GPA, as well as third- and fourth-year cumulative class rank. Students in this cohort completed a survey about the long-term impact of the clinical pathology course on learning and academic preparedness at the end of the third-year didactic curriculum and after six months of fourth-year clinical rotations. The study protocol and surveys were reviewed and approved by the University of Minnesota Institutional Review Board.

Course Description

The clinical pathology course is offered in the second semester of the second year of the DVM program at the University of Minnesota. In 2009, there were three main instructors, all board-certified veterinary clinical pathologists, with two lectures given by a senior resident. The course consists of standard didactic lectures interspersed with numerous short case studies designed to reinforce factual material and develop skills in data interpretation and clinical decision making. The cases consist of brief clinical vignettes that incorporate clinical history, physical examination findings, and clinical laboratory data. The learning activity involves summarizing the information in appropriate medical terminology, articulating a mechanistic interpretation of the data, developing a list of differential diagnoses, and proposing appropriate subsequent diagnostic testing that may or may not be laboratory based. There are three laboratory sections for microscopy instruction. The first half of the course covers hematology, with a small section on cytology and fluid analysis; the second half focuses on clinical chemistry. The learning is cumulative and integrative, so that, as students progress through the course, they are presented with patient data relevant to the current topic as well as data from all other subjects previously covered.

Assessments include a series of seven take-home, open-book, free-response, cumulative case-based problem sets of increasing complexity that are similar in content and format to the case examples discussed in class, as described previously.3 Collaborative work was encouraged, but each student was required to turn in their own assignment. Grading rubrics and examples were provided before the first day of class as part of the syllabus and were highlighted as part of the introduction to the course.3 On the day these assignments were turned in, a multiple-choice and fill-in-the-blank quiz based on the material covered in the case set was given in class. Scores and written feedback on each case set were returned to students before the next set was due, and the instructors grading the homework assignments identified common areas of confusion and reviewed them during lectures. To improve consistency, only one instructor graded each case. Although subjective, grading was performed using pre-determined, point-based keys unique to each case, and the three main instructors consulted with one another when there was uncertainty about how to grade an individual student's written response. A multiple-choice midterm examination incorporated fact-based recall questions as well as interpretive case-based questions. The final examination was cumulative, with a relative emphasis on clinical chemistry, and consisted of two parts: a multiple-choice examination similar in format to the midterm and a free-response case-based section. The case-based section was identical in format to the homework cases except that it was closed book and, like all examinations in the course, required independent work. The relative contribution of each assessment to the final grade was as follows: quizzes = 10%; free-response case-based homework assignments = 20%; multiple-choice-only midterm examination = 30%; free-response case-based component of the final examination = 10%; multiple-choice component of the final examination = 30%. In total, 70% of the grade was based on multiple-choice assessments and 30% on free-response case interpretations.
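As a minimal sketch of how these stated weights combine into a final course grade, the R snippet below uses invented component scores for a hypothetical student; the scores are illustrative only and are not data from the course.

```r
# Hypothetical illustration of the grade weighting described above;
# the component scores are invented, not taken from the course data.
components <- c(quizzes = 85, fr_homework = 88, mc_midterm = 80,
                fr_final = 84, mc_final = 79)       # scores in %
weights    <- c(quizzes = 0.10, fr_homework = 0.20, mc_midterm = 0.30,
                fr_final = 0.10, mc_final = 0.30)   # contribution to final grade

final_grade <- sum(components * weights)            # weighted course grade
final_grade
#> [1] 82.2
# Multiple-choice weight: quizzes + midterm + MC final component = 0.70
# Free-response weight:   homework + FR final component          = 0.30
```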

Surveys

The second-year survey was designed to elicit feedback on students' impressions of the learning value of the case assignments compared with traditional learning formats, the amount of effort relative to the benefit, and whether the mechanisms for feedback were effective. Because of our experimental design, we were able to track changes over time for individual students. The third- and fourth-year surveys were designed to be very short to increase compliance and focused on the retention of knowledge and skills and the relative benefit of case-based writing exercises compared with traditional learning and assessment formats in preparing students for subsequent stages of the curriculum. Each survey offered an opportunity to make suggestions for improving the use of the cases. The surveys were approved by the University of Minnesota Institutional Review Board.

Statistical Analysis

Multiple regression analysis was used to assess the predictive power of the free-response items above and beyond that of the multiple-choice items. For all regressions, the predictor scores were mean adjusted (centered) so that the intercept reflects the grand mean of the dependent variable (e.g., GPA). Dependent variables included second-year cumulative GPA, third- and fourth-year GPA, and second-, third-, and fourth-year cumulative class rank. A per-test α=.05 was chosen for all tests (i.e., p-values less than .05 were deemed statistically significant) to maintain adequate statistical power and because of the exploratory nature of the analyses. R-squared values were calculated to assess how much of the variation in each dependent variable (grade point average and class rank) was explained by the two independent variables (case-based and multiple-choice assessment scores). Paired t-tests were used to assess whether the means of the free-response and multiple-choice grades differed on the final examination and in the total course grades. Lastly, the omnibus Friedman test was used to assess whether survey responses differed over the three years. The Friedman test is a nonparametric counterpart to repeated-measures ANOVA; it does not assume normality and uses the chi-square as its test statistic. Because of the ordinal nature of the Likert scale and the skewness of the results (a predominance of 4 and 5 responses), we chose the Friedman test rather than ANOVA. When the omnibus Friedman test was significant, pair-wise comparisons were performed to determine where the differences occurred between years. All analyses were conducted using R version 2.14.0.
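A minimal sketch of this modeling approach in R is shown below. The data frame and variable names (scores, mc_total, fr_total, gpa_y2, item_y2, etc.) are hypothetical placeholders rather than the actual study dataset, and the pairwise Wilcoxon signed-rank tests are one common follow-up choice; the exact pairwise procedure is not specified in the text.

```r
# Illustrative sketch of the analyses described above (hypothetical variable names).
# scores: one row per student, with total multiple-choice (mc_total) and
# free-response (fr_total) scores, outcomes (e.g., gpa_y2), and responses to the
# same Likert survey item in years 2-4 (item_y2, item_y3, item_y4).
scores <- read.csv("scores.csv")

# Mean-center the predictors so the regression intercept equals the grand mean
# of the dependent variable.
scores$mc_c <- scores$mc_total - mean(scores$mc_total)
scores$fr_c <- scores$fr_total - mean(scores$fr_total)

# Multiple regression: does the free-response score predict GPA above and
# beyond the multiple-choice score?
fit <- lm(gpa_y2 ~ mc_c + fr_c, data = scores)
summary(fit)   # coefficient estimates, per-predictor p-values, and R-squared

# Paired t-test comparing free-response and multiple-choice final examination grades.
t.test(scores$fr_final, scores$mc_final, paired = TRUE)

# Omnibus Friedman test across years for a repeated Likert item (rows are students,
# columns are years), followed by pairwise Wilcoxon signed-rank comparisons when
# the omnibus test is significant.
likert <- as.matrix(scores[, c("item_y2", "item_y3", "item_y4")])
friedman.test(likert)
pairwise.wilcox.test(as.vector(likert),
                     rep(c("y2", "y3", "y4"), each = nrow(likert)),
                     paired = TRUE)
```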

Descriptive Statistics

Table 1 contains the descriptive statistics for the assessment modeling. The initial number of students in the cohort was 82; however, due to attrition, the modeling for the fourth year included only 80 students. The GPA in the fourth-year clinical practicum (3.49) was significantly higher than in the third-year didactic courses (3.29; mean difference=.18, t(83)=−4.78, p<.001, paired t-test), with minimal variation in the data for the fourth-year GPA. The average free-response grades were significantly higher than the multiple-choice grades for both the final examination (84.3% vs. 79.2%, respectively; mean difference=4.7, t(82)=6.47, p<.001, paired t-test) and the total course grades (86.7% vs. 81.9%, respectively; mean difference=5.1, t(82)=5.40, p<.001, paired t-test).

Table 1: Descriptive statistics for student performance

Parameter                                                   Mean   SD
Cumulative second-year GPA                                  3.23   0.46
Third-year course GPA                                       3.29   0.38
Fourth-year course GPA                                      3.49   0.25
Second-year free-response final examination score (%)      84.3   8.8
Second-year multiple-choice final examination score (%)    79.2   10.6
Second-year free-response cumulative course grade (%)      86.7   6.4
Second-year multiple-choice cumulative course grade (%)    81.9   9.0

GPA=grade point average

Predictive Value of Assessments

To determine whether performance on free-response questions predicted GPA or class rank beyond what was predicted by multiple-choice scores alone, multiple regression was run on GPA and class rank for years two, three, and four. The total course free-response score significantly improved the prediction of both GPA and class rank in the second and third years, and of class rank in all three years, compared with the total multiple-choice assessment score alone (see Table 2). The free-response component of the final examination in the veterinary clinical pathology course was a significant predictor only of fourth-year class rank. Holding the multiple-choice cumulative score constant, a 10-point increase in the cumulative free-response score resulted in a 0.23 increase in the cumulative second-year GPA and a 14-place improvement in the second-year class rank (p<.001 for both).

Table 2: Summary of multiple regression results*

                                                  Final examination               Total course grade
Dependent variable        Independent variable    Estimate (SE)     p      R²     Estimate (SE)    p      R²
GPA
  Cumulative second year  Intercept               3.238 (0.033)            .58    3.239 (0.026)           .75
                          MC                      0.030 (0.004)    <.001          0.032 (0.004)    <.001
                          FR                      0.006 (0.005)    .239           0.023 (0.005)    <.001
  Third year              Intercept               3.300 (0.027)            .60    3.30 (0.02)              .78
                          MC                      0.026 (0.003)    <.001          0.029 (0.003)    <.001
                          FR                      0.005 (0.004)    .221           0.018 (0.004)    <.001
  Fourth year             Intercept               3.49 (0.03)              .10    3.49 (0.269)             .09
                          MC                      0.003 (0.003)    .367           0.007 (0.004)    .129
                          FR                      0.006 (0.004)    .127           0.004 (0.006)    .545
Class rank
  Cumulative second year  Intercept               44.0 (1.8)               .60    44.0 (1.4)                .77
                          MC                      −1.7 (0.2)       <.001          −1.7 (0.2)        <.001
                          FR                      −0.4 (0.3)       .127           −1.4 (0.3)        <.001
  Cumulative third year   Intercept               47.0 (1.9)               .64    47.7 (1.4)                .80
                          MC                      −1.9 (0.2)       <.001          −2.1 (0.2)        <.001
                          FR                      −0.5 (0.3)       .095           −1.4 (0.3)        <.001
  Cumulative fourth year  Intercept               43.8 (1.7)               .64    43.7 (1.3)                .80
                          MC                      −1.6 (0.1)       <.001          −1.9 (0.2)        <.001
                          FR                      −0.5 (0.3)       .038           −1.3 (0.3)        <.001

SE=standard error; MC=multiple choice; FR=free response; GPA=grade point average

*The final examination free-response score does not add predictive value beyond the multiple-choice final examination results for GPA in any year, but it does improve the prediction of class rank in the fourth-year clinical rotations. The total free-response score (homework + final examination) significantly improves the ability to predict GPA in the second and third years, as well as class rank in years two through four.

The results are qualitatively identical for the third-year course GPA and third-year cumulative class rank (Table 2). For a given multiple-choice cumulative score, a 10-point increase in the cumulative free-response score correlated with a 0.17 increase in third-year GPA and a 14-place improvement in third-year class rank (p<.001). Both the final examination free-response component and the total free-response course grade were more predictive of fourth-year class rank than multiple-choice items alone; however, neither was significantly associated with fourth-year GPA, likely because of the negligible variation in that value. Holding the multiple-choice scores constant, a 10-point increase in the final examination free-response score resulted in a 5-place improvement in fourth-year class rank, and a 10-point increase in the cumulative free-response score for the total course resulted in a 13-place improvement.
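For clarity, the fitted model for cumulative second-year GPA can be written out using the total-course-grade coefficients in Table 2. This is a restatement of the tabulated estimates rather than an additional analysis, with MC and FR denoting the course multiple-choice and free-response scores.

```latex
% Fitted model for cumulative second-year GPA (Table 2, total course grade columns);
% predictors are mean-centered, so the intercept is the grand mean GPA.
\widehat{\mathrm{GPA}}_{\mathrm{yr\,2}}
  = 3.239
  + 0.032\,\bigl(\mathrm{MC}-\overline{\mathrm{MC}}\bigr)
  + 0.023\,\bigl(\mathrm{FR}-\overline{\mathrm{FR}}\bigr)
% Holding MC constant, a 10-point increase in FR shifts the predicted GPA by
% 10 x 0.023 = 0.23 points; the corresponding class-rank coefficient (-1.4)
% gives 10 x 1.4 = a 14-place improvement.
```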

Survey Data
Second Year

All students agreed that the case exercises were a valuable part of the course, and 91.6% felt that the learning value justified the time required to complete the work (see Table 3). The number of cases was considered to be “just right” by 83% of students, while 7% thought there were too few cases and 10% thought there were too many. The median number of hours spent on each set of assignments was reported to be 7.8 hours (range=1.5–24), with 18.5% of students spending fewer than 5 hours, 35.6% spending 5–7 hours, 31.3% spending 8–10 hours, and 14.6% spending more than 10 hours per assignment. Almost all students (94%) indicated that the case exercises encouraged them to keep current with the material, resulting in less last-minute examination preparation (see Table 3), suggesting that much of the time spent on assignments contributed to examination preparation and overall learning. An overwhelming majority of students indicated that the case-based writing exercises were better than standard didactic lectures and multiple-choice tests in helping them understand the material (94.0%), learn clinical reasoning and problem-solving skills (98.8%), and develop skills in finding and organizing information (98.8%). Most students (81.9%) even felt that the exercises promoted the acquisition of factual information. Of all the assessments in the course, the case-based exercises were most important to learning for 88.6% of students, while small numbers of students felt that the quizzes (2.4%) or the multiple-choice components of the examinations (9.0%) were superior learning tools. For comparison with our previous data, we evaluated student perceptions of the grading of the free-response cases. In this cohort, most students (64.0%) perceived the grading of the case exercises to be fair, while 23.0% were unsure and the remaining 13.0% were dissatisfied with the grading process. Feedback on the writing assignments helped most students understand the material (72.0%) and do a better job on the subsequent case set (61.0%); however, only about half of the students reported that the feedback helped them understand their assignment grade.

Table 3: Second-year end-of-semester survey results*

Question SD D U A SA
Case write ups are a valuable part of the clinical pathology course. 0 0 0 13.3 86.7
The time spent working on case assignments was justified by the amount I learned by doing them. 1.2 1.2 6.0 45.8 45.8
Grading of the assignments was appropriate and fair. 1.2 12.0 22.9 54.2 9.6
The case-based writing assignments helped me keep up in class so I did less last-minute studying for exams. 1.2 1.2 3.6 41.0 53.0
Compared with the lectures and multiple-choice tests, the case-based writing exercises helped me better:
learn the factual information. 0 7.2 10.8 49.4 32.5
understand the material. 0 2.4 3.6 44.6 49.4
learn clinical reasoning and problem solving skills. 0 0 1.2 37.3 61.4
develop skills in finding and organizing information. 0 1.2 1.2 48.2 49.4
Feedback on writing assignments helped me:
understand the material better. 0 9.6 16.9 61.4 12.0
do a better job on the next set of cases. 0 15.7 22.9 50.6 10.8
understand my grade on assignments. 9.6 10.8 30.1 36.1 13.3

SD=strongly disagree; D=disagree; U=unsure; A=agree; SA=strongly agree

*Data are expressed as % of the 82 respondents.

Third Year

Adequate retention of knowledge and skills from the clinical pathology course to succeed in subsequent course work was reported by 81.9% of students, while 88.0% reported that, upon reflection after an additional year of the curriculum, the learning value of the case-based exercises continued to justify the time devoted to them (see Table 4). Likewise, overwhelming numbers of students felt that, compared with lectures and multiple-choice assessments, the case-based exercises were better for helping them retain information (92.8%), learn clinical reasoning (90.4%), and develop skills in finding and organizing information (92.7%).

Table 4: Third-year survey results*

Question SD D U A SA
I have retained adequate knowledge and skills from the clinical pathology course to succeed in my third-year coursework. 0.0 14.8 13.3 75.9 6.0
I learned and retained sufficient knowledge and skills that apply to my third-year classes to justify the effort I spent on the case-based writing exercises in veterinary clinical pathology. 0.0 4.8 7.2 62.7 25.3
Compared with the lectures and multiple-choice tests, the case-based writing exercises helped me better:
retain information needed for the third-year course work. 0.0 0.0 7.2 62.7 30.1
learn clinical reasoning needed for the third-year course work. 0.0 0.0 9.6 48.2 42.2
develop skills in finding and organizing information. 0.0 0.0 7.2 55.4 37.3

SD=strongly disagree; D=disagree; U=unsure; A=agree; SA=strongly agree

*Data are expressed as % of the 82 respondents.

Fourth Year

Adequate retention of knowledge and skills from the clinical pathology course to succeed in clinical rotations was reported by 85% of students, while 90.3% continued to report that the learning value of the case-based exercises justified the time devoted to them (see Table 5). A large majority of students reported that the case-based exercises were superior to traditional lecture and multiple-choice formats in preparing them for the fourth year: retaining information needed for clinical rotations (81.9%), learning clinical reasoning and decision-making skills needed in clinical rotations (91.6%), developing skills in finding and organizing information (87.5%), communicating medical information clearly to colleagues in clinical rotations (86.0%), and preparing high-quality medical records (77.8%).

Table 5: Fourth-year survey results*

Questions SD D U A SA
1. I have retained adequate knowledge and skills from the clinical pathology course to succeed in my clinical rotations. 0.0 2.8 12.5 68.1 16.7
2. I learned and retained sufficient knowledge and skills that apply to my clinical rotations to justify the effort I spent on the case-based writing exercises in veterinary clinical pathology. 0.0 2.8 6.9 54.2 36.1
Compared with the lectures and multiple-choice tests, the case-based writing exercises in veterinary clinical pathology helped/prepared me to better:
3. retain information needed for clinical rotations. 0.0 5.6 12.5 48.6 33.3
4. learn clinical reasoning/decision making skills needed in my clinical rotations. 0.0 4.2 4.2 59.7 31.9
5. develop skills in finding and organizing information. 0.0 4.2 8.3 52.8 34.7
6. communicate medical information clearly to colleagues in clinical rotations. 0.0 2.8 11.1 54.2 31.9
7. prepare high-quality medical records in clinical rotations. 0.0 8.3 13.9 54.2 23.6

SD=strongly disagree; D=disagree; U=unsure; A=agree; SA=strongly agree

*Data are expressed as % of the 80 respondents.

Student Perceptions as a Function of Year in Curriculum

Analysis of selected repeat survey items allowed evaluation of student perceptions over time (see Figure 1). The perception of the importance of the case-based exercises in “learning clinical reasoning” was highest immediately following the end of the course (mean=4.40, where 4 corresponds to agree and 5 to strongly agree) and remained high but decreased over the following two years (mean=4.33 and mean=4.27 for years three and four, respectively; χ2[2]=16.62, p=.001, η2=.092).

Figure 1: Evaluation of student perceptions of the value of case-based writing assessments from the second through the fourth year. The assessments were considered to be extremely helpful in learning clinical reasoning in the second year, likely because this is one of the first courses in the curriculum that incorporates clinical decision making. The perception of value decreased but remained very high between years two and three and between years three and four as the rest of the curriculum developed a greater emphasis on clinical reasoning. The assessments were persistently reported to help students learn and retain factual information. The perceived value of the cases relative to effort was very high at the end of the course but dropped between years two and three. In year four, the value increased significantly compared with the previous year, but not quite to the original level.

* Statistically significant difference from the previous year p<.05, Friedman test with pairwise comparisons.

Students also indicated that case-based exercises helped them “learn factual information” (mean=4.18), and these exercises were perceived to be more valuable than traditional teaching techniques for the retention of factual information. This perception persisted, largely unchanged, during year three (mean=4.24) and year four (mean=4.12; χ2[2]=0.94, p=.62).

The perception of the value of the case-based exercises relative to the effort expended was very high at the end of the course (mean=4.40) and remained high but dropped significantly by the end of the third year (mean=4.11). There was a significant increase from the third year to the clinical rotation survey, although not quite back to the original level (mean=4.26; χ2[2]=12.41, p=.002, η2=.069).

Discussion

Our data show that performance on case-based writing assessments in a course in veterinary clinical pathology is superior to traditional evaluation using multiple-choice questions in predicting future academic standing. This interpretation is supported by the observation that, after holding multiple-choice scores constant, better performance on the case-based free-response exercises led to higher GPA and better class rank in the second and third years and to better class rank in the fourth year in a cohort of students followed prospectively through the veterinary medical program. The combination of collaborative learning, feedback and corrective instruction in class, and the escalating complexity of the assignments, which incorporate cumulative reinforcement of previous material, may contribute to the enhanced acquisition of the knowledge and skill necessary to succeed in subsequent stages of the curriculum. This effect remains speculative because our study did not directly evaluate a causal relationship, owing to experimental design limitations related to ethical concerns. Minimal variation, apparently related to grade compression in the clinical rotations, makes it difficult to sensitively evaluate effects on fourth-year GPA. We speculate that the relatively recent adoption of A–E grading in clinical rotations resulted in a default to an effectively pass/fail use of the ordinal grading scale, with passing students almost always receiving an A or B grade. A cultural reluctance to assign failing grades at the time of this study may also have contributed to the low variation in clinical rotation grades.

Student feedback from this cohort is consistent with our previous data showing that students highly value case-based writing exercises as a learning tool.3 Here we explore a few of the more interesting specific observations. Given the density of the veterinary medical curriculum, the efficiency of learning strategies is critical. Therefore, a key finding of this study is that students emphatically report that the educational value of the case-based writing exercises is a good return on their effort even years after the experience. It is interesting that the perceived value transiently decreases in the third year, only to increase again during clinical rotations to levels close to those reported at the end of the course. It is reasonable to consider that the resurgence of perceived value is related to the need for knowledge and skills that are more applicable to clinical practice than to the more didactic third-year learning environment. Given that the case-based writing exercises were specifically designed, and have been optimized on the basis of student feedback, to teach clinical reasoning, it was expected that students would report them to be helpful. The extraordinarily high agreement rating in the second-year survey is likely because clinical pathology is one of the first courses in the curriculum that incorporates clinical decision making. The perception of value declined slightly over time as the rest of the curriculum developed a greater emphasis on clinical reasoning, but it remained strong (mean=4.4, 4.1, and 4.3, respectively, on a scale of 1–5 for years two, three, and four). More surprisingly, students persistently and vigorously rated the case-based writing exercises highly as a tool for the acquisition and retention of factual material. General factors previously discussed, such as the promotion of active and social learning as well as repetition, presumably contribute; however, new educational research suggests additional possibilities. For example, case-based exercises are also likely to promote more efficient and accessible memory structures, which are the basis for expert clinical problem solving, demonstrating clearly how knowledge base and clinical reasoning are linked.5 While the first step in clinical reasoning is data acquisition, a second early step is the formation of a problem representation, an abstract summary of the case. Central to this process is the recollection of “illness scripts” consisting of predisposing factors, pathophysiologic mechanisms, and clinical consequences of diseases. Illness scripts may be conceptual models; however, memories of individual patients also contribute significantly to the knowledge base of more experienced clinicians, allowing both analytical reasoning and non-analytical, pattern-recognition diagnosis.6 Carefully paced case-based learning implemented appropriately in the curriculum could enable students to acquire a form of clinical experience that enhances the formation of functional illness scripts and problem representations, resulting in improved retention and application of clinically relevant information.

Our data strongly support the predictive value of free-response case-based assessments and suggest there is significant educational value, the positive impact of which persists into the clinical practicum. There are several challenges to this learning strategy that must be mitigated if it is to be more widely adopted. Although our data address the cost-benefit ratio of the exercises from the student perspective, we must also address the effort on the part of instructors. The development of case material requires a considerable investment of time. There are already some published resources such as the Manual of Veterinary Clinical Chemistry: A Case Study Approach, a book that contains 125 comprehensive clinical pathology cases with keys.7 In addition, the American Society for Veterinary Clinical Pathology sponsors an online case bank that may be accessed by members for teaching purposes.8 We advocate additional collaborative efforts across institutions to share teaching material to improve efficiency and access to case-based material.

Because both students and instructors have concerns about the approach to grading case-based free-response assessments, and to optimize our own approach, we have continued to evaluate student perceptions of grading. Students generally felt that grading of the writing assignments was fair, with 16% of students expressing dissatisfaction with the grading of assignments in the previous study and a remarkably similar 13% in this cohort. Likewise, 73% of students in the previous study and 72% of students in this cohort felt that feedback on the writing assignments helped them do a better job on the next set of cases, but both groups of students were less confident that the feedback actually helped them understand the grade they received. While these numbers are still strongly positive, they indicate more ambivalence about grading than about other aspects of the case-based assessments. Second-year students have expressed concern when the grade assigned by the instructor does not meet the student's expectations based on self-assessment of performance or when it does not reflect the perceived effort. A relative de-emphasis on formative assessment in medical curricula, with continued movement through new material and a paucity of cumulative assessments and opportunities for practice and revision, may complicate students' ability to use feedback effectively, and our ability to provide detailed individual feedback was limited by the time available to instructors. The term “unsure” was delineated as the repository for such concerns, but an operational definition was not provided to students, which is a limitation in the interpretation of these responses. Concerns about grading reported by some students do not appear to be supported by the data, which indicate higher average grades compared with the multiple-choice assessments and lower, rather than the higher, variation in scores that might be anticipated if grading of the free-response case-based exercises were inconsistent. Since many veterinary students are highly adapted to multiple-choice examinations, a certain amount of discomfort is to be expected, at least initially. The development of clear grading rubrics and point-based keys can make the grading process more transparent and efficient, alleviating stress for all participants. Having a single instructor grade a particular case improves consistency. In our experience, students rapidly realize that the learning value of the exercises outweighs technical issues with grading. To curtail unproductive discussion about scoring, our class policy is that all grades are final unless there has been a mathematical error in calculating point totals. With the increase in class size subsequent to the cohort reported in this study, we have adopted the strategy of selecting a single case from each set for grading, which allows us to continue using this teaching technique; the students do not know in advance which case will be chosen for scoring. We scan the other cases to identify any problem areas that should be addressed with in-class review of the material. Instructors interested in developing case-based writing assessments in their own courses are encouraged to consult with university writing programs or centers for teaching and learning for support.

In summary, this longitudinal study has shown that case-based free-response writing assignments are efficacious above and beyond standard measures in predicting students' GPAs and class rank and likely contribute to students' acquisition of knowledge, skills, and clinical reasoning. Students value these assignments and overwhelmingly find them an efficient use of their time, and these opinions are maintained even two years after the class.

References

1. North American Veterinary Medical Education Consortium (NAVMEC). Working to Develop a Roadmap for Veterinary Medical Education in the 21st Century [Internet]. Washington, DC: Association of American Veterinary Medical Colleges; 2012 [cited 2011 Nov 12]. Available from: http://www.aavmc.org/Veterinary-Educators/NAVMEC.aspx
2. Rhind SM, Baillie S, Brown F, et al. Assessing competence in veterinary medical education: where's the evidence? J Vet Med Educ. 2008;35(3):407-11. http://dx.doi.org/10.3138/jvme.35.3.407. Medline:19066358
3. Sharkey L, Overmann J, Flash P. Evolution of a course in veterinary clinical pathology: the application of case-based writing assignments to focus on skill development and facilitation of learning. J Vet Med Educ. 2007;34(4):423-30. http://dx.doi.org/10.3138/jvme.34.4.423. Medline:18287468
4. Srinivasan M, Wilkes M, Stevenson F, et al. Comparing problem-based learning with case-based learning: effects of a major curricular shift at two institutions. Acad Med. 2007;82(1):74-82. http://dx.doi.org/10.1097/01.ACM.0000249963.93776.aa. Medline:17198294
5. Smith CS. A developmental approach to evaluating competence in clinical reasoning. J Vet Med Educ. 2008;35(3):375-81. http://dx.doi.org/10.3138/jvme.35.3.375. Medline:19066354
6. Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355(21):2217-25. http://dx.doi.org/10.1056/NEJMra054782. Medline:17124019
7. Sharkey LC, Radin MJ. Manual of Veterinary Clinical Chemistry: A Case Study Approach. Jackson, WY: Teton NewMedia; 2010.
8. American Society for Veterinary Clinical Pathology [Internet]. Madison, WI: American Society for Veterinary Clinical Pathology; 2011 [cited 2012 Aug 27]. Available from: http://www.asvcp.org/