ABSTRACT
Case-based e-learning may allow effective teaching of veterinary radiology in the field of equine orthopedics. The objective of this study was to investigate the effectiveness of a new case-based e-learning tool, compared with a standard structured tutorial, in altering students’ knowledge and skills about interpretation of radiographs of the digit in the horse. It was also designed to assess students’ attitudes toward the two educational interventions. A randomized, single-blinded, controlled trial of 96 fourth-year undergraduate veterinary students, involving an educational intervention of either structured tutorial or case-based e-learning, was performed. A multiple-choice examination based on six learning outcomes was carried out in each group after the session, followed by an evaluation of students’ attitudes toward their session on a seven-point scale. Free-text fields allowed students to comment on the educational interventions and on their learning outcomes. Students also rated, on a Likert scale from 1 to 7, their performance for each specific learning outcome and their general ability to use a systematic approach in interpreting radiographs. Data were analyzed using the Mann-Whitney test, the t-test, and the equivalence test. There was no significant difference in student achievement on course tests. The results of the survey suggest positive student attitudes toward the e-learning tool and illustrate the difference between objective ratings and subjective assessments by students in testing a new educational intervention.
Clinical studies at the Faculty of Veterinary Science of the University of Liverpool are concentrated in the last part of the third year and in the fourth and fifth years. Formal teaching is provided in the clinical theory course, which includes lectures. The next part of the course is spent on clinical rotations, which offer an opportunity to gain experience in practical aspects of medicine and surgery under close supervision. Teaching in clinics is centered on work with real cases. Poumay reviewed the different reasons to promote learning and teaching with cases.1 Case-Based Learning Methods (CBLMs) favor reasoning, nourish the learner's conceptual network, promote schemata construction, are sources of vicarious experience, allow personal involvement via emotion, reassure learners, make them feel confident, and help them transfer to practice and develop higher-order skills.1 In equine orthopedics, approximately 1,000 cases present each year at the hospital. As students spend only two weeks on this rotation, it is not possible to guarantee availability of clinical material at any time, nor is it possible to guarantee that all students will see the same range of diseases. Clinical work can develop understanding and skills to a certain extent, but the limited time spent in clinics has immediate implications for students’ self-efficacy (their confidence in their ability to perform a task2) and for the development of clinical approach, which may be improved by repetition of clinical observation online on real cases, with self-assessment and feedback. However, case construction may be time consuming.1 Student satisfaction with the online problem-solving experience can be high, and the experience worthwhile, but the effort required to keep the experience realistic requires a large investment of time by instructors, which may ultimately lead to the cancellation of the online course.3
An e-learning tool was developed recently by the first author at the Faculty of Veterinary Science, University of Liverpool, in the field of equine orthopedics. The system combines a data collecting system and a Web interface. It generates automatic applications in teaching from the data recorded in clinics, thus optimizing the work of the teaching clinician. The whole case load is available to all students through case-based e-learning activities and is easily manageable for lectures and tutorials. The practicability of the data-collection system has been tested, but its effectiveness and acceptability remained to be investigated through a controlled randomized study.4
Veterinary students at the University of Liverpool undertake a one-week lecture course on orthopedics during the fourth year of their undergraduate course. In past years a practical session covered the subject of foot radiography, during which students worked on their own with a set of x-rays, followed by a group discussion. The teaching described here took place from day 3 to day 5 of the teaching week, prior to any other formal exposure to orthopedic radiography in the horse.
The current study had two principal aims: to investigate the effectiveness of e-learning, compared with a standard tutorial, in altering students’ knowledge and skills about interpretation of radiographs of the feet in the horse, and to assess students’ attitudes about the helpfulness and ease of use of the e-learning tool.
We formulated and tested the hypothesis that there would be no significant difference between measures of knowledge and skills in radiographic interpretation of the feet in the two teaching groups.
Fourth-year veterinary students (N = 96) undertaking their orthopedic clinical theory course in the academic year 2005/2006 were included in the study. The 96 students were randomly allocated to either lecture or e-learning groups using random number generation in Epi Info 6.a
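The allocation step can be sketched in a few lines. The study used random numbers generated in Epi Info 6, so the Python below, with a made-up roster and a hypothetical `allocate` helper, is only an illustrative equivalent of a simple two-group randomization.

```python
import random

def allocate(students, seed=None):
    """Randomly split a roster into two groups of (near-)equal size.

    Hypothetical re-creation of the allocation step; the study itself
    used random numbers generated in Epi Info 6.
    """
    rng = random.Random(seed)
    shuffled = students[:]          # copy so the original roster is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (e-learning, lecture)

# Illustrative roster of 96 students, as in the study.
roster = [f"student_{i}" for i in range(96)]
e_learning, lecture = allocate(roster, seed=1)
```

Seeding the generator makes the allocation reproducible for audit, while still being random with respect to student characteristics.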
The objectives of the teaching sessions were identical. They were that, at the end of the session, students would

- be familiar with the basics of radiology;
- be familiar with the terminology relating to the different projections;
- know the radiographic anatomy of the digit and localize the sites of interest;
- know a list of words used to describe radiographs;
- be able to recognize normality and to detect and describe abnormality;
- be able to diagnose common pathologies of the digit.
On the first day, 10 minutes were allocated to explain to students how to use the educational software (VetsDataWeb). They could access online a Microsoft PowerPointb presentation covering the theory. The content illustrated the basics of physics, the description of and terminology for the different projections, the radiographic anatomy of the digit, the principles of radiographic interpretation, and examples of pathologies. Subsequently, they could perform learning activities (detect abnormalities on radiographs, describe abnormalities on radiographs, give a radiographic diagnosis) using the new software.4 The system allows work on real cases.
This e-learning tool was developed by collaboration between a veterinary clinician and a computer scientist. It is a relational database–driven expert system currently housed on an Oracle server, with a user interface delivered via Web pages. The system is designed to cater to a wide range of clinical disciplines. The database consists of a set of connected tables for the management of clinical observations. It provides a comprehensive set of options for observations in specified clinical disciplines. Free text is not required to describe clinical observations.
Clinical data input follows a systematic series of questions with drop-down menus. These center on the four principles of clinical medicine: detection of an abnormality; description of that abnormality; diagnostic approach; treatment and prognosis. When clinicians input patient data, they can activate a quiz function (Figure 1). This allows the details of the case (i.e., photographs, video and textual descriptions, and a systematic series of questions) to be displayed and made available to learners. The system requires the learner to select an answer from a drop-down menu and submit it (Figure 2). The answer is compared to the hidden data recorded by the clinician (the “expert's answer”) and is followed by automated feedback (Figure 3). In this way the tool personalizes learning by generating learning exercises, self-assessments, and feedback. A particular feature of this e-learning tool is that learners must submit each answer together with a percentage of confidence, a metacognitive index (Figure 2).5–7 Their marks and confidence for each learning activity are stored and can be graphed to analyze their performance, regulate their learning, and guide their choice of learning activities. This can be done by the learner or by the instructor. Students were allowed to work with the tool for three days, until the day of assessment.
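The submit-compare-feedback cycle described above can be illustrated with a short sketch. The function name `grade_answer`, the answer strings, and the returned data shape are hypothetical; the text does not describe VetsDataWeb's internal implementation.

```python
# Hypothetical sketch of the quiz cycle: the learner's menu choice is
# compared with the clinician's hidden "expert answer", and the answer
# must be submitted together with a confidence percentage.
def grade_answer(expert_answer, learner_answer, confidence_pct):
    if not 0 <= confidence_pct <= 100:
        raise ValueError("confidence must be a percentage (0-100)")
    correct = learner_answer == expert_answer
    feedback = ("Correct." if correct
                else f"Incorrect; the expert recorded: {expert_answer}")
    # Mark and confidence are stored so performance can be graphed later.
    return {"correct": correct, "confidence": confidence_pct,
            "feedback": feedback}

result = grade_answer("fracture of the navicular bone",
                      "fracture of the navicular bone", 80)
```

Storing both the mark and the confidence is what allows the tool to chart the learner's realism (high confidence on wrong answers, low confidence on right ones) over time.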
A Microsoft PowerPoint presentation, identical to the one available online to the e-learning group, was shown in a lecture on the first day. This took place at the same time the e-learning group was consulting the document online. The lecture group was subsequently given a set of x-rays identical to the x-rays displayed online for the e-learning group. The x-rays were available for viewing in the lab for a further three days, until the day of assessment. A group discussion with the lecturer took place on the afternoon of the second day (90 minutes). These students did not have access to the e-learning tool.
The study received ethical approval through the procedures set up by the Faculty of Veterinary Science. It was decided that if one method appeared significantly more successful, it would be made available to the group that had not been exposed to it.
A multiple-choice questionnaire was designed to assess students’ knowledge and their ability to interpret radiographs by the end of the week. This assessment instrument was based on real cases and real radiographs taken from the hospital case load. Students’ knowledge and skills were assessed using 30 four-stem multiple-choice questions (MCQs). Five questions addressed each of the six learning outcomes. The questionnaire was marked by a staff member who was not told which group each student had been assigned to.
In addition, seven-point Likert-type rating scales were used to assess the students’ opinions of how easy the lecture-based teaching and the case-based e-learning method were (1 = very difficult, 7 = very easy). A questionnaire allowing for free-text answers was submitted, which addressed the difficulties, advantages, and disadvantages encountered with each technique.
Students also rated, on a scale from 1 to 7 (1 = poor, 7 = excellent), their performance for each specific learning outcome and their general ability to use a systematic approach (to reflect in terms of shape, radio-opacity, and architecture) and to interpret radiographs.
The randomization process was checked by comparing the age and gender of the two groups.
Means and medians for the total marks for the examination and for each individual learning outcome, as well as for the students’ evaluation of performance in the learning outcomes, were generated. A two-sample t-test was used to compare the means of the e-learning and lecture groups for the total marks achieved in the exam and for the total learning outcome evaluation. The assumption of normality of the data was not met for the individual learning outcomes, so the medians were compared using the non-parametric Mann-Whitney test. Finally, a standard equivalence-testing approach was used to determine the extent of equivalence between the total marks and the total evaluation scores in the two groups of students.8 This approach was used to assess the evidence in the data that the results from the two teaching methods are equivalent and to assess the magnitude of any difference.
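For readers unfamiliar with these tests, the analyses named above can be reproduced in outline with SciPy. The scores below are illustrative, made-up values, not the study data.

```python
# Minimal sketch of the two comparisons used in the study, on
# illustrative (made-up) scores rather than the actual study data.
from scipy import stats

group_a = [22, 24, 21, 23, 25, 22, 20, 24]   # e.g. e-learning total marks
group_b = [23, 21, 22, 24, 22, 23, 21, 25]   # e.g. lecture total marks

# Two-sample t-test on totals (normality was assumed for totals only).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric Mann-Whitney test for the per-outcome marks,
# where normality did not hold.
u_stat, u_p = stats.mannwhitneyu(group_a, group_b,
                                 alternative="two-sided")
```

The choice mirrors the study's logic: parametric comparison where the normality assumption is met, rank-based comparison where it is not.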
The total number of fourth-year students present for the clinical theory course was 96, of whom four failed to attend the test for various reasons (family commitments or illness). Of the 92 students remaining in the trial, 48 were allocated to the e-learning group and 44 to the lecture group. There was no significant difference in the two groups with respect to age (Mann-Whitney p = 0.4) or gender (χ2 p = 0.5).
The results of the analysis are shown in Table 1. There was no significant difference between the two groups with respect to total marks on the MCQ or total self-evaluation scores. Only marks for outcome 1 on the MCQ were significantly different, with the median rank of marks significantly higher for the e-learning group than for the lecture group (see Table 1).
Table 1: Student marks on the multiple-choice questionnaire, by learning outcome

| Learning Outcome | E-Learning (N = 48): Mean | Median | Mean Rank | Lecture (N = 44): Mean | Median | Mean Rank | Mann-Whitney p-value |
|---|---|---|---|---|---|---|---|
| 1. Be familiar with the basics of radiology | 4.2 | 4 | 51.6 | 3.8 | 4 | 41.0 | 0.04* |
| 2. Be familiar with the terminology relating to the different projections | 4.4 | 5 | 49.5 | 4.1 | 4 | 43.2 | 0.2 |
| 3. Know the radiographic anatomy of the feet and localize the sites of interest | 3.9 | 4 | 46.1 | 4.0 | 4 | 47.0 | 0.9 |
| 4. Know a list of words used to describe radiographs | 3.2 | 3 | 48.5 | 3.0 | 3 | 44.3 | 0.4 |
| 5. Be able to recognize normality, to detect and describe abnormality | 2.5 | 3 | 47.3 | 2.5 | 3 | 45.6 | 0.8 |
| 6. Be able to diagnose common pathologies of the feet | 4.3 | 4.5 | 42.0 | 4.6 | 5 | 51.4 | 0.06 |

| | E-Learning: Mean | Median | SD | Lecture: Mean | Median | SD | 2-sample t-test p-value |
|---|---|---|---|---|---|---|---|
| Total marks | 22.4 | 22.5 | 2.5 | 22.0 | 23 | 3.0 | 0.5 |

*Significantly different at p < 0.05.
There was no significant difference between the two groups with respect to total self-evaluation scores. Only learning outcome 2 showed a difference in student self-evaluation score, with median marks significantly higher for the e-learning group than for the lecture group (see Table 2). However, students’ overall assessments of their own ability to apply a systematic approach to the evaluation of radiographs and to be able to reflect in terms of shape, radio-opacity, and architecture were significantly higher for the lecture group (Table 3).
Table 2: Students’ self-evaluation scores, by learning outcome

| Learning Outcome | E-Learning (N = 48): Mean | Median | Mean Rank | Lecture (N = 44): Mean | Median | Mean Rank | Mann-Whitney p-value |
|---|---|---|---|---|---|---|---|
| 1 | 5.1 | 5 | 49.5 | 4.8 | 5 | 43.2 | 0.2 |
| 2 | 5.5 | 6 | 52.6 | 5.0 | 5 | 39.9 | 0.01* |
| 3 | 4.8 | 5 | 46.4 | 4.9 | 5 | 46.8 | 0.9 |
| 4 | 4.7 | 5 | 43.5 | 5.0 | 5 | 49.8 | 0.3 |
| 5 | 3.7 | 4 | 45.2 | 3.9 | 4 | 48.0 | 0.7 |
| 6 | 4.0 | 4 | 44.4 | 4.1 | 4 | 48.8 | 0.5 |

| | E-Learning: Mean | Median | SD | Lecture: Mean | Median | SD | 2-sample t-test p-value |
|---|---|---|---|---|---|---|---|
| Total | 27.9 | 29.0 | 3.4 | 27.7 | 27.0 | 2.8 | 0.4 |

*Significantly different at p < 0.05.
Table 3: Students’ ratings of (A) their overall ability to apply a systematic approach to radiographs and (B) the ease of their teaching method

| Rating | E-Learning: Mean | Median | Mean Rank | Lecture: Mean | Median | Mean Rank | Mann-Whitney p-value |
|---|---|---|---|---|---|---|---|
| A | 4.6 | 5 | 38.7 | 5.2 | 5 | 55.0 | 0.002* |
| B | 5.4 | 5 | 62.1 | 3.1 | 3 | 29.5 | <0.001* |

*Significantly different at p < 0.05.
Students’ assessment of the ease of e-learning was higher (mean 5.4) than students’ assessment of the ease of lecture learning (mean 3.1) (see Table 3). In the survey, half the students in the e-learning group thought the topic of radiography should also be taught by means of lectures, while half the students in the lecture group reported that e-learning and access to online radiographs would probably have been beneficial.
Equivalence testing showed that the lower and upper limits for the difference between the mean total marks (at p = 0.05) were −0.51 and +1.43. Therefore, e-learning could reduce the average mark by at most 0.51, but could increase it by as much as 1.43. The limits for the total self-evaluation score were −0.86 and +1.31; e-learning could therefore reduce students’ self-evaluation by at most 0.86 but could increase it by as much as 1.31.
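The equivalence reasoning can be illustrated with a normal-approximation confidence interval for the difference between group means, computed from the summary statistics reported in Table 1. This is a simplified sketch, not the exact procedure of Christley and Reid,8 so its limits differ slightly from the reported −0.51 and +1.43.

```python
# Sketch of the equivalence logic: build a confidence interval for the
# difference between group means; if it lies within pre-set equivalence
# margins, the two methods are treated as equivalent. The numbers are
# the summary statistics from Table 1; the normal-approximation formula
# is an illustrative simplification of the study's actual procedure.
import math

def diff_ci(mean1, sd1, n1, mean2, sd2, n2, z=1.96):
    """95% CI for (mean1 - mean2) under a normal approximation."""
    diff = mean1 - mean2
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    return diff - z * se, diff + z * se

# Total MCQ marks: e-learning 22.4 (SD 2.5, n=48); lecture 22.0 (SD 3.0, n=44)
lower, upper = diff_ci(22.4, 2.5, 48, 22.0, 3.0, 44)
```

An interval that straddles zero but stays within margins judged educationally unimportant is exactly the "no significant difference, and equivalent" conclusion the study draws.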
The pursuit of “quality” has become all-pervasive in modern society and has been adopted by higher education.9,10 Education is now countering the dominance of research in the traditional academic triumvirate of teaching, research, and service, thus increasing the pressure on the academic clinician. The recent recommendations from the Quality Assurance Agency and from the Royal College of Veterinary Surgeons through the Education Strategy Steering Group, stress the need to prioritize and actively support the teaching mission and new technologies, including e-learning.11 The development of epidemiological approaches to studying educational effectiveness should inform institutions about what can be changed in the formal curriculum without compromising graduating students’ clinical competence.12 Faculty can improve their teaching with more objective data, which should enhance learner satisfaction as well.
Many studies comparing e-learning with traditional forms of education have been published, documenting significant or no significant difference in student outcomes based on the mode of education delivery (face to face or at a distance).13–16 In this study, the effectiveness of a new case-based e-learning tool (VetsDataWeb) was assessed using an experimental design. It applied epidemiological techniques, more commonly associated with measuring the frequency and determinants of disease and evaluating rational interventions for treatment, to e-learning. No assessment of students’ knowledge or skills was carried out at baseline; however, the randomization process makes it unlikely that significant differences were present between the groups before the study. In hindsight, a pre-test could have been used to establish participants’ level of knowledge, but the students would likely not have understood why they were taking a test on subject matter to which they had never been exposed.
The results show that students who used e-learning performed as well as the lecture group, with no significant differences detected in the total marks for the multiple-choice questionnaire. Furthermore, the equivalence tests show that the maximum number of points by which e-learning is likely to reduce marks is as little as 0.51, but that e-learning could increase marks by 1.43.
In e-learning, the lack of contact with the instructor can be compensated for by the advantage of working at one's own pace, with the possibility of repeating the learning exercises at will and the provision of just-in-time and specific feedback during exercises. Meaningful learning requires an active learning environment in which the learner can acquire the needed information, continually test the mental models being built, and correct or refine those models.17 The availability of Web-based case materials allows increased opportunity to practice problem-solving skills.18 It is interesting to note that even with a short period of exposure, the tool was efficient. However, it would be useful to reassess all students after the use of the e-learning tool for revision over a longer period. The role of learner support would also require investigation.19
Other studies have shown that the general perception of students is that attendance at a lecture leads to a greater increase in knowledge and skills than using a computer tool.15 Wood et al. propose that the continued demand for lectures suggests an insecurity in some students caught between two different paradigms of teaching and learning (experiential, problem-based learning versus lectures).16 In the questionnaire submitted at the end of the present study, similar comments referred to a combination of lectures and e-learning for revision as the ideal approach. This raises again the point, highlighted by Williams et al., of the difference between subjective and objective ratings:
> This is particularly relevant at a time when student feedback is increasingly used in judging quality of teaching. It suggests that sometimes students do not know what is good for them. This may erroneously lead to the cessation of innovative teaching approaches that might otherwise have led to gains in knowledge and skills.15
In the present study, an objective rating process was carried out to assess students’ performance on each learning outcome; it showed no overall significant difference between groups. Students’ self-evaluation for each of the six learning outcomes was not significantly different, except for learning outcome 2. However, the students made contrasting comments in their free-text answers (the e-learning group wishing to maintain a lecture component, the lecture group wishing to have access to online exercises). These results illustrate again the possible pitfall of relying on student satisfaction surveys when implementing new educational interventions or technologies. A combination of students’ and instructors’ opinions can inform curriculum development, taking into account more objective ratings from experimental treatments.
Interestingly, the e-learning group rated their “overall ability to have a systematic approach of radiographs (reflect in terms of shape, radio-opacity and architecture) and to interpret radiographs” lower than the lecture group, in the context of general questions asked at the end of the study. By contrast, no significant difference appeared when they assessed their “ability to diagnose common pathologies of the feet on radiographs” (learning outcome 6), in the context of their reflection on the learning objectives. This apparent contradiction between two questions of very similar medical meaning may have a number of explanations. The e-learning tool has characteristics that may improve self-efficacy: feedback is automatic and immediate; students submit each answer with a percentage of confidence; and they can assess their progress through a spectrum of performance correlating their answers and percentages of confidence, which may help them develop metacognition and self-efficacy.20 But the difference may also be a semantic effect. The two groups were exposed to methods that use different words: the importance of a systematic approach may have been stressed more by the lecturer, while every answer submitted to the e-learning tool was accompanied by a percentage of confidence, a notion absent from the lecture. This also illustrates the difficulty of interpreting subjective ratings or comments in surveys.
Student participation in the study was high, and the fact that many respondents subsequently inquired about the results suggests that they took their participation seriously. Their comments were constructive. As in previous years, the lecture group reported problems with the availability of space and radiograph viewers, which were probably responsible for the difficulties they encountered, whereas the e-learning group found their method easy to use. These are further arguments for the future use of e-learning in the discipline of radiography.
The study reported here shows that the e-tool (VetsDataWeb) is a useful resource that could act as a template for other subjects within the veterinary undergraduate curriculum. The development of such a computer program offers great promise in using real clinical cases to teach students with minimal input from clinicians. The software's maintenance is database driven, which means that the tool can be changed centrally, at the level of the database, and does not require updating software on a number of remote computers. This allows the tool the flexibility to adapt to new clinical specialities and extends its use into other fields (e.g., human medicine). The present study also shows how a new educational intervention can be tested objectively. Its results and discussion will be passed on to students, as this may give them a more objective overview of the teaching strategy and therefore, through a better understanding of the mechanisms involved, constitute a step toward improved metacognition. Furthermore, learners can create a personalized e-portfolio. They can keep a record of the clinical cases they have seen. Their performance and progress during the e-learning exercises are recorded automatically, and they can add specific bits of information in “notes” to their personal help files. This is useful for assessment and for informing practitioners on the use of the tool for reflective learning. Another potential outcome would be collaboration across veterinary colleges to share these valuable resources, providing a better return on investment and more efficient use of faculty time. In addition to being content creators in their area of expertise, faculty could select from available resources to fashion the best learning experience for their students.

a Epi Info version 6.04d (2001), Centers for Disease Control and Prevention (CDC), Atlanta, GA USA.
b Microsoft Corp., Redmond, WA, USA <www.microsoft.com>.
REFERENCES

1. Poumay M. Why and How Using Case Based Learning Methods (CBLMs): A Report for Hewlett Packard in San Francisco. Belgium: University of Liège; 2001.
2. Bandura A. L’apprentissage correctif. In: L’apprentissage social. Brussels: Mardaga; 1980:77-83.
3. Dhein CR. Online small-animal case simulations, a.k.a. the Virtual Veterinary Clinic. J Vet Med Educ. 2005;32:93-102.
4. Vandeweerd JM, Davies J, Rege Colet N. Un système d’enregistrement des données et ses applications en Case Based e-Learning. In: Actes du 22e congrès de l’Association Internationale de Pédagogie Universitaire. L’enseignement supérieur du XXIe siècle: de nouveaux défis à relever [CD-ROM]. Geneva: AIPU; 2005:83-84.
5. Leclercq D. Confidence Marking: Its Use in Testing. Oxford: Pergamon Press; 1983.
6. Leclercq D, Bruno J. Item Banking, Interactive Testing and Self Assessment [NATO ASI Series F112]. Berlin: Springer Verlag; 1993.
7. Leclercq D, Poumay M. 3 metacognitive indices for realism in self-assessment. LabSET, University of Liège. <http://www.labset.net/media/prod/three_meta.pdf>. Accessed 01/15/05.
8. Christley RM, Reid SWJ. No significant difference: use of statistical methods for testing equivalence in clinical veterinary literature. J Am Vet Med Assoc. 2003;222:433-437.
9. Brennan J. Introduction. In: Brennan J, de Vries P, Williams R, eds. Standards and Quality in Higher Education. London: Jessica Kingsley Publishers; 1997:2.
10. Romainville M, Boxus E. La qualité en pédagogie universitaire. In: Leclercq D, ed. Pour une pédagogie universitaire de qualité. Liège: Mardaga; 1998:13-32.
11. Royal College of Veterinary Surgeons [RCVS]. Consultation paper on veterinary education and training. <www.rcvs.org.uk/vet_surgeons/consultation/essg/esg_consultation.html>. Accessed 06/15/05.
12. Carney PA, Nierenberg DW, Pipas CF, Brooks WB, Stukel TA, Keller AM. Applying population-based design and analytic approaches to study medical education. J Am Med Assoc. 2004;292:1044-1050.
13. Browne L, Mehra S, Rattan R, Thomas G. Comparing lecture and e-learning as pedagogies for new and experienced professionals in dentistry. Brit Dent J. 2004;197(2):95-97.
14. Russell TL. The No Significant Difference Phenomenon: A Comparative Research Annotated Bibliography on Technology for Distance Education. 5th ed. Montgomery, AL: IDECC; 2001.
15. Williams C, Aubin S, Harkin P, Cottrell D. A randomized, controlled, single-blind trial of teaching provided by a computer-based multimedia package versus lecture. Med Educ. 2001;35:847-854.
16. Wood AK, Lublin JR, Hoffmann KL, Dadd MJ. Alternatives for improving veterinary medical students’ learning of clinical sonography. Vet Radiol Ultrasound. 2000;41:433-436.
17. Michael JA. Mental models and meaningful learning. J Vet Med Educ. 2004;31:1-5.
18. Forrester SD, Inzana KD, Leib MS, Purswell BJ. Using the World Wide Web to enhance the problem-solving skills of third-year veterinary students. J Vet Med Educ. 2001;28:31-33.
19. Thorpe M. Rethinking learner support: the challenge of collaborative online learning. SCROLLA Networked Learning Symposium, University of Glasgow. <http://www.scrolla.ac.uk/papers/s1/thorpe_paper.html>. Accessed 01/09/05.
20. Leclercq D, Poumay M, Chiadli A. Une définition opérationnelle de la métacognition et ses mises en œuvre. In: Actes du 21e congrès de l’Association Internationale de Pédagogie Universitaire. L’AIPU: 20 ans de recherche et d’actions pédagogiques; bilans et perspectives [CD-ROM]. Marrakesh: AIPU; 2004.


