
Video- versus handout-based instructions may influence student outcomes during simulation training and competency-based assessments. Forty-five third-year veterinary students voluntarily participated in a simulation module on canine endotracheal intubation. A prospective, randomized, double-blinded study investigated the impact of video (n = 23) versus handout (n = 22) instructions on student confidence, anxiety, and task performance. Students self-scored their confidence and anxiety before and after the simulation. During the simulation laboratory, three raters independently evaluated student performance using a 20-item formal assessment tool with a 5-point global rating scale. No significant between- or within-group differences (p > .05) were found for either confidence or anxiety scores. Video-based instructions were associated with significantly higher (p < .05) total formal assessment scores compared with handout-based instructions. The video group had significantly higher scores than the handout group on 3 of the 20 individual skills (items) assessed: placement of tie to the adaptor–endotracheal tube complex (p < .05), using the anesthetic machine (p < .01), and pop-off valve management (p < .001). Inter-rater reliability, as assessed by Cronbach’s α (.92) and Kendall’s W (.89), was excellent and almost perfect, respectively. A two-facet crossed-design generalizability analysis yielded G coefficients for both the handout (Ep² = .68) and the video (Ep² = .72) groups. Video instructions may be associated with higher performance scores than handout instructions during endotracheal intubation simulation training. Further research into skill retention and learning styles is warranted.

Delivery of information to learners can occur in a variety of forms, including handouts, live demonstrations, computer-based models, videos, and simulation activities. The generation of students currently enrolled in veterinary training programs is presumably more reliant on web-based sources that provide information in multiple formats, particularly video. Langebæk and colleagues have reported that 58%–60% of students use a dynamic visual method of recollection during surgical procedures.1 Video was perceived by students as being significantly more influential than other forms of educational input, such as a textbook, lecture, or model.2 Similarly, students receiving audiovisual technology incorporated into a dentistry program had significantly higher scores than those receiving traditional instruction.3 Video-based instructions have also been associated with better patient outcomes than a handout in non-veterinary client education programs, including physiotherapy,4 smoking cessation,5 and obesity management.6

However, a study among second-year medical students found problem-based learning to occur at a superficial level with video-based instruction compared with text-based case material, suggesting that video may not be appropriate for teaching all types of subject material.7 Selecting an instructional format for subject material that lacks sufficient educational research is challenging for educators. Research investigating endotracheal intubation during veterinary anesthesia training has focused on the simulation environment. Using simulation to teach endotracheal intubation has been shown to result in improved confidence among undergraduates.8,9 Training with high- and low-fidelity simulation models of endotracheal intubation was associated with significantly higher student assessment scores than training with a handout.10 To our knowledge, canine endotracheal intubation has not been studied with respect to the effect of video- or handout-based instructions on emotional states and student performance.

The effect of instructional format on student learning can be evaluated with respect to task performance and emotional state. For student performance, veterinary educational institutions have reported using multiple methods to assess competency.11 Simulation is becoming increasingly popular because it allows institutions to provide students with both practice and assessment opportunities for essential skills but balances them with animal welfare concerns.12–14 Competency-based assessments should ideally utilize an instrument that incorporates a global rating scale not only to discriminate nuances in student performance but also to optimize reliability and validity.15,16

In terms of emotional states, anxiety can significantly affect student learning. State anxiety refers to one’s feelings of anxiety in the current moment.17 The Yerkes–Dodson Law states that arousal itself can facilitate performance by focusing processing resources on the task at hand.18 Conversely, too much arousal can invert this relationship by overtaxing cognitive resources, thereby impairing performance.18 A relative paucity of veterinary literature exists with respect to state anxiety during endotracheal intubation training. Spielberger and colleagues’ State–Trait Anxiety Inventory (STAI) is the most rigorously tested and extensively used measure of anxiety.19,20

Educational research evaluating the instructional format before a simulation-based assessment of canine endotracheal intubation is warranted to guide the development of teaching and assessment materials. The objective of this prospective, randomized, double-blinded study was to determine whether instructions provided in video or handout format would differentially affect confidence, anxiety, and performance scores.

The institutional review board at the University of Saskatchewan granted exemption from formal approval for this study (BEH 17-270).

Participants

During their first semester at the University of Saskatchewan, third-year veterinary students learn about endotracheal intubation in an anesthesia course, with live animal (sheep and cat) experiences provided over 1–3 months. A voluntary simulation-based study was offered during the second semester of the third year. Students were unaware of the content and study design. Students were recruited through announcements made between classes, by email, and on Google Drive.

Design

Voluntary participants were scheduled every 20 minutes over 4 days. Participants were randomly assigned to an instructional group (video or handout) and proceeded through the study individually as follows:

  1. completing a consent form and describing their previous endotracheal intubation experience in a pre-instruction questionnaire,

  2. receiving instructions in either video or handout format,

  3. completing a pre-simulation questionnaire,

  4. completing a simulation-based endotracheal intubation assessment (test), and

  5. completing a post-simulation questionnaire.

Students were asked not to discuss any aspect of the study until completion so that future participants would not be biased.

Pre-Simulation (Pre-Test) Assessment Stages

After completing the consent forms, participants were escorted to a private area to learn identical information presented through one of two formats: a handout (see Appendix 1, available online at https://jvme.utpjournals.press/doi/suppl/10.3138/jvme.0618-077r1) or a video (also available at https://jvme.utpjournals.press/doi/suppl/10.3138/jvme.0618-077r1). The students were given a maximum of 10 minutes to review the instructions before the canine endotracheal intubation simulation. Once participants had reviewed the instructions, they completed the post-instruction pre-simulation questionnaire:

  • Usefulness ratings of instructions: Participants rated whether they felt that the instructions received (either video or handout) were useful (“The instructions provided were useful”) on a 7-point numerical rating scale ranging from 1 (strongly disagree) to 7 (strongly agree).

  • Confidence scale: Participants self-rated their canine endotracheal intubation skills (“How confident are you in your canine endotracheal intubation skills?”) on an 11-point numerical rating scale ranging from 0 (not confident at all) to 10 (as confident as possible).

  • Anxiety scale: Participants rated their state anxiety on the STAI.20 Immediately before the simulation assessment, participants rated how they felt in the present moment on a variety of states (e.g., I feel calm, I feel tense) on a scale ranging from 1 (not at all) to 4 (very much so). Higher scores on the STAI indicated greater levels of state anxiety.
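
The STAI state scale is scored by summing the 20 item ratings after reverse-scoring the positively worded items (e.g., “I feel calm”), so that totals range from 20 to 80, with higher totals indicating greater state anxiety. A minimal scoring sketch follows; the reverse-keyed item indices are placeholders, and the published key in the STAI manual20 should be used in practice.

```python
# Illustrative scoring of the 20-item STAI state scale. The set of
# reverse-keyed (positively worded) items below is a placeholder, not
# the official STAI scoring key.
REVERSED = {0, 2, 4, 7, 9, 10, 14, 15, 18, 19}  # hypothetical indices

def stai_state_score(responses):
    """Sum 20 ratings (each 1-4), reverse-scoring positively worded items."""
    assert len(responses) == 20 and all(1 <= r <= 4 for r in responses)
    return sum(5 - r if i in REVERSED else r
               for i, r in enumerate(responses))

# Totals range from 20 (minimal anxiety) to 80 (maximal anxiety).
print(stai_state_score([2] * 20))  # -> 50 for uniformly neutral responses
```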

Participants were then immediately escorted into the simulation lab for the skill assessment.

Simulation-Based Endotracheal Intubation Assessment (Test)

The simulation lab was set up with a synthetic canine model (SynDaver Labs, Tampa, FL), a premeasured 7.5 mm internal diameter endotracheal tube, a laryngoscope, a 50 cm piece of kling, a standard anesthetic machine connected to oxygen, and an assistant to facilitate opening the mouth and positioning the head of the model. During the simulation exercise, three raters (two board-accredited veterinary anesthesiologists [BA and SB] and one board-accredited registered veterinary technician [CC]) independently evaluated each student using a 20-item global rating scale–based formal assessment tool incorporating a 5-point scale with descriptors anchored at 1, 3, and 5 (Appendix 2). Before the study, we met and agreed on 20 individual skills (items) deemed appropriate at the third-year level for competency-based assessment of canine endotracheal intubation. Total formal assessment scores were calculated by summing the 20 item scores, giving a possible maximum score of 100.
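
To illustrate how the resulting assessment data are structured, the sketch below builds a hypothetical raters × students × items array (the real data came from the three raters’ score sheets), sums the 20 items into each student’s total score, and averages the raters per item as later pooled for Table 2.

```python
import numpy as np

# Hypothetical ratings: 3 raters x 45 students x 20 items, each item
# scored on the 5-point global rating scale (1-5).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(3, 45, 20))

# Total formal assessment score per rater and student: the sum of the
# 20 item scores, for a possible maximum of 20 x 5 = 100.
totals = ratings.sum(axis=2)        # shape (3, 45)

# Rater-averaged score for each student on each item, as used for the
# per-item comparisons.
item_means = ratings.mean(axis=0)   # shape (45, 20)

print(totals.max(), item_means.shape)
```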

Post-Simulation (Post-Test) Assessment Stages

Immediately after the simulation assessment, participants re-took the STAI and completed confidence assessments and usefulness ratings of their assigned instructional type.

Statistical Analysis

Shapiro–Wilk normality testing identified a non-Gaussian distribution for most of the data, necessitating the use of nonparametric statistical methods.

Our null hypothesis was that there would be no difference between groups (handout vs. video) or within groups (pre-test vs. post-test) in terms of usefulness ratings of the instructions, anxiety scores, confidence scores, and formal assessment scores. Our alternative hypothesis was that there would be significant differences both between (handout vs. video) and within groups (pre-test vs. post-test).

A Mann–Whitney U test was used for between-group (handout vs. video) comparisons and a Wilcoxon signed ranks test for within-group (pre-test vs. post-test) comparisons. The 20 individual items in the formal assessment tool were analyzed with a Kruskal–Wallis test followed by post hoc evaluation with Dunn’s multiple comparisons test. Univariate analysis of the distribution of total formal assessment scores across potential confounding risk factors and their categories was performed with a Kruskal–Wallis test; factors with p > .2 were excluded from multivariate analysis. Inter-rater reliability was evaluated with both classical test theory (Cronbach’s α and Kendall’s W) and generalizability theory analysis (two-facet crossed design). IBM SPSS Statistics Version 24 (IBM Corp., Armonk, NY) was used for statistical analysis, with significance set at p < .05. Figures were prepared with GraphPad Prism Version 6 (GraphPad Software, San Diego, CA).
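
The analyses were run in SPSS; for readers who want to reproduce the approach with open-source tools, the sketch below shows the core test battery in SciPy using placeholder scores, not study data. Dunn’s multiple comparisons test is not part of SciPy but is available, for example, as posthoc_dunn in the scikit-posthocs package.

```python
from scipy import stats

handout_totals = [88, 83, 95, 73, 100, 90]   # placeholder scores
video_totals = [94, 90, 98, 79, 100, 96]
pre_conf = [4, 5, 6, 5, 7, 6]                # placeholder paired ratings
post_conf = [5, 6, 7, 6, 8, 7]

# Shapiro-Wilk: a small p-value argues against normality and motivates
# the nonparametric tests below.
print(stats.shapiro(handout_totals))

# Between-group comparison (handout vs. video): Mann-Whitney U.
print(stats.mannwhitneyu(handout_totals, video_totals,
                         alternative='two-sided'))

# Within-group comparison (pre-test vs. post-test): Wilcoxon signed ranks.
print(stats.wilcoxon(pre_conf, post_conf))

# Group comparison of an individual item: Kruskal-Wallis H test.
print(stats.kruskal(handout_totals, video_totals))
```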

Participants

The enrollment rate was 57% (45/79). Participants were 11 men (24.4%) and 34 women (75.6%). All participants had live animal experience with endotracheal intubation. The majority of participants (55.6%) reported one to five previous live animal endotracheal intubation experiences (Table 1).

Table 1: Participants’ experience with endotracheal intubation (N = 45)

Previous experiences n (%)
0 0 (0)
1–5 25 (55.6)
6–10 7 (15.6)
11–20 8 (17.8)
>20 5 (11.0)

Usefulness Ratings of Instructions

Student feedback pertaining to the perceived usefulness of the instructions revealed no significant pre-test difference (p > .05) between the handout group (median = 6; 25th–75th percentile = 6–7; range, min–max = 5–7) and the video group (median = 6; 25th–75th percentile = 6–7; range, min–max = 6–7; Figure 1A). Post-test, however, the video group (median = 9; 25th–75th percentile = 8–10; range, min–max = 7–10) found their instructional method significantly (p < .05) more useful than did the handout group (median = 7; 25th–75th percentile = 6–8; range, min–max = 4–10; Figure 1B).

Figure 1: Usefulness ratings of instructions (A) between the handout and video groups at pre-test, (B) between the handout and video groups at post-test, (C) within the handout group at pre-test and post-test, and (D) within the video group at pre-test and post-test

ns = nonsignificant
* p < .05
‡ p < .001

Within the handout group, perceived usefulness ratings of the instructions provided were significantly higher post-test (p < .05; median = 7; 25th–75th percentile = 6–8; range, min–max = 4–10) than pre-test (median = 6; 25th–75th percentile = 6–7; range, min–max = 5–7; Figure 1C). Findings were similar for the video group, whose perceived usefulness ratings of the instructions provided were significantly higher post-test (p < .001; median = 9; 25th–75th percentile = 8–10; range, min–max = 7–10) than pre-test (median = 6; 25th–75th percentile = 6–7; range, min–max = 6–7; Figure 1D).

Confidence Scores

Confidence scores were not significantly different (p > .05) between instructional types (handout vs. video) at either pre-test or post-test (Figures 2A and 2B). Confidence scores were also not significantly different (p > .05) at pre-test compared with post-test in either the handout group or the video group (Figures 2C and 2D).

Figure 2: Confidence scores: (A) between the handout and video groups at pre-test, (B) between the handout and video groups at post-test, (C) within the handout group at pre-test and post-test, and (D) within the video group at pre-test and post-test

ns = nonsignificant

Anxiety Scores

Anxiety scores were not significantly different (p > .05) between instructional types (handout vs. video) at either pre-test or post-test (Figures 3A and 3B). Anxiety scores were also not significantly different (p > .05) at pre-test compared with post-test in either the handout group or the video group (Figures 3C and 3D).

Figure 3: Anxiety scores: (A) between the handout and video groups at pre-test, (B) between the handout and video groups at post-test, (C) within the handout group at pre-test and post-test, and (D) within the video group at pre-test and post-test

ns = nonsignificant

Formal Assessment Scores

Total formal assessment scores were significantly higher (p < .05) after video versus handout instructions for all three raters (Figure 4). The average score across the three raters was also significantly higher (p < .05) for the video group (median = 94; 25th–75th percentile = 90–98; range, min–max = 79–100) than for the handout group (median = 88; 25th–75th percentile = 83–95; range, min–max = 73–100).

Figure 4: Total formal assessment scores

* p < .05
** p < .01

Individual skill assessment scores (items) generated by the three raters were pooled for all 20 items (Table 2). No significant differences were observed between groups for 17 of the 20 component skills. Significantly higher individual skill assessment scores were attained during the simulation after receiving video-based instruction compared with handout-based instruction on the following three items:

Table 2: Average assessment scores for each of the 20 individual skills

Skill and instruction type n Median 25th–75th percentile Min–Max Adjusted p
Skill 1 >.999
   Handout 22 5 5–5 5–5
   Video 23 5 5–5 4–5
Skill 2 >.999
   Handout 22 5 5–5 5–5
   Video 23 5 5–5 5–5
Skill 3 >.999
   Handout 22 5 5–5 3–5
   Video 23 5 4–5 3–5
Skill 4 >.999
   Handout 22 5 5–5 4–5
   Video 23 5 5–5 3–5
Skill 5 >.999
   Handout 22 5 5–5 4–5
   Video 23 5 5–5 3–5
Skill 6 >.999
   Handout 22 5 5–5 3–5
   Video 23 5 4–5 1–5
Skill 7 >.999
   Handout 22 5 5–5 2–5
   Video 23 5 4–5 2–5
Skill 8 .011*
   Handout 22 5 3–5 1–5
   Video 23 5 5–5 1–5
Skill 9 >.999
   Handout 22 5 4–5 1–5
   Video 23 5 5–5 1–5
Skill 10 >.999
   Handout 22 5 5–5 1–5
   Video 23 5 5–5 1–5
Skill 11 .323
   Handout 22 5 3–5 1–5
   Video 23 5 5–5 3–5
Skill 12 >.999
   Handout 22 5 5–5 1–5
   Video 23 5 5–5 4–5
Skill 13 >.999
   Handout 22 5 5–5 4–5
   Video 23 5 5–5 3–5
Skill 14 .634
   Handout 22 5 4–5 1–5
   Video 23 5 5–5 1–5
Skill 15 .004†
   Handout 22 4 2–5 1–5
   Video 23 5 4–5 2–5
Skill 16 >.999
   Handout 22 4 3–5 1–5
   Video 23 4 2–5 1–5
Skill 17 .771
   Handout 22 3 1.75–5 1–5
   Video 23 4 3–5 1–5
Skill 18 <.001‡
   Handout 22 3 1–5 1–5
   Video 23 5 5–5 1–5
Skill 19 >.999
   Handout 22 5 5–5 2–5
   Video 23 5 5–5 2–5
Skill 20 >.999
   Handout 22 5 5–5 5–5
   Video 23 5 5–5 1–5

Note: Student assessment scores generated by the three raters were averaged for each of the 20 individual skills (items). Shapiro–Wilk normality testing identified a non-Gaussian distribution. A Kruskal–Wallis analysis with Dunn’s multiple comparison test revealed significantly higher scores in the video group for Skills 8, 15, and 18.

* p < .05

† p < .01

‡ p < .001

  • Skill 8—Placement of tie to the adaptor–endotracheal tube complex: for the video instruction, median = 5; 25th–75th percentile = 5–5; range, min–max = 1–5, and for the handout instruction, median = 5; 25th–75th percentile = 3–5; range, min–max = 1–5; p < .05.

  • Skill 15—Using the anesthetic machine: for the video instruction, median = 5; 25th–75th percentile = 4–5; range, min–max = 2–5, and for the handout instruction, median = 4; 25th–75th percentile = 2–5; range, min–max = 1–5; p < .01.

  • Skill 18—Pop-off valve management: for the video instruction, median = 5; 25th–75th percentile = 5–5; range, min–max = 1–5, and for the handout instruction, median = 3; 25th–75th percentile = 1–5; range, min–max = 1–5; p < .001.

Potential Confounding Risk Factors

In univariate analysis, the distribution of total formal assessment scores (data not shown) was not associated (p > .2) with any of the potential confounding risk factors and their categories: biological gender (male or female), the 4 days of possible participation in the study, and the five reported levels of previous intubation experience.

Reliability

Classical test theory estimates of inter-rater reliability for the total formal assessment scores documented a Cronbach’s α of .92 and a Kendall’s W of .89. According to current interpretation guidelines, Cronbach’s α, a measure of internal consistency, can be considered excellent (>.9),21 and Kendall’s W, a measure of agreement, can be considered almost perfect (.81–1.00) or strong (.71–.90).21,22
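
Both classical test theory estimates are straightforward to compute directly. A minimal NumPy sketch follows, treating the three raters as the “items” of the consistency analysis; the score matrix is hypothetical, and this version of Kendall’s W omits the tie correction that a full implementation would apply.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha; scores has raters in rows, students in columns."""
    k = scores.shape[0]
    rater_vars = scores.var(axis=1, ddof=1).sum()
    total_var = scores.sum(axis=0).var(ddof=1)
    return k / (k - 1) * (1 - rater_vars / total_var)

def kendalls_w(scores):
    """Kendall's coefficient of concordance (no tie correction)."""
    m, n = scores.shape                                 # m raters, n students
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1  # ranks within rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical total scores from 3 raters for 6 students:
scores = np.array([[88., 94., 73., 100., 90., 83.],
                   [86., 95., 75., 98., 91., 84.],
                   [90., 93., 74., 99., 89., 85.]])
print(cronbach_alpha(scores), kendalls_w(scores))
```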

Generalizability theory established variance components, percentage variances, and G coefficients for the handout (Ep² = .68) and video (Ep² = .72) groups (Table 3). A benchmark of Ep² = .60 is generally accepted for low-stakes assessments.23

Table 3: Two-facet crossed design generalizability analysis

Source of measurement error    Variance component (Handout / Video)    % variance (Handout / Video)    G coefficient, Ep² (Handout / Video)
Student (object)    .089 / .062    6.1 / 8.3    .68 / .72
Item (facet)    .34 / .093    23.4 / 12.4
Rater (facet)    .015 / .001    1.0 / 0.1
Student × Rater (facet)    .015 / .006    1.0 / 0.8
Student × Item (facet)    .613 / .365    42.2 / 48.7
Item × Rater (facet)    .042 / .010    2.9 / 1.3
Residual    .339 / .212    23.3 / 28.3
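
The reported G coefficients can be reproduced from the variance components in Table 3 using the standard formula for relative decisions in a two-facet crossed design, in which only the components that interact with the student enter the error term. A minimal sketch, with 20 items and 3 raters as in this study:

```python
# Relative G coefficient for a student x item x rater crossed design:
# Ep2 = var_s / (var_s + var_si/n_i + var_sr/n_r + var_res/(n_i * n_r))

def g_coefficient(var_s, var_si, var_sr, var_res, n_items=20, n_raters=3):
    rel_error = (var_si / n_items + var_sr / n_raters
                 + var_res / (n_items * n_raters))
    return var_s / (var_s + rel_error)

# Handout group (Table 3): .089 / (.089 + .613/20 + .015/3 + .339/60)
print(round(g_coefficient(.089, .613, .015, .339), 2))  # -> 0.68
# Video group (Table 3):   .062 / (.062 + .365/20 + .006/3 + .212/60)
print(round(g_coefficient(.062, .365, .006, .212), 2))  # -> 0.72
```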

Total formal assessment scores were significantly higher for students who received video-based instructions than for those who received handout-based instructions before the endotracheal intubation simulation laboratory. The reliability of the data is supported not only by high estimates of inter-rater internal consistency (Cronbach’s α = .92) and agreement (Kendall’s W = .89) as determined by classical test theory but also by G-theory analysis. The percentage of variance for the rater facet was low in both the handout (1.0%) and the video (0.1%) groups, demonstrating that the raters were a minimal source of measurement error. Most of the variance arises from the student × item facet (42.2% in the handout group, 48.7% in the video group), which we interpret as the ability of certain items in the formal assessment tool to discriminate on the basis of student performance. In fact, 3 of the 20 items were identified in post hoc analysis as being associated with significantly higher scores for the video group (Table 2). Given these findings, it is our opinion that a Type 1 error is unlikely.

To our knowledge, this is the first study documenting better student outcomes with video-based instructions than with handout-based instructions for canine endotracheal intubation. Aulmann and colleagues10 previously documented higher student assessment scores after intubation training with simulation- versus text-based instructions. However, that study did not investigate the effect of video. Multiple explanations are possible for the higher performance scores associated with the video group in our study. It is possible that video instructions provide visual and dynamic demonstration of a task for learners, which is not as easily accomplished with a handout. The three items identified in post hoc analysis as being associated with significantly higher scores in the video group may be easier to teach or more readily understood when presented to students in this format. In addition, the comprehensive 20-item global rating scale–based assessment tool may have provided the sensitivity to detect a statistically significant difference between groups because each individual skill could be evaluated in a standardized way for all participants.

Another factor might be that the cohort of students involved in this study may be more comfortable with video instructions than with a text-based handout. Post-test, students in the video group rated the perceived usefulness of the instructions significantly higher than did the handout group (Figure 1B). Students rated the perceived usefulness of the instructions significantly higher post-test than pre-test in both video (p < .001) and handout (p < .05) groups (Figures 1C and 1D). The level of significance between these two groups was markedly different, which may support a student preference for video-based instructions.

Interestingly, we found no significant difference in anxiety scores in either between-group or within-group comparisons despite using the validated STAI (Figure 3). It is possible that state anxiety is not influenced by instructional method or that both groups felt similarly prepared to execute the assessment task. It is more likely that the participants simply did not find the assessment or environment to be anxiety provoking. Students may be less likely to become anxious during a voluntary study than during a mandatory competency-based assessment for which credit or promotion is granted. Moreover, the use of a canine simulation model may have reduced state anxiety that would otherwise have been experienced with live animal use. Our interpretation is that simulation-based training and assessment facilitates learning and performance by reducing state anxiety to a level at which it is not deleterious.

Additional practices may also be used to manage state anxiety in undergraduate veterinary students. Reducing assessment-associated anxiety and managing stress has been achieved through coaching workshops and seminars for veterinary nursing students and veterinary students,24,25 respectively. These workshops are believed to empower the student and to promote the development of coping strategies, resilience, and perspective.24,25 Another stress management strategy is mindfulness practice. Veterinary students reported decreased anxiety and depression after at least weekly mindfulness practice compared with a group practicing less frequently.26

However, a Type 2 error may have occurred with our state anxiety data, given the student population chosen for the study. Traditionally, minimally experienced students are studied because they are more likely to have elevated state anxiety scores. Unfortunately, this inexperienced group of students may also become overwhelmed without prior training, which could result in a negative experience and withdrawal from the study. We decided to study more experienced third-year students because doing so addressed this potential concern. In addition, the third year is the stage in our curriculum at which the incorporation of a competency-based assessment for endotracheal intubation was deemed most appropriate, as students are promoted from pre-clinical to clinical training. Although all participants had previous exposure to endotracheal intubation, the majority (55.6%) reported only one to five live animal experiences, and a minority reported higher levels of training (Table 1).

Additional limitations of this study must be recognized. The results may not apply to all subject material or assessments. For example, video-based cases have been reported to disrupt critical-thinking skills during problem-based learning.7 Moreover, our study did not investigate whether video-formatted instructions affect the retention of subject material or the ability to perform the task at a future date. It is our opinion that having students perform the task proficiently during initial simulation-based training will promote skill set development before the attainment of undesirable habits that could compromise patient safety. Another limitation is that we did not investigate underlying student learning styles or inquire into how learning occurred. In addition, the voluntary nature of the study could lead to a selection bias because academically strong, diligent, or dynamic learners may be more likely to participate.

A final limitation of the study pertains to the generalizability of the findings. A relatively small population of students was studied, which may not translate to the greater student population at other institutions. The G coefficients of .68 (handout group) and .72 (video group) were higher than the generally accepted benchmark of .60 for low-stakes assessments such as the voluntary laboratory described in this study.23 However, these G coefficients would not be acceptable for higher stakes assessments such as promotion within a veterinary curriculum or credentialing.23 For reference, Williamson and colleagues23 summarized G coefficients from veterinary studies of skill assessment: .56, digital assessment of cadaveric celiotomy closure; .32–.56, three objective structured clinical evaluation stations; and .23–.61, feline abdominal palpation.23,27,28

To rigorously evaluate the effect of instructional type, we prioritized keeping the study double-blinded with respect to the students and the raters. Students were presumably not aware of the simulation laboratory content and did not have access to the instructions until immediately before the competency-based assessment. Student–rater communication, including debriefing, was not permitted to limit bias during assessment scoring and to avoid the possibility that students might share feedback with future participants. It is possible that students did not honor the request to refrain from discussing the study and that a few participants may have studied for the simulation. Fortunately, univariate analyses of the total assessment scores with a Kruskal–Wallis test did not identify the 4 days over which the study was conducted to be a confounding factor (all ps > .2). Therefore, we believe that most students remained blinded over the study period.

We conclude that many individuals in our student population would benefit from video-formatted instructions for endotracheal intubation task training and assessment. Formulating teaching materials and competency-based assessments for the simulation laboratory should be guided by educational research to optimize confidence and performance and to minimize anxiety in our student population. Further research is necessary to determine the effect of instructional format on skill retention.

Acknowledgment

We thank the BJ Hughes Centre for Clinical Learning at the Western College of Veterinary Medicine (WCVM) and the 45 participants from the WCVM Class of 2019. Special thanks to Malcolm Whyte for video recording and editing, Kim Dillistone for assisting with the simulation laboratory, Dr. Sarah Parker (WCVM) for statistical support, and Dr. Kent Hecker (University of Calgary Veterinary Medicine) for support with generalizability theory analysis.

References

1. Langebæk R, Tanggaard L, Berendt M. Veterinary students’ recollection methods for surgical procedures: a qualitative study. J Vet Med Educ. 2015;43(1):64–70. https://doi.org/10.3138/jvme.0315-039R1.
2. Langebæk R, Nielsen SS, Koch BC, et al. Student preparation and the power of visual input in veterinary surgical education: an empirical study. J Vet Med Educ. 2016;43(2):214–21. https://doi.org/10.3138/jvme.1015-164R.
3. Ahmad M, Sleiman NH, Thomas M, et al. Use of high-definition audiovisual technology in a gross anatomy laboratory: effect on dental students’ learning outcomes and satisfaction. J Dent Educ. 2016;80(2):128–32.
4. Reo JA, Mercer VS. Effects of live, videotaped, or written instruction on learning an upper-extremity exercise program. Phys Ther. 2004;84(7):622–33. https://doi.org/10.1093/ptj/84.7.622.
5. Stanczyk NE, de Vries H, Candel MJJM, et al. Effectiveness of video- versus text-based computer-tailored smoking cessation interventions among smokers after one year. Prev Med. 2016;82:42–50. https://doi.org/10.1016/j.ypmed.2015.11.002.
6. Cheung KL, Schwabe I, Walthouwer MJL, et al. Effectiveness of a video- versus text-based computer-tailored intervention for obesity prevention after one year: a randomized controlled trial. Int J Environ Res Public Health. 2017;14(10):1275. https://doi.org/10.3390/ijerph14101275.
7. Basu RR, McMahon GT. Video-based cases disrupt deep critical thinking in problem-based learning. Med Educ. 2012;46(4):426–35. https://doi.org/10.1111/j.1365-2923.2011.04197.x.
8. Jones JL, Rinehart J, Spiegel JJ, et al. Development of veterinary anesthesia simulations for pre-clinical training: design, implementation, and evaluation based on student perspectives. J Vet Med Educ. 2017;45(2):19. https://doi.org/10.3138/jvme.1016-163.
9. Musk GC, Collins T, Hosgood G. Teaching veterinary anesthesia: a survey-based evaluation of two high-fidelity models and live-animal experience for undergraduate veterinary students. J Vet Med Educ. 2017;44(4):590–602. https://doi.org/10.3138/jvme.0216-043R1.
10. Aulmann M, März M, Burgener I, et al. Development and evaluation of two canine low-fidelity simulation models. J Vet Med Educ. 2015;42(2):151–60. https://doi.org/10.3138/jvme.1114-114R.
11. Hardie EM. Current methods in use for assessing clinical competencies: what works? J Vet Med Educ. 2008;35(3):359–68. https://doi.org/10.3138/jvme.35.3.359.
12. Scalese RJ, Issenberg SB. Effective use of simulations for the teaching and acquisition of veterinary professional and clinical skills. J Vet Med Educ. 2005;32(4):461–7. https://doi.org/10.3138/jvme.32.4.461.
13. Valliyate M, Robinson NG, Goodman JR. Current concepts in simulation and other alternatives for veterinary education: a review. Vet Med (Praha). 2012;57(7):325–37.
14. Kneebone R, Baillie S. Contextualized simulation and procedural skills: a view from medical education. J Vet Med Educ. 2008;35(4):595–8. https://doi.org/10.3138/jvme.35.4.595.
15. Ilgen JS, Ma IWY, Hatala R, et al. A systematic review of validity evidence for checklists versus global rating scales in simulation-based assessment. Med Educ. 2015;49(2):161–73. https://doi.org/10.1111/medu.12621.
16. Read E, Bell C, Rhind S, et al. The use of global rating scales for OSCEs in veterinary medicine. PLoS One. 2015;10(3):e0121000. https://doi.org/10.1371/journal.pone.0121000.
17. Eysenck M, Calvo M. Anxiety and performance: the processing efficiency theory. Cogn Emot. 1992;6(6):409–34.
18. Cohen RA. Yerkes–Dodson law. In: Kreutzer J, DeLuca J, Caplan B, editors. Encyclopedia of clinical neuropsychology. 1st ed. New York: Springer; 2011. p. 2737–8.
19. Langebaek R, Eika B, Jensen AL, et al. Anxiety in veterinary surgical students: a quantitative study. J Vet Med Educ. 2012;39(4):331–40. https://doi.org/10.3138/jvme.1111-111R1.
20. Spielberger CD, Gorsuch RL, Lushene R, et al. Manual for the State–Trait Anxiety Inventory (Form Y). Palo Alto (CA): Consulting Psychologists Press; 1983.
21. Royal KD, Hecker KG. Understanding reliability: a review for veterinary educators. J Vet Med Educ. 2016;43(1):14. https://doi.org/10.3138/jvme.0315-030R.
22. LeBreton JM, Senter JL. Answers to 20 questions about interrater reliability and interrater agreement. Organ Res Methods. 2008;11(4):815–52. https://doi.org/10.1177/1094428106296642.
23. Williamson J, Farrell R, Skowron C, et al. Evaluation of a method to assess digitally recorded surgical skills of novice veterinary students. Vet Surg. 2018;47(3):378–84. https://doi.org/10.1111/vsu.12772.
24. Dunne K, Moffett J, Loughran ST, et al. Evaluation of a coaching workshop for the management of veterinary nursing students’ OSCE-associated test anxiety. Ir Vet J. 2018;71(1):15. https://doi.org/10.1186/s13620-018-0127-z.
25. Hahm N, Augustin S, Bade C, et al. Test anxiety: evaluation of a low-threshold seminar-based intervention for veterinary students. J Vet Med Educ. 2016;43(1):47–57. https://doi.org/10.3138/jvme.0215-029R1.
26. Correia HM, Smith AD, Murray S, et al. The impact of a brief embedded mindfulness-based program for veterinary students. J Vet Med Educ. 2017;44(1):125–33. https://doi.org/10.3138/jvme.0116-026R.
27. Hecker K, Read EK, Vallevand A, et al. Assessment of first-year veterinary students’ clinical skills using objective structured clinical examinations. J Vet Med Educ. 2010;37(4):395–402. https://doi.org/10.3138/jvme.37.4.395.
28. Williamson JA, Hecker K, Yvorchuk K, et al. Development and validation of a feline abdominal palpation model and scoring rubric. Vet Rec. 2015;177(6):151. https://doi.org/10.1136/vr.103212.