Volume 49 Issue 6, December 2022, pp. 790-798

Feedback has been shown to be one of the most powerful and effective influences on student achievement; however, the optimal method for providing feedback to trainees during veterinary skills training has yet to be determined. A prospective mixed-methods study was undertaken to evaluate student perceptions and performance outcomes with self-assessment using video- or instructor-delivered feedback during skills training using a model. Forty participants naïve to intravenous (IV) catheter placement were randomly assigned either to self-assessment using video or to instructor-directed feedback. A questionnaire probing participants’ perceptions of their knowledge level and confidence in their skills was completed before and after the training, and an interview was conducted at study completion. Final skill performance was recorded using video capture to permit blinded evaluation using a standard assessment tool. A quantitative evaluation of the performance and questionnaire scores, as well as a qualitative assessment of the interviews, was performed. Questionnaire scores were significantly higher in the post-study questionnaire for 12 of the 14 questions in both groups. Students assigned to the instructor-directed group had significantly higher scores than students in the self-directed group on skill performance (p < .05). Self-reported confidence in knowledge and skill related to the IV catheterization technique improved with both self-directed feedback using video and instructor-directed feedback. Skill performance, however, was superior following instructor-directed feedback. Participants reported positive experiences with the models used for skills training and valued the learning materials, including the video, and the guidance available during learning.

Veterinary trainees must be competent in a vast array of clinical skills across a broad range of species at graduation.1 To achieve competency in a clinical skill, learners must receive detailed instructions on how to perform the task, be familiar with the standard to which the task is to be performed, have ample opportunity to practice, and receive high-quality feedback on their performance.2,3 In addition to learners achieving competency in clinical skills, their development of self-efficacy— belief in their capabilities to organize and execute a task—must be supported by their environment to maintain their motivation for learning.4 Finally, as we are training individuals to enter a self-regulating profession, instructors need to encourage and develop learners’ skill in self-assessment, including evaluation of personal performance relative to established standards.5 While providing feedback and promoting the development of self-efficacy and self-regulated learning are considered crucial for the mastery of skills, how to achieve these goals in teaching learners who have a range of skills, and with resource limitations, remains an area of continued investigation.

Traditionally, skill training sessions in veterinary medicine have been delivered in a laboratory setting with faculty directing student activities in large groups within a defined time limit. With a relatively high student-to-instructor ratio, instructors observe learners individually performing a skill and often provide verbal feedback in a limited time frame. On one hand, while this feedback is delivered in a timely, specific, and contextual manner (all ideal and desirable characteristics of feedback), students often express the perception that they receive insufficient feedback overall during clinical skill training. Faculty, on the other hand, may perceive that ample feedback was provided to learners during skills training sessions. Interestingly, complaints surrounding feedback are common across a variety of disciplines in both professional and higher education.6–8

Several possible explanations exist for the mismatch in perceptions between trainees and trainers regarding the adequacy of feedback. First, it is feasible that trainers are providing adequate verbal feedback but the recipients may not be recognizing or assimilating the verbal dialogue as feedback. From the learners’ perspective, performing a clinical skill requires simultaneously recalling knowledge from their previous learning and incorporating information relevant to the immediate situation while physically performing the technical skill and storing the new learning into long-term memory.9,10 Cognitive load theory holds that the demands on a learner during problem solving can be large enough that working memory capacity is exceeded; as a result, cognitive processing and long-term memory formation may be impaired.11,12 More recently, the role of excessive cognitive load on the learner during medical training has been recognized as affecting working memory and learning.10,13,14 In the skills training delivery model common within veterinary curricula, the cognitive load of the learners is plausibly at a level that impairs working memory to such a degree that they are not capable of processing, using, or storing the verbal feedback and associated learning delivered by an instructor.

An alternative explanation regarding the discordance in experiences related to feedback between instructors and learners is that feedback may be poorly timed, or perhaps it is of insufficient quantity or quality to meet students’ expectations. Providing more time than is typically allotted for instructors to work one-on-one with students during skills training would reduce time restrictions on quality feedback delivery and potentially encourage student self-reflection; however, this would require considerable resource investment in terms of faculty time. Furthermore, if in fact verbal feedback from an instructor is contributing to the learner’s cognitive load to the extent that learning is impaired, having more individualized instructor-delivered feedback might not improve the learning environment for the learner.

While incorporating in-person expert-delivered feedback is one method for learners to obtain feedback, the potential use of technology to assist learners’ self-assessment against a gold standard offers the potential for learners to self-assess in a self-paced and resource-efficient manner. Video-recording skills for subsequent unsupervised or supervised feedback has been explored in basic surgical skills with positive results.15,16 Of note, earlier studies exploring learners’ preferences have shown instructor-directed feedback as preferable; however, the results on outcomes across different feedback delivery systems are variable, warranting further exploration of outcomes and learner preferences with current technologies in different contexts.17,18

To date, student preferences for feedback method during veterinary skills training and the impact of feedback method on the subsequent performance of technical skills, either immediately after the instruction period or following a time delay, have not been thoroughly explored. As veterinary colleges expand their clinical skills training facilities and options for student training, ample opportunity exists to alter how student skills training and faculty engagement are facilitated. The overall goal of our research is to optimize training in skills core to the Doctor of Veterinary Medicine (DVM) curriculum. The specific objective of the current study is to determine and compare students’ experiences, confidence levels, and resulting competencies following either instructor feedback or self-directed feedback using self-assessment of video performance during intravenous (IV) catheter placement in a simulated setting.


Following approval of the project by the Institutional Research Ethics Board at the University of Guelph (REB #16JN044), participants were recruited through convenience sampling. Specifically, brief announcements on the study and the request for volunteers were made to students between classes, and flyers were placed on class bulletin boards. Criteria for entry included status as a student at the University of Guelph and no prior experience performing IV catheter placement. Students volunteered through email communication and were assigned a time to participate in the study.


The study took place in a laboratory with one participant scheduled at a time. The layout of the laboratory is outlined in Figure 1. On arrival at the laboratory at their predetermined time, students were provided with a brief description of the study and a consent form. After signing the consent form, participants were assigned a number and then randomly assigned to either the self-directed (SD) or instructor-directed (ID) feedback groups. Participants then completed a pre-study questionnaire containing 14 questions, each with a 5-point Likert scale ranking self-confidence related to knowledge and skill associated with IV catheter placement, plus 2 open-ended short-answer questions addressing, respectively, confidence and knowledge regarding performing IV catheterization and learner-identified preferences for feedback. Students were then provided access to a computer showing a slide presentation containing background information and a step-by-step outline of how to place an IV catheter in a canine forelimb model. The learning material focused on principles surrounding IV catheter placement with some content addressing what not to do. A video demonstrating the process for setting up the necessary supplies, placing the IV catheter, and securing the catheter with tape in the model was included in the presentation. All the background learning material was created by the first author (CLK). Students progressed through the online material at their own pace and were directed to indicate to the investigator overseeing the session when they were ready to move to the next phase. Once participants indicated they had completed reviewing the instructional materials, they were relocated to a nearby benchtop space in the laboratory with a model and the necessary supplies as shown in Figure 2. In brief, the model consisted of a 3D-printed canine forelimb model with silastic tubing in the location of the cephalic vein.
The distal end of the silastic tubing was capped while the proximal end was connected via an IV line to a suspended IV bag containing physiologic saline containing red dye. Participants were permitted up to 15 minutes to practice IV catheter placement, during which time they were allowed access to the learning materials, including the instructional video.

Figure 1: Layout of laboratory illustrating the location of the various activities performed by participants

Figure 2: Photograph of skill performance area showing the model and supplies used for catheter placement with mounted video-recording device used for learner self-assessment

Following the initial practice session, participants in both groups had a maximum of 20 minutes for guided skill performance. Individuals in the SD group performed the skill while video recording their performance using an iPad tablet (Apple, Los Altos, CA, USA) directed over the bench work area where the student performed the catheter placement. The tablet was held in place using a purpose-built holder that allowed students to swivel the tablet to review their video recording. Participants were directed on how to perform the video capture of their performance to permit self-assessment; however, they were not directed as to the frequency or methods that they could use to assess their performance. They were also allowed continued access to the learning materials, including the instructional video, throughout the guided skill performance period. Participants assigned to the ID group had an instructor provide verbal feedback while they performed the skill during the guided skill performance period. The same individual (CLK), an experienced instructor, provided feedback to all participants in the ID group. Participants were directed to initiate the procedure, and feedback was provided for each step of performing the skill. When the instructor noted an error in skill performance, the student was asked to pause and guidance was provided as to how to proceed. After the skill was completed, the instructor also provided feedback on procedures participants performed well and areas they could continue to improve upon. Participants were probed regarding their understanding of the feedback and were free to ask questions and repeat either the entire procedure or parts of the procedure based on their performance.

Following the 20-minute guided skill performance period, participants moved to a new area of the laboratory with an identical benchtop setup with a model and supplies. A second iPad tablet was situated over the benchtop to permit recording of individual performance. The participants were then instructed to place a catheter in the model with no further feedback, and the procedure was video recorded. The video was identified at the start of the recording by the assigned participant’s identification number. After performing the skill, participants completed a post-study questionnaire, identical to the pre-study questionnaire, with the same 14 questions, each with a 5-point Likert scale ranking self-confidence related to knowledge and skill associated with IV catheter placement, plus the 2 short-answer open-ended questions related to the skill and to learner preference for feedback. The final step for the participants was a recorded one-on-one semi-structured interview exploring their opinions on the models, their learning using the models, using self-directed feedback with video capture, receiving instructor feedback, and their perceived confidence in performing IV catheterization on a live animal. The interview questions gave the participants the opportunity to express how they felt about their experience learning this new skill. The interviews were audio-recorded on an iPad, and the investigator who conducted the interview was not the same individual who provided feedback or evaluated the participants’ performance.

The final video recordings of the participants placing an IV catheter were transferred to a computer and subsequently reviewed and evaluated using a 31-item assessment tool, in which the performance for each item was scored as 0 or 1, resulting in a maximum score of 31. The assessment tool was based on directions provided and demonstrated in the instructional material provided to the participants. Prior to reviewing the participant videos, three experts independent of the study were provided with a copy of the instructional material that had been given to the learners and the scale for assessment of the videos. The order of video assessment by each expert was randomized, and each expert reviewed the videos independently.

Analysis of Quantitative Data

Statistical analysis was performed using standard statistical software (SAS, version 9.4, SAS Institute Inc., Cary, NC, USA). Non-parametric statistical methods were used for the analysis of questionnaire data and performance scores. Specifically, a Mann–Whitney U test was used to compare differences between groups in questionnaire scores at the pre- and post-study time periods, as well as the performance scores. A Wilcoxon signed-rank test was used to compare the questionnaire scores at the pre- versus post-sampling times within a group. Statistical significance was set at p < .05. Inter-rater reliability was tested using Cronbach’s alpha, and agreement between raters was evaluated using Lin’s concordance correlation coefficient.
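The between-group and within-group comparisons described above can be sketched in Python using SciPy; this is purely illustrative (the study itself used SAS 9.4, and the Likert scores below are made-up placeholders, not study data):

```python
# Illustrative sketch of the study's non-parametric tests using SciPy.
# The actual analysis was run in SAS 9.4; the scores below are hypothetical.
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical per-participant questionnaire scores (5-point Likert scale)
sd_pre  = [2, 3, 2, 3, 1, 2, 3, 2, 2, 3]
sd_post = [4, 5, 4, 4, 3, 4, 5, 4, 4, 5]
id_post = [5, 5, 4, 5, 4, 5, 5, 4, 5, 5]

# Between-group comparison (SD vs. ID) at one time point: Mann-Whitney U
u_stat, u_p = mannwhitneyu(sd_post, id_post, alternative="two-sided")

# Within-group pre- vs. post-comparison (paired data): Wilcoxon signed-rank
w_stat, w_p = wilcoxon(sd_pre, sd_post)

alpha = 0.05  # significance threshold used in the study
print(f"Mann-Whitney U = {u_stat}, p = {u_p:.4f}, significant: {u_p < alpha}")
print(f"Wilcoxon W = {w_stat}, p = {w_p:.4f}, significant: {w_p < alpha}")
```

The Mann–Whitney U test handles the two independent groups, while the Wilcoxon signed-rank test respects the paired pre/post structure within a group, matching the design described above.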

Qualitative Data Analysis

Thematic analysis techniques were used to evaluate the qualitative data obtained from the one-on-one interviews with participants. The audio recordings were transferred from the iPad to a computer and transcribed verbatim. The transcripts were systematically and iteratively checked for accuracy of representation, and they were then used to conduct a thematic analysis based on the steps outlined by Braun and Clarke.19,20 In brief, one of the authors (MA) generated the initial codes for idea points expressed by participants surrounding each discussion area. The codes from all the participants were then compared with one another and grouped based on similarity of ideas. A list of major themes and takeaways was constructed to cover each category of grouped ideas. The coding of data and naming of themes was reviewed and cross-checked by a second author (DKK) for consistency, clarity, and accuracy of representation.


The median age of study participants was 23 years (range: 21–33). Thirty-four participants identified as female, five as male, and one did not disclose their gender identity. Twenty-six participants were enrolled in the DVM program, and five were in a graduate-level (MSc) program. The remaining participants were undergraduate students enrolled in either the Animal Biology, Animal Science, Biological Science, Human Kinetics, or Engineering programs at the University of Guelph. All participants completed the pre- and post-study questionnaires in full.

Questionnaire Results

The results of the pre- and post-study questionnaires in both the SD and ID groups are presented in Tables 1 and 2. The pre- and post-study questionnaire content was split into questions probing students’ confidence related to knowledge of the procedure (Table 1) and their confidence in performing the procedure (Table 2). Questionnaire scores were significantly higher in the post-study questionnaire than in the pre-study questionnaire for 12 of the 14 questions in both groups of students. The two questions in which no significant difference between pre- and post-study questionnaire scores was detected related to confidence in knowledge surrounding catheter placement. There was no significant difference in the scores between groups for any of the questions in the pre- or post-study questionnaire.


Table 1: Student responses to questions regarding their confidence in their understanding and knowledge related to IV catheter placement before (pre-) and after (post-) exposure to the skill in the SD (n = 20) and ID (n = 20) groups


Survey question: students’ confidence in their understanding and knowledge of … Pre- or post-exposure (SD/ID group) Strongly disagree (%) Disagree (%) Neutral (%) Agree (%) Strongly agree (%) Mean p
The purpose of IV catheter placement Pre- (SD) 0 0 0 35 65 4.6 .188
Post- (SD) 0 0 0 20 80 4.8
Pre- (ID) 5 0 15 35 45 4.15 .062
Post- (ID) 0 0 0 10 90 4.9
The importance of being able to perform IV catheterization correctly Pre- (SD) 0 0 0 25 75 4.7 .453
Post- (SD) 0 0 5 5 90 4.85
Pre- (ID) 0 0 15 10 75 4.6 .062
Post- (ID) 0 0 0 5 95 4.95
The procedure for inserting an IV catheter Pre- (SD) 25 15 30 30 0 2.6 < .001
Post- (SD) 0 0 0 35 65 4.6
Pre- (ID) 20 20 40 20 0 2.6 < .001
Post- (ID) 0 0 0 20 80 4.8
How to be time efficient during the procedure Pre- (SD) 15 10 60 15 0 2.7 < .001
Post- (SD) 0 0 10 70 20 4.1
Pre- (ID) 5 20 40 30 5 3.1 < .001
Post- (ID) 0 0 5 55 40 4.3
How to identify incorrect technique during the procedure Pre- (SD) 20 25 40 15 0 2.5 .001
Post- (SD) 0 15 35 45 5 3.4
Pre- (ID) 10 35 40 15 0 2.6 < .001
Post- (ID) 0 0 10 60 30 4.2
The steps involved in placing an IV catheter Pre- (SD) 20 40 30 10 0 2.3 < .001
Post- (SD) 0 0 0 25 75 4.7
Pre- (ID) 25 25 30 20 0 2.4 < .001
Post- (ID) 0 0 0 35 65 4.6
How to recognize if the procedure is completed correctly Pre- (SD) 15 30 40 15 0 2.5 < .001
Post- (SD) 0 5 10 40 45 4.2
Pre- (ID) 10 25 40 25 0 2.8 < .001
Post- (ID) 0 0 10 30 60 4.5
How to recognize when an error has occurred during the procedure Pre- (SD) 10 30 35 25 0 2.7 .003
Post- (SD) 0 15 25 40 20 3.6
Pre- (ID) 10 30 45 15 0 2.6 < .001
Post- (ID) 0 0 15 55 30 4.1
The equipment necessary to perform the procedure Pre- (SD) 10 25 40 25 0 2.8 < .001
Post- (SD) 0 0 0 25 75 4.7
Pre- (ID) 15 20 25 40 0 2.9 < .001
Post- (ID) 0 0 0 20 80 4.8
How to teach someone else how to perform the procedure Pre- (SD) 55 25 15 5 0 1.7 < .001
Post- (SD) 0 5 25 65 5 3.7
Pre- (ID) 60 15 15 10 0 1.7 < .001
Post- (ID) 0 5 15 60 20 3.9

IV = intravenous; SD = self-directed; ID = instructor-directed

Note: Significance between time periods was calculated using the non-parametric Wilcoxon signed-rank test.


Table 2: Student responses to questions regarding their confidence related to performing IV catheter placement before (pre-) and after (post-) exposure to the skill in the SD (n = 20) and ID (n = 20) groups.


Survey question: students’ confidence in … Pre- or post-exposure (SD/ID group) Strongly disagree (%) Disagree (%) Neutral (%) Agree (%) Strongly agree (%) Mean p
The placement of an IV catheter Pre- (SD) 30 30 30 10 0 2.2 < .001
Post- (SD) 0 0 20 50 30 4.1
Pre- (ID) 30 15 30 10 15 2.6 < .001
Post- (ID) 0 0 0 40 55 4.6
The placement of an IV catheter efficiently Pre- (SD) 45 35 15 5 0 1.8 < .001
Post- (SD) 0 10 20 55 15 3.7
Pre- (ID) 45 20 30 0 0 1.8 < .001
Post- (ID) 0 5 0 70 25 4.1
The placement of an IV catheter without hesitation Pre- (SD) 40 35 20 5 0 1.9 < .001
Post- (SD) 0 15 35 35 15 3.5
Pre- (ID) 35 20 30 10 5 2.3 < .001
Post- (ID) 0 5 10 55 30 4.1
The mental preparation for performing IV catheter placement Pre- (SD) 10 5 30 50 5 3.3 < .001
Post- (SD) 0 0 20 50 30 4.1
Pre- (ID) 5 15 15 50 15 3.5 .002
Post- (ID) 0 5 5 40 50 4.3

IV = intravenous; SD = self-directed; ID = instructor-directed

Note: Significance between time periods within a group was calculated using the non-parametric Wilcoxon signed-rank test.

All participants left the open-ended question probing further aspects of IV catheterization blank in both the pre- and post-study questionnaires. When asked to rank their preference for receiving feedback, in the pre-study questionnaire, all participants indicated a preference for instructor-provided feedback over self-directed feedback. In the post-study questionnaire, all participants in the ID group maintained the same ranking of learning preference, while one participant in the SD group indicated a preference for self-directed versus instructor-directed feedback.

Performance Scores

The performance scores (out of 31) for participants in the ID group (median: 30, range: 23–31) were significantly higher than the scores for students in the SD group (median: 28, range: 15–31) (p < .05). Individual rater scores are shown in Figure 3. Inter-rater reliability was 0.968, and Lin’s concordance values for agreement between rater pairs were 0.988, 0.989, and 0.997.
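Both agreement statistics can be computed directly from paired rater scores; a minimal Python sketch follows, with Cronbach's alpha computed by treating the three raters as "items" (a common approach, though the exact SAS procedure used is not specified here) and hypothetical checklist scores rather than the study's data:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def cronbach_alpha(ratings):
    """Cronbach's alpha; rows = raters (treated as items), cols = participants."""
    r = np.asarray(ratings, float)
    k = r.shape[0]                         # number of raters
    item_vars = r.var(axis=1, ddof=1).sum()
    total_var = r.sum(axis=0).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 31-point checklist scores from three raters (not study data)
rater_a = [30, 28, 25, 31, 27, 29]
rater_b = [30, 27, 25, 31, 28, 29]
rater_c = [29, 28, 26, 31, 27, 30]

print(f"alpha   = {cronbach_alpha([rater_a, rater_b, rater_c]):.3f}")
print(f"CCC a-b = {lins_ccc(rater_a, rater_b):.3f}")
```

Unlike a plain Pearson correlation, Lin's coefficient penalizes systematic differences in rater means and variances, which is why it is reported per rater pair.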

Figure 3: Box plot of performance scores for participants assigned to the SD (n = 20) (dots) or ID (n = 20) (diagonal lines) feedback groups as determined by three independent experts

SD = self-directed; ID = instructor-directed

Notes: For each box, the horizontal line within the box represents the median value, and the upper and lower boundaries represent the seventy-fifth and twenty-fifth percentiles, respectively. Whiskers represent the 2.5th and 97.5th percentiles.

* Medians overlap with the seventy-fifth percentile.

Interview Results

A total of 40 interviews were conducted, each lasting between 5 and 20 minutes. Three themes were identified: availability of guidance, accessibility of learning material, and the realistic nature of the model used in performing the skill. Example quotes for each theme are noted and discussed below, with the participant number in parentheses.

Theme 1: Availability of Guidance

Guidance was identified as being key to making sure participants were performing the task correctly. A popular opinion among most participants was the value of instructor-directed feedback. All participants who had instructor-delivered feedback expressed the feeling that they greatly benefited from having an experienced individual critique their technique. The feedback from the instructor was considered to have provided students a boost of confidence in performing the procedure because they received “reassurance that they were doing it correctly” (P06). Some participants in the SD group indicated they did not feel confident in their technique after completing the procedure without having an expert opinion. In this group, a common concern among participants was the lack of reassurance or verification that they were doing the procedure correctly. For example, a participant mentioned, “I felt that I was doing everything correctly, but I am unsure if I did something wrong along the way” (P05). Other comments from some of the SD participants included that they were “surprised” (P30) by how useful the method was and that they were “not expecting it” (P19). However, this sentiment was not unanimous among the participants in the SD feedback group.

Within the SD group, learners who perceived a lack of guidance commented specifically on their inability to provide themselves with effective feedback, such that they did not know “what is right from wrong” (P25). The participants’ ability to trust in their work was directly related to their level of confidence in the guidance available to them in performing the procedure. Participants who expressed the need for additional guidance also expressed a feeling that they were not able to perform the procedure correctly. The following comment was typical of participants from the SD group:

I feel like I would’ve preferred to have an instructor of some sort to critique it. Just to be able to give tips I guess, because I can see where I went wrong between watching the video and the demonstration and then watching my own video but for some things I would just be, like, how can I improve this, because I can see where I went wrong but I don’t know necessarily how to improve it. (P10)

Notably, participants from both groups described a combination of instructor-directed and self-directed guidance as the most effective way to learn the procedure:

I still think maybe if there was a bit of the instructor and a bit of the self together it would have the best outcome rather than each individually. Because self is really good because I learned on my own rather than just depending on someone, it’s a common thing where you see someone do it and be, like, yeah I know it, but you don’t really, because you didn’t try it on your own and you didn’t fail and you didn’t see how you can improve, so I think a mixture of both is good. (P30)

Theme 2: Accessibility of Learning Materials

Participants from the SD group described relying on the learning material more heavily than the participants from the ID group, who mostly said they relied on the learning material as a form of re-affirmation of their process in conducting the procedure.

Students expressed that they felt their degree of understanding of the skill depended on their degree of interaction with the learning material. Participants who indicated that they relied heavily on the learning material collectively expressed a higher level of understanding of the skill, and this helped them perform the steps correctly. Specifically, from the SD group, participants who used the learning materials during their practice time praised this way of learning, as it enabled them to watch the video as many times as they wished, stopping it at any point, and progressing at their own pace to learn the skill:

It was effective being able to watch yourself do it and self-critique. Self-critique and having the time to practice and be like oh that’s what I’m doing wrong and then have another opportunity to do it. (P03)

I had the option of recording it, watching it back and I still had the option of going back to the videos to double-check my procedure versus theirs, so I think that was really helpful. I am not sure I used that quite enough in this scenario, but certainly watching the playback of both videos, maybe simultaneously as well, would be very helpful. (P37)

However, participants who expressed the feeling that they didn’t quite understand the skill or struggled with the performance also said that they would have liked more learning material or even commented that they would have preferred having the learning material close by for reference.

Both groups regarded the presentation and the video as useful tools. When discussing the learning material, all students praised the use of the video demonstration. A recurring comment across all participants was that they enjoyed how the video was sandwiched between the learning materials. They began by watching the video as beginners, went through the process step by step in the presentation, and then watched the video again to make sure the concepts they had learned were solidified: “I liked how there was the video at the beginning, you read through all the material, and there was the same video at the end. So I was able to combine everything that I had read and watched” (P07).

When asked about improvements to the learning material, a common answer was either that “the learning material did a good job covering the basic information” (P10) or that it should provide additional detail covering common mistakes and corrective actions when “things go wrong” (P40). This comment was particularly recurrent among participants from the SD group.

Theme 3: Realistic Nature of the Model Helped in Performing the Skill

A common theme across all participants was how they appreciated and enjoyed having a physical model of a dog leg with the shape, feel, and look of an actual dog leg. The majority of participants commented on the realistic nature of the model, highlighting features such as the fur coating and the pink-colored fluid simulating blood flow, which made the environment feel real without the added pressure of working on a live animal. On the one hand, the realistic nature of the model dog leg facilitated the participants’ learning as they referred to key anatomical features of the forelimb for taping or hand placement, which is essential to the success of performing the procedure. On the other hand, most participants recognized that these models could not be a complete replacement for training on live animals, because many external factors are not accounted for in the simulation, such as the innate flinching response. Participants also noted that because the leg model was not attached to a body, their ability to move and position the forelimb was unrealistic. Furthermore, a common recommendation for future applications was to have different-sized models to mimic varying animal sizes: “You can probably have different size models because you obviously have different types of animals; cats are probably a lot smaller than probably the model, and there’s probably a lot larger dogs than the model also. So maybe different variations of the model would be helpful” (P10). Several participants appreciated the use of the same model in the learning material and practice. It allowed them to solidify the concepts by visually re-creating the steps introduced in the learning materials.

As valuable as the model was in teaching the participants how to perform the procedure, many also expressed the view that the model and learning material did not provide sufficient information to cover the consequences of what would happen if a mistake occurred while performing the technique of inserting an IV catheter. This sentiment was voiced across both groups. Some participants, even though they felt confident in their technique, recognized that there would be a major difference between performing the procedure on a model and performing it on a live animal. These participants, from both groups, expressed that they would require more time practicing before they could move on to performing the technique on a live animal.

This mixed-methods approach to determining student perceptions and performance outcomes during model-based skills training, with either self-assessment using video or instructor-delivered feedback, provided rich and informative results. After completing the activities in this investigation, all participants reported a significant increase in knowledge about and confidence in performing IV catheterization. Participants relying on self-directed (SD) feedback articulated a desire for confirmation of their performance by an expert, but some learners also identified the value of self-assessment and reflection. While participants in the instructor-directed (ID) group received higher scores on skill performance than students in the SD group, the scores were overall very high, indicating substantial skill acquisition in both groups. Students expressed positive opinions regarding the use of models and provided useful insights on model design and instruction.

No participant had performed IV catheterization in either a model or a live animal prior to the investigation, and thus it is not surprising that participants identified a substantial change in their knowledge, confidence, and skill level in placing an IV catheter following the experiential activities in the study. Nor is it surprising that no difference was observed between the two groups in participant self-scoring of confidence in skill acquisition. This finding is consistent with the research literature demonstrating the limited ability of novice learners to self-assess their performance.5,21–23 In this investigation, individuals who were scored lower on their performance by an expert likely did not identify errors they made while performing the skill. Another factor that may have contributed to participants’ high confidence levels was the nature of the model used for the skill evaluation. As noted by students, the realistic nature of the model, specifically the flash of colored fluid in the catheter indicating successful placement within the lumen of the vessel, gave participants confirmation that the catheter had been correctly located. While the actual placement of the catheter in the vessel was only a small component of the total score, this immediate feedback from the model itself indicated to the participant that several steps had been followed correctly. Interestingly, the interview data did identify some participants in the SD group who were looking for additional guidance or reassurance that they had performed the skill correctly, a positive indicator that students were open to further feedback.

In the current study, all participants were provided with identical learning material, which included slides describing how to perform the procedure, along with a video demonstrating the procedure step by step. Participants commented on the value of the learning material, in particular the video content. Their identification of video content as valuable for skill instruction is consistent with other reports of learners perceiving video as more valuable than text content.24,25 While the learning materials were accessible to all participants during the practice session, in retrospect their accessibility was not optimized, as learners had to physically turn from their workstation to view the video or slides. Use of the material during the practice session for both groups, and during the self-directed feedback period for the SD group, may have been greater had the video content been located close to the workstation or on a portable device. Unfortunately, participants’ use of the learning material was not recorded; this information could have provided further data regarding learner practices and outcomes. In addition to the inconvenient physical location of the learning material, several participants noted a lack of direction within it regarding what not to do. This topic was not extensively covered, and content on managing errors was omitted because of the additional viewing time it would have required; however, the participants’ feedback was informative, and the inclusion of simple, clear directions on how to avoid and manage common errors is worthy of consideration in the future.

While concerns may exist about learners’ ability to process verbal expert feedback in clinical settings, given the cognitive limitations imposed by both the amount of sensory input and the intrinsic cognitive load of learning a new skill,10,26 participants in this study who received verbal feedback ultimately outperformed participants assigned to the SD feedback group. A recent investigation evaluating the effects of instructional formats on veterinary student task performance and emotional state during skills training in a simulation environment found student self-reported anxiety to be low.24 While our study did not evaluate the impact of instructors on anxiety, speculatively and subjectively, our participants appeared to show low levels of anxiety. Interestingly, low anxiety levels have also been reported in medical students receiving either supervised or unsupervised video feedback.27 For learners in a low-stress environment, such as a simulated setting with no evaluative component, the current study demonstrated that one-on-one interactions with an expert provide an ideal environment for verbal feedback, at least for short-term acquisition of a basic skill such as IV catheter placement. As noted by some participants during their interviews, this situation permitted dialogue, including explanations and responses to inquiries, between the expert and participant. As noted by Cosford and colleagues, the low state of anxiety experienced by students in a simulation environment may differ from their anxiety level in a learning environment involving live animals.24 Whether our findings related to feedback strategy in a simulation setting translate to a live animal learning environment remains to be determined and warrants exploration.

Similar to previous investigations, participants in this study identified their preference for individual instructor-directed feedback over self-directed feedback.27,28 Participants assigned to the instructor-feedback group in the current study received up to 20 minutes of individualized feedback from an expert. This format of instruction would be challenging to reproduce on a large scale in practice with many learners; however, it does provide us with a theoretical gold standard with which to compare alternative learning strategies in the future. Several investigators have demonstrated the beneficial effects of self-assessment of video performance on skill acquisition.15,18,29 Nesbitt and colleagues found the use of technology without instructor feedback to be comparable to instructor-directed feedback for acquiring basic suturing skills in novice learners.15 It is possible that the skill we evaluated in the current investigation was more complex and therefore more challenging for novice learners to navigate than the task evaluated in the suturing study. Other strategies that could have improved the outcome of our SD group to levels equivalent to those observed in our ID feedback group include (a) providing greater opportunity for learners to review their performance, for example, over a longer period of time; (b) providing learners with an evaluation template; and/or (c) providing learners with expanded learning material that highlights common errors and mistakes, as recommended by participants and discussed above.15,16 The inclusion of such content in the learning material, referred to as “hints and tips,” was described by Nesbitt and colleagues and resulted in equivalence in skill performance among novice medical students.15(p.698) The learners’ lack of familiarity with self-directed learning using self-assessment of video performance might have also contributed to the suboptimal performance by the participants in the SD group in this study. 
Providing learners with more explicit directions on how to self-assess their performance relative to a gold standard may have resulted in more effective self-directed learning. For example, an evaluation tool or checklist, as described by Al-Jundi and colleagues,16 to guide learners in their self-assessment could have led to improved evaluation of performance, particularly if the learning material explicitly demonstrated different levels of performance. Alternatively, strategies that incorporate peer feedback into skill training using learning material with explicit performance criteria warrant exploration in this context.30

Student perceptions regarding the models used for this study were explored to determine their level of engagement with the tools and to generate suggestions for improvement. Despite the simplicity of the model used in this study, the students expressed positive comments regarding the use of this tool for acquisition of introductory skills, a finding consistent with those of other studies evaluating low-fidelity models for skills training in veterinary medicine.31,32 Our results show that use of this basic training model improved student confidence and competence in IV catheterization skills with both instructor- and self-directed feedback, providing a basis for further exploration into feedback strategies that ensure acquisition and retention of skill performance in novice learners.

1. Matthew SM, Bok HGJ, Chaney KP, Read EK, Hodgson JL, Rush BR, et al. Collaborative development of a shared framework for competency-based veterinary education. J Vet Med Educ. 2020;47(5):578–93. https://doi.org/10.3138/jvme.2019-0082.
2. Ericsson KA. Deliberate practice and acquisition of expert performance: a general overview. Acad Emerg Med. 2008;15(11):988–94. https://doi.org/10.1111/j.1553-2712.2008.00227.x.
3. Hattie J, Timperley H. The power of feedback. Rev Educ Res. 2007;77:81–112. https://doi.org/10.3102/003465430298487.
4. Van Dinther M, Dochy F, Segers M. Factors affecting students’ self-efficacy in higher education. Educ Res Rev. 2011;6(2):95–108. https://doi.org/10.1016/j.edurev.2010.10.003.
5. Colthart I, Bagnall G, Evans A, Allbutt H, Haig A, Illing J, et al. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME guide no. 10. Med Teach. 2008;30(2):124–45. https://doi.org/10.1080/01421590701881699.
6. Boud D, Molloy E. What is the problem with feedback? In: Boud D, Molloy E, editors. Feedback in higher and professional education: understanding it and doing it well. New York: Routledge; 2013. p. 1–10.
7. Jensen AR, Wright AS, Kim S, Horvath KD, Calhoun KE. Educational feedback in the operating room: a gap between resident and faculty perceptions. Am J Surg. 2012;204(2):248–55. https://doi.org/10.1016/j.amjsurg.2011.08.019.
8. Archer JC. State of the science in health professional education: effective feedback. Med Educ. 2010;44(1):10–18. https://doi.org/10.1111/j.1365-2923.2009.03546.x.
9. Weidman J, Baker K. The cognitive science of learning: concepts and strategies for the educator and learner. Anesth Analg. 2015;121(6):1586–99. https://doi.org/10.1213/ANE.0000000000000890.
10. Sewell JL, Boscardin CK, Young JQ, Ten Cate O, O’Sullivan PS. Learner, patient, and supervisor features are associated with different types of cognitive load during procedural skills training: implications for teaching and instructional design. Acad Med. 2017;92(11):1622–31. https://doi.org/10.1097/ACM.0000000000001690.
11. Sweller J. Cognitive load during problem solving: effects on learning. Cogn Sci. 1988;12:257–85. https://doi.org/10.1207/s15516709cog1202_4.
12. van Merriënboer JJ, Sweller J. Cognitive load theory in health professional education: design principles and strategies. Med Educ. 2010;44(1):85–93. https://doi.org/10.1111/j.1365-2923.2009.03498.x.
13. Mayer RE. Applying the science of learning to medical education. Med Educ. 2010;44(6):543–9. https://doi.org/10.1111/j.1365-2923.2010.03624.x.
14. Fraser K, Ma I, Teteris E, Baxter H, Wright B, McLaughlin K. Emotion, cognitive load and learning outcomes during simulation training. Med Educ. 2012;46(11):1055–62. https://doi.org/10.1111/j.1365-2923.2012.04355.x.
15. Nesbitt CI, Phillips AW, Searle RF, Stansby G. Randomized trial to assess the effect of supervised and unsupervised video feedback on teaching practical skills. J Surg Educ. 2015;72(4):697–703. https://doi.org/10.1016/j.jsurg.2014.12.013.
16. Al-Jundi W, Elsharif M, Anderson M, Chan P, Beard J, Nawaz S. A randomized controlled trial to compare e-feedback versus “standard” face-to-face verbal feedback to improve the acquisition of procedural skill. J Surg Educ. 2017;74(3):390–7. https://doi.org/10.1016/j.jsurg.2016.11.011.
17. Engum SA, Jeffries P, Fisher L. Intravenous catheter training system: computer-based education versus traditional learning methods. Am J Surg. 2003;186(1):67–74. https://doi.org/10.1016/S0002-9610(03)00109-0.
18. Phillips AW, Matthan J, Bookless LR, Whitehead IJ, Madhavan A, Rodham P, et al. Individualised expert feedback is not essential for improving basic clinical skills performance in novice learners: a randomized trial. J Surg Educ. 2017;74(4):612–20. https://doi.org/10.1016/j.jsurg.2016.12.003.
19. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101. https://doi.org/10.1191/1478088706qp063oa.
20. Kamleh M, Khosa DK, Verbrugghe A, Dewey CE, Stone E. A cross-sectional study of pet owners’ attitudes and intentions towards nutritional guidance received from veterinarians. Vet Rec. 2020;187(12):e123. https://doi.org/10.1136/vr.105604.
21. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094–102. https://doi.org/10.1001/jama.296.9.1094.
22. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol. 1999;77(6):1121–34. https://doi.org/10.1037/0022-3514.77.6.1121.
23. Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med. 2009;4(7):397–403. https://doi.org/10.1002/jhm.468.
24. Cosford K, Briere J, Ambros B, Beazley S, Cartwright C. Effect of instructional format on veterinary students’ task performance and emotional state during a simulation-based canine endotracheal intubation laboratory: handout versus video. J Vet Med Educ. 2020;47(2):239–47. https://doi.org/10.3138/jvme.0618-077r1.
25. Langebæk R, Nielsen SS, Koch BC, Berendt M. Student preparation and the power of visual input in veterinary surgical education: an empirical study. J Vet Med Educ. 2016;43(2):214–21. https://doi.org/10.3138/jvme.1015-164R.
26. Szulewski A, Howes D, van Merriënboer JJG, Sweller J. From theory to practice: the application of cognitive load theory to the practice of medicine. Acad Med. 2021;96(1):24–30. https://doi.org/10.1097/ACM.0000000000003524.
27. Matthan J, Gray M, Nesbitt CI, Bookless L, Stansby G, Phillips A. Perceived anxiety is negligible in medical students receiving video feedback during simulated core practical skills teaching: a randomised trial comparing two feedback modalities. Cureus. 2020;12(3):e7486. https://doi.org/10.7759/cureus.7486.
28. Nesbitt C, Phillips AW, Searle R, Stansby G. Student views on the use of 2 styles of video-enhanced feedback compared to standard lecture feedback during clinical skills training. J Surg Educ. 2015;72(5):969–73. https://doi.org/10.1016/j.jsurg.2015.04.017.
29. Farquharson AL, Cresswell AC, Beard JD, Chan P. Randomized trial of the effect of video feedback on the acquisition of surgical skills. Br J Surg. 2013;100(11):1448–53. https://doi.org/10.1002/bjs.9237.
30. Dooley LM, Bamford NJ. Peer feedback on collaborative learning activities in veterinary education. Vet Sci. 2018;5(4):90. https://doi.org/10.3390/vetsci5040090.
31. Lumbis RH, Gregory SP, Baillie S. Evaluation of a dental model for training veterinary students. J Vet Med Educ. 2012;39(2):128–35. https://doi.org/10.3138/jvme.1011.108R.
32. Greenfield CL, Johnson AL, Schaeffer DJ, Hungerford LL. Comparison of surgical skills of veterinary students trained using models or live animals. J Am Vet Med Assoc. 1995;206(12):1840–5.