
Veterinary ultrasonography is a complex, advanced skill requiring repetitive exposure and supervision to gain competence. Consequently, newly graduated veterinarians are underprepared and lack the resources to achieve basic ultrasound proficiency upon graduation. Ultrasound simulation has been proposed as an adjunct educational tool for teaching entry-level ultrasound skills to student veterinarians. The objectives of this multicentric prospective observational cohort study were to describe the development of a novel ultrasound training model, establish the model’s construct and face validity, and obtain participant feedback. The model was constructed using three-dimensional silicone shapes embedded in ballistics gel within a glass container. A novice cohort of 15 veterinary students and an expert cohort of 14 participants were prospectively enrolled in the study. Each cohort underwent training and assessment phases using a simulation model. Participants were asked to (a) determine shape location, (b) identify shape type using a shape bank, and (c) measure shape axes using the caliper tool. The time for each phase was recorded. Anonymous post-participation survey feedback was obtained. For most shapes (4/6), experts performed significantly better than novices in identifying shape type and location. Generally, no significant difference was found in mean axis shape measurements between cohorts or compared with the true mean axis measurements. No significant difference was found in scan time for either phase. These results support the validation of this ultrasound simulation model and may provide early evidence for its use as a training tool in the veterinary curriculum to teach entry-level ultrasound skills.

As with many technical procedures, ultrasound is an advanced skill requiring repetitive exposure to obtain proficiency, and its diagnostic utility depends on operator skill level.1–3 With reliance on ultrasound increasing among medical specialties,2,3 there is a greater expectation for veterinary students to demonstrate basic ultrasound proficiency upon graduation. Despite this, many students are unfamiliar with the modality and feel they lack the resources to achieve basic ultrasound proficiency by graduation.4,5 Providing students with adequate instruction in ultrasonography is challenging given the long learning curves,6 the ethical dilemma of learning new skills on live patients, and the fact that supervised hands-on training is resource-intensive at a time when the specialty is experiencing an acute shortage of academic radiologists.7–9

One solution to the increased demand for ultrasound training and for proper quality assessment and control is the integration of simulation technology into the curriculum.10 Simulation allows repetitive, goal-directed practice in a controlled, low-stakes environment. Besides ensuring equal exposure to training opportunities, simulation reduces the need for instructor supervision in resource-limited settings. It allows evaluation against predetermined outcomes and yields reproducible metrics. Finally, by reducing the cognitive burden on learners, simulation enhances subsequent learning in a clinical scenario.10,11

For a learner to acquire new ultrasound skills and effectively transfer them to a clinical scenario, valid and sound evidence-based ultrasound simulation models must exist.8 Regardless of model fidelity, an examination of the validity and reliability of a simulation model is important to ensure that it teaches what it intends to and that its outcome assessments match the curriculum’s training objectives for a particular level.8,12,13 Unlike in other specialties, no standardized simulator for teaching and evaluating ultrasound proficiency exists within veterinary sonography education. Further, the models that do exist seldom demonstrate proper model validation.13–15 Thorough validation of a model and its metrics is challenging, requiring careful problem solving and trial and error to predict which ultrasound-based tasks tested in vitro will effectively translate into improved patient care in a clinical setting. Model use without prior validation can lead to premature adoption of inefficacious material into the curriculum, which could consume valuable teaching time and compromise long-term patient safety.8,13

As the demand to improve patient safety and veterinary trainee competence in ultrasound increases, so will the need to produce pertinent and valid simulation models through high-caliber research. In response, this study describes the construction and validation of a novel, cost-effective, and accessible bench-side ultrasound simulation model for teaching entry-level ultrasound skills to student veterinarians. The objectives of this study are to establish construct and face validity and to obtain participant feedback, with the ultimate long-term goal of demonstrating skill transfer from the simulation model to a clinical scenario, thus improving patient care. We hypothesized that experts would outperform novices in their ability to identify shape location and type within the ultrasound simulation model, obtain more accurate shape measurements, and spend less time scanning the model in both phases. Additionally, we hypothesized that the model would be well received by participants.

Model Development

Commercially made silicone candy molds of various geometric shapes (Figure 1A) were purchased from individual online vendors through Etsy (Brooklyn, NY, USA).a Each mold was rinsed with warm soapy water, dried, and then sprayed with Ease Release™ 200 (Mann™, PA, USA) 30 minutes prior to use. Silicone was prepared per package instructions for Ecoflex™ 00–30 (Smooth-On, Macungie, PA, USA). The mixture was placed in a USV-9B sealed pump degasser system with a 5-gallon Smooth-On vacuum chamberb (Figure 1B) for 2–3 minutes at a negative pressure of 29 inches of mercury to complete the degassing process. The mixture was poured into the desired candy molds in a laminar manner to prevent reintroduction of gas bubbles. The shapes were cured at room temperature for 24 hours, manually demolded, and washed again. The shapes were placed on a standard baking sheet, and an optional post-curing process was performed, which consisted of heating the shapes in a commercial oven to 80 °C (176 °F) for 2 hours and then 100 °C (212 °F) for an additional hour (Figure 1C).

Figure 1: (A) Candy molds used to make 3D silicone shapes; (B) a USV-9B pump degasser with a 5-gallon vacuum chamber used to remove air from the high viscosity silicone liquid rubber prior to use; (C) various silicone geometric shapes post-curing process

Cubes of 10% ballistics gel (Clear Ballistics, Greenville, SC, USA) were placed in an oven-safe glass container (Figure 2A), which was heated to 148 °C (300 °F) in a 22-quart slow cooker (Hamilton Beach, Glen Allen, VA, USA, model number 32229R) until a liquid state was achieved, approximately 45 minutes. The silicone shapes were then distributed into the liquid gel. The temperature was maintained at 148 °C (300 °F) until bubble tracks dissipated, and the model was then allowed to set at room temperature. If necessary, the model was reheated at a low setting for an additional 30–60 minutes to release persistent gas bubbles. The training ultrasound simulation model was complete after cooling (Figure 2B, left side). To create an opaque model, as used in the assessment phase of the study (Figure 2B, right side), SO-Strong™ PMS 148C flesh-colored tint urethane dye (Smooth-On, Macungie, PA, USA) was added to the ballistics gel in its liquid state. The desired color and opacity were achieved by transferring dye from its container to the gel with the tip of a toothpick and mixing thoroughly with a spoon.

Figure 2: (A) Hamilton Beach slow cooker housing glass dish and ballistics gel in its solid state prior to melting; (B) silicone shapes embedded in ballistics gel forming a clear practice ultrasound simulator (left) and opaque assessment ultrasound simulator (right)

Participant Recruitment

The novice cohort consisted of 15 fourth-year veterinary students at the Ontario Veterinary College who were rotating through their core diagnostic imaging rotation during the data collection period. Novices were eligible to participate in the study if they were (a) in the final year of their Doctor of Veterinary Medicine (DVM) program, and (b) had undergone at least 1 week of their diagnostic imaging rotation. Novices were excluded if they had any formal ultrasound training or prior ultrasound experience outside of the DVM curriculum, which up to that point had been theory and observation exclusively.

The expert cohort consisted of American College of Veterinary Radiology (ACVR) board-certified radiologists (n = 10), ACVR residents (n = 2), and American College of Veterinary Internal Medicine specialists who were primarily responsible for performing diagnostic ultrasound within their hospital (n = 2). Some expert data were collected at the Ontario Veterinary College; however, the primary investigator (JW) also traveled to three other institutions to recruit expert participants externally: North Carolina State University College of Veterinary Medicine, Université de Montréal Faculty of Veterinary Medicine, and Veterinary Specialists and Emergency Services (Rochester, NY). Experts were eligible to participate if they were (a) ACVR board-certified, (b) a member of a college where formal training in ultrasound was provided, or (c) a radiology resident with more than 1 year of ultrasound training. Experts were excluded if they (a) were radiology residents with less than 1 year of ultrasound training, (b) practiced in a specialty that does not regularly use ultrasound, or (c) did not receive formal ultrasound training as part of their residency. No incentives to participate were offered, and no participants dropped out.

Equipment

Novice participants used a dedicated ultrasound machine (iU22, Philips Healthcare, Bothell, WA, USA) and a 5–8 MHz curvilinear transducer. The image was optimized to a standard depth (3–4 cm), focus (2–3 cm), and gain (Figure 3A). Expert participants were instructed to use the ultrasound machine with which they were most familiar and that they used daily. Machines used included the iU22, the Aplio 500 (Toshiba Medical Systems Corporation, Tochigi-ken, Japan), and the SonoSite M-Turbo ultrasound system (Fujifilm SonoSite, Inc., Bothell, WA, USA).

Figure 3: (A) Still image of a rectangular silicone geometric shape in the assessment ultrasound simulation model. The image was captured on an iU22 Philips ultrasound using standardized settings for novice participants; (B) A clear practice ultrasound simulation model and iU22 Philips ultrasound machine, as seen in the training phase of the study

The model was placed on a blue absorbent pad and oriented in a standard way on a table to achieve maximal ergonomics and to mimic scanning a live patient (Figure 3B). A 50:50 mixture of sterile lubricant jelly and warm water was applied to the model surface. Lights were dimmed to avoid glare on the monitor. Participants could sit on an adjustable stool or stand and were asked to hold the probe in their dominant hand.

Study Design

This was a prospective non-randomized cohort study. Data collection occurred from October 2019 to January 2020. This study was approved by the Research Ethics Board at the University of Guelph (REB #19-04-010). Novice and expert participants who provided written and verbal consent to participate and who met the inclusion criteria (as described above) were enrolled in the study. All participants underwent training and assessment phases of the study followed by an anonymous post-participation survey.

Training Phase

Instructional videos were sent electronically to participants for review prior to commencing the study. Both cohorts received a brief introductory video reviewing the ultrasound simulation model and study tasks.c In addition, novices received a more detailed video reviewing ultrasound basics, key machine functions, and the skills necessary to complete the study tasks.d Topics covered in this video included probe orientation (scanning and fanning), shape axis identification, track pad changes, and measurement with the caliper tool. The second instructional video was not shared with experts because of their prior expertise in the field. Participants were then provided with an ultrasound simulation model consisting of clear ballistics gel and embedded silicone shapes of varying sizes and orientations (Figure 4A), such that the shapes could be directly visualized through the gel. Participants were given up to 20 minutes to practice using the ultrasound machine, scanning the model, and answering questions from a mock assessment with a format identical to that of the assessment phase (Figure 4B). The training-phase handout was pre-filled to demonstrate the expected format for final responses in the assessment phase. Tasks are explained under “Assessment Phase” (below). Participants were permitted to ask logistical questions throughout the training phase. The amount of training time each participant requested was recorded.

Figure 4: (A) Clear practice ultrasound simulation model, and (B) the handout provided to participants in the practice phase demonstrating how to indicate their responses in the assessment phase

Notes: Participants were allocated up to 20 minutes to practice in the training phase in preparation for the final assessment phase of the study. In the practice phase, shapes D and G were too long to obtain long axis measurements using the caliper tool (as indicated by N/A); they nonetheless remained in the model for practice. All shapes in the testing model could be measured accurately using the caliper tool.

Assessment Phase

Following the training phase, participants were provided with a new but similar model (Figure 5A), which varied with respect to number, size, and location of shapes. Additionally, the model was opaque, thus removing the visual cue of seeing the embedded shapes. Participants had no prior knowledge of shape type or number. The assessment model contained six shapes: a cube, hemisphere, cone, heart, arrow, and torus (Figure 5B). As in the training phase, a handout was provided to participants prompting them to complete three tasks:

Figure 5: (A) Opaque ultrasound simulation model used in the assessment phase of the study; and (B) answer key depicting location and orientation of shapes embedded in the opaque ultrasound simulation model, which was provided to participants after study completion

  • Task 1—Identify shape location. A diagram that correlated with model dimensions was provided. Participants were asked to draw a circle on the diagram at the approximate corresponding location to where a shape was identified in the model.

  • Task 2—Identify shape type. Participants were asked to select a shape from a 15-shape bank (Figure 6) that best represented what they were scanning. Each shape was assigned a corresponding alphabetic letter, A–O, which participants placed inside the circle they drew in Task 1.

  • Task 3—Obtain shape axis measurements. Using the caliper tool, participants were asked to obtain a short axis measurement and the longest shape axis measurement they could identify. Within the previously drawn circle (Task 1) and next to the shape letter (Task 2), participants were asked to indicate their axis measurements in centimeters (cm) to 1 decimal place.

Figure 6: Shape bank

Note: During the assessment phase, participants were asked to select the shape they felt best correlated with what they were scanning on the simulation model by indicating the corresponding letter on the answer key.

After the assessment, participants were given an answer key and were permitted to re-scan the model and ask questions.

Post-Participation Survey

A brief anonymous written survey was provided to participants immediately upon study completion. The survey contained visual analog scale (VAS) and short open-text questions. To reflect relevance and expertise level, cohort-specific VAS survey questions were created. Examples of questions included the following: “I would have benefited from using this ultrasound training model as a novice ultrasonographer in veterinary school” for expert cohorts, and “My confidence using ultrasound has improved using this model” for novice cohorts. Short-answer questions sought feedback on the strengths and weaknesses of the model, ease of use, and whether participants felt the model would be useful as a basic skills training model in the context of a clinical skills laboratory. A thematic analysis was not conducted.

Data Analysis

Participant performance was assessed on a binary scale, with 1 indicating successful identification of a shape and 0 indicating failure. No points were allocated if either shape type or shape location was misidentified. Shapes had to be located within the correct quadrant with appropriate orientation to other shapes on the diagram. Total points were tallied for each participant in the assessment phase. Axis measurements were recorded and compared between cohorts and individually against the true axis measurements. True axis measurements were determined by a board-certified radiologist (AZ) who had no prior knowledge of shape type and who measured the shapes ultrasonographically using the caliper tool. The time to complete the training and assessment phases was recorded. Participant identities and skill levels were blinded to the single evaluator (JW) by removing consent forms with direct identifiers.
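As a concrete illustration of the scoring scheme described above, the minimal sketch below shows how a single response could be scored and its long axis error computed. It is not the authors’ code; the shape letters, answer key, quadrant labels, and measurement values are hypothetical.

```python
# Minimal scoring sketch (not the authors' code). All values are hypothetical.

TRUE_LONG_AXIS_CM = {"D": 1.7, "A": 3.3, "E": 2.2}  # hypothetical "true" axis values

def score_response(response, answer_key):
    """Return 1 only if both shape type and quadrant location match the key, else 0."""
    return int(response["shape"] == answer_key["shape"]
               and response["quadrant"] == answer_key["quadrant"])

def long_axis_error_cm(response, true_values):
    """Signed difference (cm) between the measured and true long axis."""
    return round(response["long_axis_cm"] - true_values[response["shape"]], 2)

# Hypothetical single response scored against a hypothetical answer key
key = {"shape": "A", "quadrant": "upper left"}
resp = {"shape": "A", "quadrant": "upper left", "long_axis_cm": 3.5}
print(score_response(resp, key))                     # 1 (shape and location both correct)
print(long_axis_error_cm(resp, TRUE_LONG_AXIS_CM))   # 0.2 (overestimate by 0.2 cm)
```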

Statistical Analysis

Statistical analyses were conducted using SAS procedures (SAS Institute Inc., v. 9.4, Cary, NC, USA) at a two-sided .05 level of significance. To assess performance between cohorts in their ability to identify shape type and location, Fisher’s exact test was conducted, and conditional maximum likelihood estimates of the odds ratios and exact 95% Sterne limits16 were provided. If estimates were zero or infinite, the median unbiased estimates were reported.17 To compare long axis shape measurements between cohorts, Proc MIXED in SAS was used. Assumptions of the model were assessed via residual analysis. The residuals were formally tested for normality using the four tests offered by SAS: Shapiro–Wilk, Kolmogorov–Smirnov, Cramér–von Mises, and Anderson–Darling. To check for apparent unequal variance or outliers and to see if the data needed transformation, the residuals were plotted against the treatments. If transforming the data was not effective, a non-parametric Mann–Whitney–Wilcoxon rank test was employed. To determine if the mean axis measurements differed from the true axis measurements, a one-sample t-test was performed. Data from both treatments were used to provide variance estimates, but only one mean was tested. The Proc MIXED procedure, as described above, was used to determine if time spent scanning the model differed between cohorts in each phase. Due to data censoring in the assessment phase time results, Proc LIFETEST was used. The log-rank test was performed, and estimates of the lower, middle, and upper quartiles are provided. For survey question 5, a Mann–Whitney–Wilcoxon rank test was employed to assess the differences in opinions between cohorts regarding desire to integrate the simulation model into the diagnostic imaging curriculum.
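The analyses above were run in SAS; for readers who wish to experiment, the sketch below shows rough Python/scipy equivalents of the simpler tests (Fisher’s exact test, the Mann–Whitney–Wilcoxon rank test, and a one-sample t-test) on hypothetical data. Note that scipy’s fisher_exact reports the sample odds ratio rather than the conditional maximum likelihood estimate used in the study, and the mixed-model and censored-time analyses (Proc MIXED, Proc LIFETEST) are not reproduced here.

```python
# Illustrative sketch only; all counts and measurements are hypothetical, not study data.
from scipy import stats

# 2x2 table of correct/incorrect shape identifications
# (rows: expert, novice; columns: correct, incorrect).
odds_ratio, p_fisher = stats.fisher_exact([[8, 6], [2, 13]], alternative="two-sided")

# Mann-Whitney-Wilcoxon rank test comparing long axis measurements (cm)
# between cohorts when normality assumptions fail.
expert_cm = [2.1, 2.0, 2.3, 1.9, 2.2]
novice_cm = [2.6, 2.8, 2.4, 2.7, 2.5]
u_stat, p_mwu = stats.mannwhitneyu(expert_cm, novice_cm, alternative="two-sided")

# One-sample t-test of measured long axes against a hypothetical true value of 2.0 cm.
t_stat, p_t = stats.ttest_1samp(expert_cm, popmean=2.0)

print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p_fisher:.3f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_mwu:.3f}")
print(f"One-sample t = {t_stat:.2f}, p = {p_t:.3f}")
```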

Shape Identification

Experts identified shapes significantly better than novices in all cases except shapes E and C, the cone and the torus (Table 1). No novices successfully identified shape L, the heart. The shape that discriminated best between cohorts was shape A, the hemisphere, with the odds of an expert identifying it 17.4 times higher than those of a novice (Table 1). Shape C, the torus, was identified by the greatest number of participants (14 experts, 13 novices).

Table 1: Percentage of shapes correctly identified by participants separated into experts and novices

Shape bank letter  Shape name  Probability of success, expert (%)  Probability of success, novice (%)  OR (E/N)  95% CI  p
D  Cube        57.14   13.33  7.966   1.226–66.685    .021*
A  Hemisphere  92.86   40.00  17.401  2.041–451.610   .005*
E  Cone        92.86   60.00  8.070   0.915–210.75    .080
L  Heart       35.71   0.00   9.699   1.590–infinity  .017*
O  Arrow       71.43   26.67  6.363   1.140–35.764    .027*
C  Torus       100.00  86.67  2.366   0.273–infinity  .483

OR = odds ratio; E = expert; N = novice; CI = confidence interval

Notes: Shape bank letters D, A, E, L, O, and C correspond with their respective shape names and three-dimensional depictions.

* p < .05

† Conditional maximum likelihood estimates of the ORs with coinciding Sterne confidence intervals are provided unless intervals are infinite or zero, in which case the median unbiased estimates are provided.

H0: OR = 1

Shape Measurements

Mean axis measurements did not differ significantly between cohorts (Table 2). Shape A, the hemisphere, was the only exception (p < .03, Mann–Whitney–Wilcoxon test); however, neither cohort’s measurements for this shape differed significantly from the true mean axis measurements (Table 2). For shape E, the cone, and shape D, the cube, novice measurements differed significantly from the true long axis measurements. For the cone, expert measurements also differed significantly from the true axis measurements, whereas for the cube they did not (Table 3). Although the torus was the easiest shape for both cohorts to find and identify, both cohorts measured it similarly inaccurately when compared with the true mean axis measurements (Table 3). No novice participants successfully identified shape L, the heart. Measurements from the expert participants who did identify shape L did not differ significantly from the true long axis measurements (Table 3).

Table 2: Mean difference (E–N) of long axis shape measurements (cm) between cohorts for shapes D, E, O, and C

Shape bank letter  Shape name  Expert M estimate (cm)  Novice M estimate (cm)  M difference (cm)  95% CI  p (two-tailed)
D  Cube   2.063  2.650  −0.588  −1.399 to 0.223  .133
E  Cone   2.177  2.500  −0.323  −0.662 to 0.016  .060
O  Arrow  3.733  3.825  −0.092  −0.673 to 0.490  .735
C  Torus  3.700  3.554  0.146   −0.372 to 0.664  .567

E = expert; N = novice; M = mean; CI = confidence interval

Notes: Mean axis measurement significantly differed between cohorts for shape A (p < .03, Mann–Whitney–Wilcoxon test); however, it is not included in the table due to non-normal data. Data for shape L were omitted as no novice participants correctly identified it.

Table 3: Mean difference (measured–true values) of long axis shape measurements (cm) for expert and novice cohorts

Shape bank letter  Shape name  Experience (n)  M difference (cm)  95% CI  p  Measured M (cm)
D  Cube        Expert (n = 8)   0.363   −0.0003 to 0.725  .050*  2.063
D  Cube        Novice (n = 2)   0.950   0.225 to 1.676    .017*  2.650
A  Hemisphere  Expert (n = 13)  −0.062  −0.280 to 0.157   .562   3.239
A  Hemisphere  Novice (n = 6)   0.086   −0.213 to 0.384   .554   3.386
E  Cone        Expert (n = 13)  −0.023  −0.232 to 0.186   .820   2.177
E  Cone        Novice (n = 9)   0.300   0.034 to 0.566    .029*  2.500
L  Heart       Expert (n = 5)   0.220   −0.231 to 0.671   .247   5.330
O  Arrow       Expert (n = 10)  −0.267  −0.589 to 0.056   .096   3.733
O  Arrow       Novice (n = 4)   −0.175  −0.659 to 0.309   .443   3.825
C  Torus       Expert (n = 14)  −0.400  −0.760 to −0.040  .031*  3.700
C  Torus       Novice (n = 13)  −0.546  −0.920 to −0.173  .006*  3.554

E = expert; N = novice; CI = confidence interval

Notes: Novice participant data for shape L are omitted from the table as no novice participants correctly identified the shape.

* p < .05

Scanning Time

A secondary outcome examined was scanning time. No significant difference in scan time was found for either phase. In the training phase, the median scanning time was 5 minutes and 30 seconds for novices (95% CI = 237–459 seconds) and 3 minutes and 57 seconds for experts (95% CI = 168–334 seconds). Because many participants (5 novices and 7 experts) took the maximum allowable time in the assessment phase, the assessment-phase times were right-censored, which limits their interpretation.

We hypothesized that, owing to their lack of clinical experience and theoretical knowledge, novices would spend more time scanning in each phase of the study; this was not the case. Anecdotally, although expert scanners were already familiar with the principles of ultrasonography, they tended to spend extra time scanning in the training phase to familiarize themselves with the model before proceeding to the assessment phase. Even though many experts finished the assessment phase with time to spare, many re-scanned the model afterward to check that they had answered the questions to the best of their abilities. In contrast, many novice participants spent the majority of the time completing the required tasks, which left little time to check their work at the end.

Survey Results

“This ultrasound training model should be incorporated into the veterinary school curriculum” was the only visual analog scale survey question common to both cohorts. Both groups supported the model’s integration into the diagnostic imaging curriculum, with novices being slightly more enthusiastic: mean scores were 7.0 for experts and 9.5 for novices. Novices had a significantly higher mean rank (19.37) than experts (10.32), and their means and medians tended to be higher (p = .003). A single extreme low score pulled down the expert group’s mean.

Survey question results were used to gauge general responses to the simulation model and to guide the authors in future model development and research. Open-text survey questions provided model feedback, and example quotes from the anonymous post-participation survey are summarized in Tables 4 and 5. Responses were generally very positive across participant groups. Novices identified the ability to acquire basic skills in a low-stakes environment as a major benefit of the simulator. Participants in both cohorts commented that the model was an appropriate starting point for learning to recognize different tissue echogenicity and shapes, which could ultimately help with interpreting more complex structures and lesions in a clinical setting. Novices commented that the exercise provided a unique opportunity to strengthen their visuospatial capabilities. Although the purpose of this study was model validation rather than teaching, many novices commented that they learned a great deal from the instructional video and from participating in the study. Upcoming model prototypes will aim to strengthen model validity after considering survey feedback.

Table 4: Summary of anonymous post-participation survey, open-ended questions, and answers pertaining to ultrasound simulation model use for expert participants

1. How do you think this training model could be improved?
• “Using structures more similar to normal anatomy may be more useful than abstract shapes. I’m not sure the benefit of being able to identify a star vs. an arrow.”
• “Scanning a round surface would be more desirable.”
• “Consider less clustering of shapes in gel.”
• “Try to decrease the number of gas bubbles in the ballistic gel.”

2. With respect to teaching basic ultrasound skills, what are the strengths of this ultrasound training model?
• “Good integration of basic ultrasound skills and cognitive function.”
• “Helps learners locate structures, lesions, interfaces, and recognize different echogenicity. Helps with familiarity of ultrasound machine and probe.”
• “Good development of visuospatial skills, fun, no animals needed, safe, easy, cheaper than other models, easy for a single person to do on their own without help.”
• “Forces 3D shape assessment and mental rotation.”

3. With respect to teaching basic ultrasound skills, what are the weaknesses of this ultrasound training model?
• “Can’t image dorsal/third plane.”
• “Some of the shapes may be difficult to identify for entry-level students.”
• “Would like to see objects at varying depths and densities.”
• “The shapes proposed in the shape bank are sometimes too close to each other to determine precisely on ultrasound what they are.”
• “Distal acoustic shadowing and gas artifacts sometimes hinder thorough shape evaluation.”

4. Can you comment on the fidelity (likeness to real world) of the model in regard to appearance, feel, or other?
• “We see a wide variety of shapes in clinical practice which are not representative in the model.”
• “Close enough for practice.”
• “It is firmer than a live animal.”
• “Good simulation of soft tissues.”

5. State any additional comments.
• “Great experience, I do believe it should be incorporated into all senior rotations in diagnostic imaging accompanied with a model for ultrasound-guided aspirates.”
• “Great idea—helpful for basic training.”
• “This would be great to teach the basics of ultrasound (physics, etc.) but would need to be modified to help with interpretation of echogenicity.”
• “This was fun!”

Table 5: Summary of anonymous post-participation survey, open-ended questions, and answers pertaining to ultrasound simulation model use for novices

1. What was the hardest part of the study?
• “Learning spatial orientation and converting 2D images into 3D shapes.”
• “Shape identification.”
• “Determining axis measurements.”
• “Identifying star, arrow, and heart shapes.”

2. What was the most rewarding part of the study?
• “Being able to confidently identify/find a shape.”
• “Getting more ultrasound practice, familiarizing myself with the ultrasound machine.”
• “Having the opportunity to use the ultrasound probe, which is a much better experience than reading/talking about ultrasound.”
• “Seeing the answer key.”

3. With respect to teaching basic ultrasound skills, what are the strengths of this ultrasound training model?
• “Controlled, no live models, work at your own pace, no anesthesia or sedation necessary.”
• “Allowed improved machine familiarity, probe handling, and prompted problem solving.”
• “Brings ultrasound down to a basic level that is accessible to students.”
• “An opportunity to go back to ultrasound basics and improve skills with a stationary (non-moving patient) structure.”

4. With respect to teaching basic ultrasound skills, what are the weaknesses of this ultrasound training model?
• “Not a real patient, not real organ shapes in the model.”
• “You’re only evaluating shapes at a uniform depth.”
• “No guidance from an expert makes it harder to learn.”
• “Some of the shapes were a little close together and the probe didn’t move as smoothly as I thought it would.”
• “There was some confusion as to which axis I should be measuring.”
• “Glass artifact of container.”
• “No weaknesses.”

5. Would you prefer learning ultrasound basics from a model vs. textbook vs. observing someone else perform the task?
• “Model.” (14/16 participants)
• “Model and observing combined.” (2/16 participants)
• “I’ve read textbooks and observed and it’s just not the same unless you’re actually doing it yourself.”

6. Please write any additional comments you have.
• “Would love to have this incorporated into curriculum (fourth year and earlier).”
• “Why don’t we have one of these models to practice with during the day?”
• “A version of this model should definitely be implemented into phase 4 rotations and/or phase 3 radiology. We need this model at school!!!!”
• “I think it’s great you all are trying to improve the ultrasound training curriculum. As a soon-to-be-DVM, I feel very uncertain about ultrasound.”
• “I have finally grasped the concept of imaging in different planes which has been a learning issue up until this point.”
• “This was so fun.”

2D = two-dimensional; 3D = three-dimensional; DVM = Doctor of Veterinary Medicine

We created a novel ultrasound simulation model that is accessible, portable, and inexpensive. Start-up costs associated with model fabrication were approximately $2,218.20 CA, which included a 5-gallon Smooth-On vacuum chamber and vacuum pump, two reusable glass containers, and a variety of silicone molds. The approximate cost per model beyond these materials is $31.50 CA. After the silicone shapes have been made and cured (approximately 24 hours), the model construction process takes roughly 1 hour, and making multiple models at once substantially reduces the per-model construction time.

The materials used in the model are nonperishable and therefore will not desiccate or decay, unlike other agar or food-based models, which require refrigeration. With gentle cleaning after use, the model is long-lasting and can be reused as well as remelted multiple times. Both silicone and ballistics gel have wide thermal stability, do not support microbial growth, and are hardy—they do not break or tear easily. Ballistics gel and silicone are easily acquired online or in retail stores and are economically feasible: $12 CA/lb (Clear Ballistics) and $25 CA/lb (Smooth-On website), respectively. Unlike many earlier models using agar and gelatin,18 ballistics gel and silicone mimic tissue-like materials well when imaged with ultrasound, with or without the addition of particulate material to adjust echogenicity.19,20 Additionally, ballistics gel’s color and transparency can be customized with the addition of urethane dyes.

Ultrasound machine familiarity (including image optimization), image interpretation, and ability to make medical decisions have been identified as important factors pertaining to the diagnostic performance and confidence of novice ultrasonographers.2,20,21 Outcome assessments were designed with this in mind to evaluate this simulation model’s ability to support learners through the transition from a conceptual understanding of basic ultrasound to a more practical one. Assessments were related to participants’ ability to understand basic ultrasound function, problem solve, and use visuospatial skills. Careful thought ensured the establishment of sound metrics relevant to the curriculum, which were effective at discriminating between novices and experts. As the model was intended to teach fundamental ultrasound principles, simple outcome assessments and a low-complexity simulator were created to match the learning level of a final-year veterinary student.

Experts significantly outperformed novices in their ability to identify shape type and location for all shapes except for E and C—the cone and torus. The shape that discriminated best between cohorts was shape A—the hemisphere, which was the only shape where mean axis measurements differed significantly between participant groups. Both cohort measurements were close to the true mean axis measurements, but in opposite directions; one cohort overestimated and one underestimated. No novices successfully identified shape L—the heart. While the heart shape may have been challenging for a novice learner, we feel it is important that the model possess a diverse array of shapes of varying difficulties. For most shapes, expert measurements were numerically closer to the true mean value compared with novice measurements but not significantly so. Additionally, novices tended to overestimate shape measurements. These results perhaps suggest a trend toward experts being more accurate, which would be expected given their prior familiarity with obtaining measurements. Further testing is required to assess this theory. Shape O, the arrow, presented a unique instance, where an almost equal number of novices and experts identified the shape. Novices measured shape O slightly more accurately than experts, although not significantly. Although shape C, the torus, was the easiest shape for both cohorts to find and identify, both cohorts measured it similarly inaccurately when compared to the true mean axis measurements. The reason for these findings is unclear and may be due to random chance or an innate quality of the geometric shapes themselves—perhaps the arrow and torus do not possess qualities that effectively differentiate novice and expert scanners. Results support our hypothesis that experts would identify shape type and location better than novices, but they do not support our hypothesis that experts would accomplish more accurate measurements or faster scan times than novices. Sample size was likely a constraint in this study with respect to finding a difference between group measurements.

In response to survey feedback, the following modifications could be considered. Increasing model lubrication could reduce the tackiness of the model surface. Degassing the ballistics gel in addition to the silicone shapes may further minimize gas trapping and model artifacts. Different ballistics gel types may offer more compressibility, which could more accurately mimic scanning an abdomen. Creating a rounded model surface could add realism and allow scanning in a tangential plane. Removing the holding container entirely would let participants scan freely without the probe interfering with the side of the container. Despite some participant feedback requesting added complexity, the intent of this project was not to construct a complex, high-fidelity simulator; rather, it was to create a vehicle for teaching a set of deconstructed skills that act as the building blocks for basic ultrasound proficiency. As novice learners master skills on rudimentary simulators, higher-fidelity models may be introduced to teach more advanced skills. In these circumstances, different silicone types (Ecoflex, Dragon Skin™) and polyurethane rubbers with liquid or solid inclusions could be used to vary acoustic properties and thus create different tissue echogenicity. These higher-fidelity models will be reserved for novices who have mastered the outcome assessments for this simulator.

Some inherent limitations and biases exist within this study. Although the students recruited are likely a true reflection of the population that will use the simulator, selection bias may have been introduced by recruiting at a specific time of the year. Participating earlier or later in the school year, or using students from pre-clinical years, when knowledge base and ultrasound exposure differ, would likely influence results. Additionally, if the study were repeated, information on participants’ desired specialty of interest would be collected to assess the influence of career orientation on participation. Because of the multicentric nature of the study, transporting a full-sized ultrasound machine to each institution was impractical; therefore, ultrasound machines were not identical among expert participants. While novices used a standard ultrasound machine, experts were encouraged to use the machine and probe they used regularly. As a counterpoint, having expert participants use their own ultrasound machine likely gave a more realistic representation of how they regularly perform ultrasound. Analysis of both short and long axis shape measurements was originally intended; however, confusion arose as to which short axis should be measured for 3D geometric shapes with more than one short axis. Participants were therefore asked to record both a short axis measurement and the longest axis measurement, but only long axis measurements were statistically analyzed.

In this prospective observational cohort study, experts significantly outperformed novices in their ability to identify shape type and location for the majority of shapes in this ultrasound simulation model. Additionally, the model was well received, with both groups supporting its integration into the diagnostic imaging curriculum. Generally, no differences were found between cohorts with regard to shape axis measurements and scan time. Taken together, the results of this study provide construct and face validity evidence for this model. The study lays the groundwork for future research to assess skill acquisition on the model and subsequent skill transfer to a clinical environment. Integrating this model into clinical skills laboratories could help reduce the need for live or cadaveric animals, overcome shortages in instructor supervision, improve diagnostic accuracy and patient safety, and contribute to a more comprehensive educational program. This training model represents an accessible, reliable, and valid objective clinical training tool that students enjoy using.

The authors would like to thank William Sears for his assistance with the statistical analyses.

The authors declare no potential conflicts of interest with respect to research, authorship, and/or publication of this article.

Notes

a Creativemoldshop (Beijing, China), DiDisDollhouse (Toronto, ON, Canada), thebakersconfections (Maine, USA), Excellenmoldshop (China), AvaArtSupplies (Paragon, IL, USA), ToniTheBaker (Hong Kong, Hong Kong), MsDIYSupplies (China).

b Sculpture Supply Canada, Toronto, ON, Canada, https://www.sculpturesupply.com/.

c Novice video #1—Introduction to the ultrasound simulation model: https://www.youtube.com/watch?v=49HzECFa61s.

d Novice video #2—Introduction to the ultrasound machine: https://www.youtube.com/watch?v=bki8t1pl2dw.

1. Moore DL, Ding L, Sadhasivam S. Novel real-time feedback and integrated simulation model for teaching and evaluating ultrasound-guided regional anesthesia skills in pediatric anesthesia trainees. Paediatr Anaesth. 2012;22(9):847–53. https://doi.org/10.1111/j.1460-9592.2012.03888.x
2. Tolsgaard MG, Rasmussen MB, Tappert C, et al. Which factors are associated with trainees’ confidence in performing obstetric and gynecological ultrasound examinations? Ultrasound Obstet Gynecol. 2014;43(4):444–51. https://doi.org/10.1002/uog.13211
3. Maul H, Scharf A, Baier P, et al. Ultrasound simulators: experience with the SonoTrainer and comparative review of other training systems. Ultrasound Obstet Gynecol. 2004;24(5):581–5. https://doi.org/10.1002/uog.1119
4. Alexander K, Bélisle M, Dallaire S, et al. Diagnostic imaging learning resources evaluated by students and recent graduates. J Vet Med Educ. 2013;40(3):252–63. https://doi.org/10.3138/jvme.1212-112R1
5. Butler DG. Employer and new graduate satisfaction with new graduate performance in the workplace within the first year following convocation from the Ontario Veterinary College. Can Vet J. 2003;44(5):380–91.
6. Nayahangan LJ, Nielsen KR, Albrecht-Beste E, et al. Determining procedures for simulation-based training in radiology: a nationwide needs assessment. Eur Radiol. 2018;28(6):2319–27. https://doi.org/10.1007/s00330-017-5244-7
7. Jensen JK, Dyre L, Jørgensen ME, et al. Simulation-based point-of-care ultrasound training: a matter of competency rather than volume. Acta Anaesthesiol Scand. 2018;62(6):811–9. https://doi.org/10.1111/aas.13083
8. Cima G. Specialists in short supply. Universities, private practices struggle to find certain specialists, blame lack of residency training programs [Internet]. JAVMA News; 2018 Oct 15 [cited 2021 May 3]. Available from: https://www.avma.org/javma-news/2018-10-15/specialists-short-supply
9. Patel AA, Gould DA. Simulators in interventional radiology training and evaluation: a paradigm shift is on the horizon. J Vasc Interv Radiol. 2006;17(11):163–73. https://doi.org/10.1097/01.RVI.0000247928.77832.C4
10. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376(9756):1923–58. https://doi.org/10.1016/S0140-6736(10)61854-5
11. Satava R. Role of simulation in postgraduate medical education. J Heal Spec. 2015;3(1):12. https://doi.org/10.4103/1658-600X.150753
12. Satava RM. Historical review of surgical simulation—a personal perspective. World J Surg. 2008;32(2):141–8. https://doi.org/10.1007/s00268-007-9374-y
13. Jensen JK, Dyre L, Jørgensen ME, et al. Collecting validity evidence for simulation-based assessment of point-of-care ultrasound skills. J Ultrasound Med. 2017;36(12):2475–83. https://doi.org/10.1002/jum.14292
14. Stunt J, Wulms P, Kerkhoffs G, et al. How valid are commercially available medical simulators? Adv Med Educ Pract. 2014;5:385–95. https://doi.org/10.2147/AMEP.S63435
15. Sidhu HS, Olubaniyi BO, Bhatnagar G, et al. Role of simulation-based education in ultrasound practice training. J Ultrasound Med. 2012;31(5):785–91. https://doi.org/10.7863/jum.2012.31.5.785
16. Wang EE, Quinones J, Fitch MT, et al. Developing technical expertise in emergency medicine—the role of simulation in procedural skill acquisition. Acad Emerg Med. 2008;15(11):1046–57. https://doi.org/10.1111/j.1553-2712.2008.00218.x
17. Sokolowski JA, Banks CM, editors. Modeling and simulation in the medical and health sciences. Hoboken (NJ): John Wiley & Sons, Inc.; 2011.
18. Sterne T. Some remarks on confidence or fiducial limits. Biometrika. 1954;41(1/2):275–8. https://doi.org/10.2307/2333026
19. Hirji K, Tsiatis A, Mehta C. Median unbiased estimation for binary data. Am Stat. 1989;43(1):7–11. https://doi.org/10.2307/2685158
20. Earle M, Portu G de, Devos E. Agar ultrasound phantoms for low-cost training without refrigeration. Afr J Emerg Med. 2016;6(1):18–23. https://doi.org/10.1016/j.afjem.2015.09.003
21. Cafarelli A, Miloro P, Verbeni A, et al. Speed of sound in rubber-based materials for ultrasonic phantoms. J Ultrasound. 2016;19(4):251–6. https://doi.org/10.1007/s40477-016-0204-7