Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

Justus Ilgner*, Takashi Kawai, Takashi Shibata, Takashi Yamazoe, Martin Westhofen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

19 Citations (Scopus)


Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally invasive fashion. As a result, the conduct of surgery, with its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process requiring expensive hardware towards rapid editing of video streams in standard and HDTV resolution that can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students.

Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses, and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, which were edited specifically for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). Recording and synchronization of the left and right images were performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University using our original software for non-linear editing of stereoscopic 3-D movies, and the material was then converted into a streaming 3-D video format so that the clips could be presented in a file type independent of television signal standards such as PAL or NTSC. Twenty-five 4th-year medical students who participated in the general ENT course at Aachen University Hospital were asked to estimate depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th-year students, who were shown the material monoscopically on a conventional laptop, served as controls.

Results: All participants noted that the additional depth information helped them understand the relation of anatomical structures, even though none had hands-on experience with ear, nose and throat operations before or during the course. The monoscopic group generally estimated resection depth at much lower values than in reality. Although this was also the case for some participants in the stereoscopic group, their estimation of depth features reflected the enhanced depth impression provided by stereoscopy.

Conclusion: Following this first implementation of stereoscopic video teaching, medical students inexperienced with ENT surgical procedures were able to reproduce depth information, and therefore anatomically complex structures, to a greater extent. Besides extending video teaching to junior doctors, the next evaluation step will address its effect on the learning curve during the surgical training program.

Original language: English
Title of host publication: Stereoscopic Displays and Virtual Reality Systems XIII - Proceedings of SPIE-IS&T Electronic Imaging
Publication status: Published - 2006 Apr 10
Event: Stereoscopic Displays and Virtual Reality Systems XIII - San Jose, CA, United States
Duration: 2006 Jan 16 - 2006 Jan 19

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
ISSN (Print): 0277-786X


Conference: Stereoscopic Displays and Virtual Reality Systems XIII
Country/Territory: United States
City: San Jose, CA


Keywords

  • Medical education
  • Microsurgery
  • Otorhinolaryngology
  • Stereoscopy
  • Undergraduate training

ASJC Scopus subject areas

  • Electronic, Optical and Magnetic Materials
  • Condensed Matter Physics
  • Computer Science Applications
  • Applied Mathematics
  • Electrical and Electronic Engineering


