Surgical navigation display system using volume rendering of intraoperatively scanned CT images

Mitsuhiro Hayashibe*, Naoki Suzuki, Asaki Hattori, Yoshito Otake, Shigeyuki Suzuki, Norio Nakata

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

As operative procedures become more complicated, simply increasing the number of devices will not facilitate such operations. It is necessary to consider the ergonomics of the operating environment, especially with regard to the provision of navigation data, the prevention of technical difficulties, and the comfort of the operating room staff. We have designed and built a data-fusion interface that enables volumetric Maximum Intensity Projection (MIP) image navigation using intra-operative mobile 3D-CT data in the OR. The 3D volumetric data reflecting the patient's inner structure are displayed directly on the monitor, superimposed on video images of the surgical field, using a 3D optical tracking system, a ceiling-mounted articulating monitor, and a small video camera mounted on the back of the monitor. The system's performance and accuracy were validated experimentally. This system provides the surgeon with a novel interface based on volume rendering of intra-operatively scanned CT images, as opposed to preoperative images.
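The overlay concept described in the abstract can be illustrated with a minimal sketch: a maximum intensity projection (MIP) of a CT volume composited over a camera frame of the surgical field. This is not the authors' implementation; registration via the optical tracking system and the camera model are omitted, and the array shapes, function name, and blending weight are illustrative assumptions.

```python
import numpy as np

def mip_overlay(ct_volume: np.ndarray, frame: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a MIP of `ct_volume` (Z, Y, X) onto an RGB video `frame` (H, W, 3).

    Assumes the projection already matches the frame size; a real system would
    first transform the volume into the camera view using the tracked poses.
    """
    # Project along the viewing (Z) axis: keep the brightest voxel on each ray.
    mip = ct_volume.max(axis=0).astype(np.float32)
    # Normalize intensities to [0, 255] for display.
    mip = 255.0 * (mip - mip.min()) / max(np.ptp(mip), 1e-6)
    # Replicate the grayscale MIP across three channels and alpha-blend.
    overlay = np.repeat(mip[..., None], 3, axis=2)
    return (alpha * overlay + (1.0 - alpha) * frame).astype(np.uint8)

# Example usage with synthetic data (hypothetical sizes):
volume = np.random.rand(64, 480, 640).astype(np.float32)   # stand-in CT volume
video_frame = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in camera frame
composited = mip_overlay(volume, video_frame)
```

In the system described above, the blending would be driven by the tracked spatial relationship between the camera on the back of the monitor and the intra-operatively scanned volume, so the rendered MIP stays aligned with the live video of the surgical field.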

Original language: English
Pages (from-to): 240-246
Number of pages: 7
Journal: Computer Aided Surgery
Volume: 11
Issue number: 5
DOIs
Publication status: Published - 2006 Sept 1
Externally published: Yes

Keywords

  • Augmented reality
  • Intra-operative inner structure
  • Mobile 3D-CT
  • Surgical navigation

ASJC Scopus subject areas

  • Surgery
  • Radiology, Nuclear Medicine and Imaging
