OsiriX User Manual Extract
Introduction
1.1 OsiriX
The goal of OsiriX is to deliver the best possible viewer for images produced by radiology equipment, such as MRI, CT, PET, PET-CT, SPECT-CT, ultrasound, etc. Its key features are displaying, reviewing, interpreting and post-processing these images. OsiriX is image processing software dedicated to DICOM images (.dcm extension).
OsiriX is an open-source project, distributed under the LGPL license. Several versions of OsiriX are available. If you plan to use OsiriX in a medical environment, you will require a version certified for primary diagnostic imaging. Using OsiriX as a primary diagnostic workstation also requires the use of monitors certified for medical imaging.
OsiriX is the result of more than 5 years of research and development in digital imaging. OsiriX fully supports the DICOM standard for easy integration in your workflow environment. OsiriX offers advanced post-processing techniques in 2D and 3D, and complete integration with any PACS, including the well-known open-source project dcm4chee1. OsiriX supports 64-bit computing and multithreading for the best performance on modern processors. By adopting OsiriX you have made the right choice: it meets everyday requirements, it is simple to use, it is powerful and it has the ability to evolve!
1.1.1 About DICOM
DICOM is a standard for handling, storing, printing and transmitting information in medical imaging [7]. The DICOM standard has become the uncontested standard for the exchange and management of biomedical images. OsiriX is fully compliant with the DICOM standard for image communication and image file formats. DICOM is regularly updated by the DICOM committee. The up-to-date version of the standard is available on the DICOM website2.
1.1.2 History
The OsiriX project started in November 2003. The first version was developed by Antoine Rosset MD, a radiologist from Geneva, Switzerland. He received a grant from the Swiss National Fund3 to spend one year at UCLA, Los Angeles, with Prof. Osman Ratib, to explore and learn about medical digital imaging.
At first, the goal of the OsiriX project was simply to write a small software program to convert DICOM files to a QuickTime movie file, in order to help a radiologist friend, Luca Spadola MD, create a teaching files database. He spent more than two weeks searching for a Java library for DICOM management and image manipulation.
1. www.dcm4che.org
2. dicom.nema.org
3. www.snf.ch
He initially wanted to create a cross-platform software program for Windows, Mac OS and Linux. But at that time, Apple had just released Mac OS X 10.3, the first truly usable release of its UNIX-based OS. It became obvious that Mac OS X was the best choice to quickly create a robust and modern DICOM viewer:
One DICOM viewer to rule them all
This small project became the obsession of this young radiologist. A SourceForge account4 was created in January 2004 to develop OsiriX as LGPL software.
The first public version, 0.1a, was released in April 2004 on Antoine Rosset's personal homepage5. This first version was developed in less than 6 months: it offered a basic database and a simple DICOM viewer, without post-processing functions or measurement tools.
In October 2004, Antoine Rosset went back to the Geneva University Hospital in Switzerland to continue his career as a radiologist, but his obsession with OsiriX remained strong.
The reference article about the OsiriX project was written in April 2004 and published in June 2004 in the Journal of Digital Imaging [10].
Joris Heuberger, a mathematician from Geneva, joined the project in March 2005 for a voluntary six-month fellowship at UCLA, Los Angeles. During this period, while working on plugins for OsiriX, he added the first Fly Through algorithm to the Surface Rendering and Volume Rendering viewers.
In June 2005, during Apple's Worldwide Developer Conference (WWDC) in San Francisco, OsiriX received two prestigious Apple Design Awards: Best Use of Open Source and Best Mac OS X Scientific Computing Solution (Figure 1.1).
OsiriX became the official DICOM viewer for the Radiology Department of the Geneva University Hospital in 2009.
At this time, Antoine Rosset and Joris Heuberger became the core team of the OsiriX software. Over the years, OsiriX has benefitted from many external contributors. The most active external contributor was Lance Pysher MD, a US radiologist. He notably added the foundation for the DICOM network protocol, through the DCMTK library (Offis), and wrote an Objective-C framework for DICOM file management (the DCM Framework).
In March 2009, Antoine Rosset, Joris Heuberger and Osman Ratib created the OsiriX Foundation6 to promote open source in medicine. This non-profit foundation offers grants to students for developing open-source software in medicine. The foundation also organizes awards with prizes to stimulate development in digital imaging and post-processing. Most of these projects are based around OsiriX.
Figure 1.2: From left to right: Osman Ratib, Joris Heuberger and Antoine Rosset
In February 2010, Antoine Rosset and Joris Heuberger created the company Pixmeo7 to continue to promote open-source solutions in medical imaging, including OsiriX development. The major goal of this company is to certify OsiriX as an FDA-cleared product and to offer commercial support for open-source solutions, such as OsiriX or dcm4chee.
Today, Antoine Rosset is working as a radiologist at La Tour8 private hospital. Joris Heuberger is working as a Mac OS and iOS developer.
Their commitment to the OsiriX project is intact, thanks to the support of more than 40,000 users throughout the world. From a student project, OsiriX has grown into a mature and professional project, benefiting from experienced developers and users and used by thousands of institutions, including the most prestigious medical centers.
6. www.osirixfoundation.com
7. www.pixmeo.com
8. www.latour.ch
The first iPhone (iOS) version of OsiriX was released in November 2008, developed by Joris Heuberger. OsiriX for iOS quickly became a major success.
1.2 Indications for Use
OsiriX is a software device intended for the viewing of images acquired from CT, MR, CR, DR, US and other DICOM-compliant medical imaging systems when installed on suitable commercial standard hardware. Images and data can be captured, stored, communicated, processed and displayed within the system and/or across computer networks at distributed locations. Lossy-compressed mammographic images and digitized film-screen images must not be reviewed for primary diagnosis or image interpretation. Mammographic images should only be viewed with a monitor approved by the FDA for viewing mammographic images. It is the user's responsibility to ensure that monitor quality, ambient light conditions and image compression ratios are consistent with the clinical application.
1.3 Contraindications
None
In some cases, corrupted DICOM images may interfere with OsiriX and may cause OsiriX to crash. You can restart OsiriX in safe mode by holding down the Shift and Option keys (⇧ + ⌥) while starting OsiriX. In safe mode, OsiriX will NOT read and display the content of DICOM files. This allows you to delete the studies or series that may be corrupted.
1.5 Requirements
This section describes the software and hardware specifications required to run OsiriX. OsiriX can only be installed on an Apple Mac running Mac OS X. It cannot be installed on a Windows-based PC or a Linux system.
The minimal requirements are as follows:
3D Volume Rendering & 3D Maximum Intensity Projection

This chapter describes how to visualize and render 3D datasets in OsiriX. The VR/MIP Viewer is a window displaying a set of DICOM images rendered as a 3D volume.
This 3D rendering technique is commonly used to visualize volumes of soft-tissue data. It assigns different colors and transparencies to different intensity values in the dataset. The technique can be applied to CT and MRI images with some adjustments, and it is the most commonly used technique for pseudo-realistic rendering of 3D medical images. The predefined settings allow the user to generate reasonable images with very few adjustments. The simplest adjustment the user will have to make is to the contrast and intensity of the image. This sets the threshold values of the rendering algorithm, which assigns a given opacity to the lowest intensity level displayed. With this very simple maneuver it is then easy to set the rendering to different tissue densities (skin, muscle or bone). The contrast and intensity assigned to the images select the threshold density value used for rendering the opaque tissue level: a high contrast selects bone density, removing soft tissue and showing only dense bone structures, while a soft contrast shows only soft tissue such as skin and muscle.
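To make the threshold idea concrete, here is a minimal sketch (not OsiriX's actual implementation) of a threshold-based opacity transfer function for CT intensities; the threshold and ramp values are purely illustrative.

```python
import numpy as np

def opacity_transfer(hu_values, threshold, ramp_width=100.0):
    """Map CT intensities (Hounsfield units) to opacity in [0, 1].

    Voxels below `threshold` stay transparent; opacity ramps up linearly
    over `ramp_width` HU and saturates at fully opaque.
    """
    ramp = (hu_values - threshold) / ramp_width
    return np.clip(ramp, 0.0, 1.0)

voxels = np.array([-1000.0, 0.0, 40.0, 300.0, 700.0])  # air, water, soft tissue, bone

# A "soft" setting makes skin and muscle opaque ...
print(opacity_transfer(voxels, threshold=-200.0))
# ... while a "hard" (high-contrast) setting keeps only dense bone.
print(opacity_transfer(voxels, threshold=250.0))
```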
The toolbar contains buttons allowing the user to access the most useful functions
available. As with any other Mac OS X toolbar, it can be customized to fit your needs:
you can re-organize the tools, remove the ones you never use and add the ones you
need. See 7.1.1 for a complete reference of the available tools.
The image view is the place where DICOM images are rendered as 3D images. You
can interact with this area using the mouse and the keyboard and by using several
tools that are described in this chapter.
7.1.1 Toolbar
The VR/MIP Viewer window provides a variety of tools and functions that can be
accessed through icons displayed on the toolbar or through items listed in the 3D
Viewer menu. This section describes each tool available in the toolbar.
7.1.1.1 WL/WW
This allows the user to change the WL/WW settings using presets.
7.1.1.2 CLUT
This allows the user to change the settings for the CLUT using presets. You can also
create a new CLUT using the 16-bit CLUT Editor (see 7.2.5).
7.1.1.3 Opacity
This allows the user to change the settings for the opacity table using presets.
7.1.1.4 CLUT Editor
This allows the user to have easy access to the 8-bit and 16-bit editors.
7.1.1.5 3D Presets
This allows the user to choose the rendering settings from a list of presets.
You can select one of the proposed presets.
7.1.1.6 Level of Detail
This allows the user to set the level of detail for the Volume Rendering of the 3D view. You can choose the level of detail by moving a slider between Fine and Coarse.
7.1.1.7 Best Rendering
This allows the user to temporarily force the 3D view to render images in
high quality.
7.1.1.8 Cropping Cube
This allows the user to resize the rendering area, by limiting the rendered
dataset. You can manipulate the 6 sides of the bounding box of the volume.
Any data outside of this box will not be rendered.
7.1.1.9 Orientation
This allows the user to display or hide the orientation labels and cube. The tool displays 4 labels (one on each side of the 3D view) and a cube. It computes the orientation of the volume and displays the cube in the same orientation, with the following labels on its sides:
• L – Left
• R – Right
• P – Posterior
• A – Anterior
• S – Superior
• I – Inferior
The 4 labels can contain any combination of these letters depending on the
volume orientation.
7.1.1.10 Shadings
This allows the user to edit the shading settings for the Volume Rendering. This item can only be used when the rendering mode is VR, not MIP. You can turn shading on and off, and you can modify the following parameters:
• Ambient coefficient, from 0.0 to 1.0
• Diffusion coefficient, from 0.0 to 1.0
• Specular coefficient, from 0.0 to 4.0
• Specular power, from 0.0 to 50.0
You can also choose these parameters from predefined presets.
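These four parameters correspond to the classic Phong illumination terms. The following is a minimal, hypothetical sketch of how such coefficients combine to shade a single surface sample; it is not OsiriX's rendering code, and the vectors and colors are made up for illustration.

```python
import numpy as np

def shade(base_color, normal, light_dir, view_dir,
          ambient=0.15, diffuse=0.9, specular=0.3, specular_power=15.0):
    """Phong-style shading of one surface sample.

    `base_color` is the RGB value taken from the CLUT; the four keyword
    arguments mirror the Shadings panel (ambient 0-1, diffuse 0-1,
    specular 0-4, specular power 0-50).
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    r = 2.0 * np.dot(n, l) * n - l                 # reflection of the light direction
    diff = max(np.dot(n, l), 0.0)
    spec = max(np.dot(r, v), 0.0) ** specular_power
    return np.clip(base_color * (ambient + diffuse * diff) + specular * spec, 0.0, 1.0)

color = shade(np.array([0.9, 0.8, 0.7]),        # bone-like CLUT color
              normal=np.array([0.0, 0.0, 1.0]),
              light_dir=np.array([0.3, 0.3, 1.0]),
              view_dir=np.array([0.0, 0.0, 1.0]))
print(color)
```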
7.1.1.11 Perspective
This allows the user to choose the projection mode used for the rendering. You can choose one of the following modes:
• Parallel
• Perspective
• Endoscopy
7.1.1.12 Orientations
This allows the user to switch between predefined camera positions. You can choose to see the volume in one of the following positions:
• Axial
• Coronal
• Left Sagittal
• Right Sagittal
7.1.1.14 Mode
7.1.1.15 Fusion
This allows the user to modify the fusion percentage. If no Series is fused, the tool is not active.
7.1.1.16 4D Player
7.1.1.17 Stereo
This allows the user to turn the stereo display on and off. It creates an anaglyph image and requires two-color (red/blue) glasses.
7.1.1.18 Movie Export
This allows the user to create a QuickTime movie of the 3D volume. You can
set the following parameters:
• The number of frames to be generated (from 1 to 360)
• The direction of the rotation (horizontal or vertical)
• The amplitude of the rotation (180° or 360°)
• The quality of the resulting rendering (Current or Best)
• The size of the resulting movie. Choose from the following:
– Current
– 512 × 512
– 768 × 768
7.1.1.19 iPhoto
This allows the user to export the currently rendered image to iPhoto. This
opens Apple’s iPhoto application and adds the JPEG image to the Album
defined in 2.2.1.
7.1.1.20 Export QTVR
This allows the user to create a QuickTime VR movie of the 3D volume. You
can set the following parameters:
• The type of QuickTime VR movie to create:
– X only axis rotation (18 frames)
– X only axis rotation (36 frames)
– 3D Rotation (100 frames)
– 3D Rotation (400 frames)
– 3D Rotation (1600 frames)
7.1.1.21 Email
This opens Apple's Mail application and creates a new email message containing the JPEG image as an attachment.
7.1.1.22 Reset
7.1.1.23 Revert
This reloads the data from the original DICOM files and re-renders the 3D
volume. It cannot be undone.
7.1.1.24 Save as DICOM
This creates new DICOM images from snapshots of the 3D scene. It writes
the resulting data on the disk and adds the new DICOM files to the database.
You can choose from the following options:
• The name of the resulting Series
• The size of the resulting images. Choose from the following:
– Current
– 512 × 512
– 768 × 768
You can preview the result of the interpolation by either playing the full animation or scrolling through it.
You can choose to export the animation as a new DICOM series or as a QuickTime
movie. The resulting animation can be rendered in Current or Best quality. The size of
the resulting images can be chosen from the following:
• Current
• 512 × 512
• 768 × 768
7.1.1.27 3D ROI Manager
This allows the user to manage the appearance of 3D ROIs and compute their volume. For each ROI, you can set the following parameters:
• Map a texture: on/off.
• Red value from 0.0 to 1.0
• Green value from 0.0 to 1.0
• Blue value from 0.0 to 1.0
• Opacity value from 0.0 (transparent) to 1.0 (opaque)
7.1.1.28 Filters
This applies a convolution filter to the image data; a generic sketch of such a filter follows the list below. The following filters are available:
• Basic Smooth 5 × 5
• Blur 3 × 3
• Blur 5 × 5
• Bone Filter 3 × 3
• Edge 3 × 3
• Emboss heavy
• Emboss north
• Emboss west
• Excessive edges
• Gaussian blur
• Hat
• Highpass 5 × 5
• Inverted blur
• Laplacian
• Laplacian 4
• Laplacian 8
• Lowpass
• Negative blur
• 3 × 3 Sharpen
• 5 × 5 sharpen
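As an illustration only, a generic 3 × 3 sharpen convolution applied to a 2D slice might look like the sketch below; the kernel weights are a common textbook choice and are not necessarily the ones OsiriX ships.

```python
import numpy as np
from scipy.ndimage import convolve

# A classic 3x3 sharpen kernel (illustrative; the kernels bundled with
# OsiriX may use different weights).
sharpen_3x3 = np.array([[ 0, -1,  0],
                        [-1,  5, -1],
                        [ 0, -1,  0]], dtype=float)

slice_2d = np.random.default_rng(0).normal(size=(256, 256))
filtered = convolve(slice_2d, sharpen_3x3, mode="nearest")
print(filtered.shape)
```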
Figure: the volume rendering pipeline, from Volume Data through Ray Casting and Composing to the Pixel Color of the output image.
Figure 7.5: The same dataset with parallel and perspective projection
You can move the points (or the whole curve) vertically. This determines the opacity: the bottom of the editor corresponds to fully transparent and the top to fully opaque.
You can also save your CLUT to reuse it later alongside the other predefined CLUTs.
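The pipeline above (Volume Data, Ray Casting, Composing, Pixel Color) and the opacity curve can be tied together with a small sketch of front-to-back compositing along one ray. This is a simplified illustration, not the renderer OsiriX uses; the transfer functions are toy examples.

```python
import numpy as np

def composite_ray(samples, color_lut, opacity_lut):
    """Front-to-back compositing of one ray through the volume.

    `samples` are the interpolated intensities met along the ray;
    `color_lut` and `opacity_lut` play the role of the CLUT and the
    opacity curve: they map an intensity to an RGB color and an opacity.
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        a = opacity_lut(s)
        color += (1.0 - alpha) * a * color_lut(s)
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:            # ray is effectively opaque: stop early
            break
    return color, alpha

# Toy transfer functions: grayscale CLUT, linear opacity ramp above 200 HU.
pixel, opacity = composite_ray(
    samples=np.array([-500.0, 100.0, 350.0, 900.0]),
    color_lut=lambda s: np.full(3, np.clip((s + 1000.0) / 2000.0, 0.0, 1.0)),
    opacity_lut=lambda s: np.clip((s - 200.0) / 400.0, 0.0, 1.0),
)
print(pixel, opacity)
```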
7.2.6 Sculpting 3D image
You can sculpt the 3D volume data to remove parts of the 3D dataset. By removing certain structures you can reveal hidden ones: for example, you can remove the rib cage to show the heart structures on a thoracic CT.
To remove a structure, simply select the sculpting tool in the toolbar. Next, draw an irregular region of interest over the 3D image by clicking on the points of a closed polygon. If the polygon is not placed where you want it, you can start over by pressing the Esc key on the keyboard.
The drawn region can be rendered as a polygon or a B-Spline area, based on the 3D Preferences settings (see 2.2.2). The region represents the limits of an in-depth cut through the data. To undo a sculpting operation you can use Revert Series from the 2D Viewer menu (see 13.6.5) or Undo from the Edit menu.
When deleting part of the image with the Scissors tool, the raw data is modified. That means the pixels of the original images are changed: the pixel intensity is set to the minimum value of the series pixels. For example, in a CT series, the pixels are set to a value of −1024. When you close the VR/MIP Viewer window and go back to the 2D Viewer window, you will see the sculpted image, with missing pixels (as illustrated in Figure 7.8). You can reload these missing pixels at any time by using Revert Series from the 2D Viewer menu (see 13.6.5) or from the 3D Viewer menu (see 13.7.9).
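As a rough sketch of the effect described above, assume the series is held as a 3D array and the drawn region has already been rasterized into a boolean mask (the mask construction is omitted, and the cut is applied along the slice axis here for simplicity rather than along the current viewing direction).

```python
import numpy as np

def sculpt(volume, mask_2d):
    """Remove everything inside the drawn region, through the whole depth.

    Mimics the behaviour described above: the masked voxels are set to the
    minimum value of the series, so the cut also shows up in the 2D Viewer.
    """
    carved = volume.copy()
    carved[:, mask_2d] = volume.min()   # apply the 2D region to every slice
    return carved

# A synthetic CT-like series and a rectangular stand-in for the drawn polygon.
ct = np.random.default_rng(1).integers(-1024, 2000, size=(40, 128, 128)).astype(np.int16)
region = np.zeros((128, 128), dtype=bool)
region[30:90, 40:100] = True

print(sculpt(ct, region).min())         # prints the series minimum
```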
Figure 7.8: The 3D VR of a CT series (a) before and (b) after 3D sculpting. The original
CT series (c) and the modified one (d).
7.2.7 Bone Removal
The Bone Removal tool removes structures whose intensities fall within a certain range. The default values for bone are set between 250 and 2000 Hounsfield units for CT images. You can modify these values by clicking on the bone removal button while holding down the Option key.
To remove a bone structure, simply click on it: all the adjacent and contiguous bone structures will be removed. You may repeat the operation on multiple bones that are not connected (see Figure 7.9).
Figure 7.9: The Bone Removal tool used to remove the patellas.
Sometimes remaining fragments that do not have the expected bone density may need to be removed manually using the sculpting tools (see 8.2.4).
This tool uses the same technique as the Scissors tool: it modifies pixel intensities in order to hide the structures. This means that the pixels of the original images are modified: the pixel intensity is set to the minimum value of the series pixels.
The effects of this tool can be undone with the Undo item of the Edit menu.
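A minimal sketch of this kind of connected-region removal, assuming the volume is a 3D array in Hounsfield units; the 250-2000 HU range comes from the defaults mentioned above, but the code is illustrative and not OsiriX's implementation.

```python
import numpy as np
from scipy.ndimage import label

def remove_bone(volume, seed, low=250, high=2000):
    """Remove the contiguous bone structure containing `seed`.

    All voxels connected to the seed and within [low, high] HU are set
    to the series minimum, like the Scissors tool does.
    """
    bone = (volume >= low) & (volume <= high)
    labels, _ = label(bone)                       # connected components in 3D
    if not bone[seed]:
        return volume                             # the clicked voxel is not bone
    carved = volume.copy()
    carved[labels == labels[seed]] = volume.min()
    return carved

ct = np.full((20, 64, 64), -1000, dtype=np.int16)
ct[5:15, 10:30, 10:30] = 1200                     # a synthetic "bone" block
print(remove_bone(ct, seed=(10, 20, 20)).max())   # bone removed: max is back to -1000
```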
7.2.8 Cropping 3D volume
A cropping tool allows the user to restrict the data volume being rendered by selecting the limits in the X, Y and Z directions. Activating this tool in the toolbar causes green spheres to appear on the image, together with a wireframe box showing the limits of the volume being rendered. By clicking and dragging the green spheres you can move each limit and adjust the rendered volume. Each of the 6 planes of the box can be moved independently. Notice that reducing the volume size not only allows you to hide unwanted parts of the data but also significantly speeds up image rendering. To hide the cropping box, click on the crop tool again.
Unlike the Scissors tool, the pixels are only hidden, not deleted. Hence, the original pixel values are not modified.
The cropping tool is often used in association with the sculpting tool.
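In array terms, cropping behaves like taking a view of a sub-block rather than rewriting voxel values, which is why the original data is untouched; a toy sketch with illustrative index limits:

```python
import numpy as np

ct = np.random.default_rng(2).integers(-1024, 2000, size=(200, 512, 512)).astype(np.int16)

# The six planes of the cropping box, expressed as index limits (illustrative values).
z0, z1, y0, y1, x0, x1 = 40, 160, 100, 400, 120, 420

cropped_view = ct[z0:z1, y0:y1, x0:x1]   # a view: only this block would be rendered
print(cropped_view.shape)                # (120, 300, 300)
print(ct.min())                          # the original voxel values are untouched
```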
7.2.9 Image Fusion
The VR/MIP Viewer supports image fusion. This means that if you open the VR/MIP Viewer window from a 2D Viewer containing a series fused with another one (see 5.5.9), you can see these two series in 3D. The fused 3D dataset is displayed on top of the other 3D dataset, without taking into account the original position of the structures; the fused 3D dataset will always be in front. The fused 3D dataset does not support shading (see 7.1.1.10) and is always displayed without shading.
The fused 3D dataset is locked to the other dataset: they move and rotate together. To modify the WL/WW of the fused 3D dataset, you have to use the original 2D Viewer window that contains that dataset.
The cropping box tool is applied to both datasets. The Scissors tool is not applied to the fused 3D dataset. If you want to sculpt the fused dataset, you must first sculpt it on its own by opening a VR/MIP Viewer window from the 2D Viewer window that contains it, then fuse the sculpted dataset with the other 2D Viewer window, and finally open a VR/MIP Viewer window from the 2D Viewer window containing the fused datasets.
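The fusion percentage (see 7.1.1.15) can be thought of as a blending weight between the two rendered images. The sketch below shows a simple linear blend as one way to interpret that control; it is an assumption for illustration, not OsiriX's actual compositing.

```python
import numpy as np

def blend(base_rgb, fused_rgb, fusion_pct):
    """Blend the fused series over the base series.

    `fusion_pct` is 0-100: 0 shows only the base (e.g. CT) rendering,
    100 shows only the fused (e.g. PET) rendering.
    """
    w = fusion_pct / 100.0
    return (1.0 - w) * base_rgb + w * fused_rgb

ct_pixel = np.array([0.8, 0.8, 0.8])      # gray CT rendering
pet_pixel = np.array([1.0, 0.4, 0.0])     # hot-metal PET rendering
print(blend(ct_pixel, pet_pixel, fusion_pct=30))
```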
7.2.10 4D Dataset
The VR/MIP Viewer window also supports 4D datasets (see 3.1.1.2). Temporal sequences of 3D data are either obtained as separate series (CT and MRI image sets) or stored in a single series (PET and SPECT image sets).
OsiriX allows the user to render these images in 3D while keeping the fourth dimension, time, active. The result is a dynamic 4D image, for example of a beating heart, that can be manipulated in 3D. To do so, you must first load the set of dynamic images using the 4D Viewer function in the Database window (see 3.1.1.2). Once the dynamic set has been loaded, you can open the VR/MIP Viewer window. You can then use the 4D player controls on the toolbar (see 7.1.1.16). The dynamic display of 3D images is activated by hitting the play button. You can also select a given frame of the cine sequence using the slider labelled Pos. Dynamic 4D images can also be exported in DICOM format or as a QuickTime movie (see 7.3).
7.2.11 ROIs
The VR/MIP Viewer window can display two kinds of ROIs: 3D ROIs and Point ROIs. The definition and creation process of 3D ROIs is described in the 2D Viewer chapter (see 5.6.1.22). You have to create a 3D ROI in the 2D Viewer window: the VR/MIP Viewer window can display it, but cannot create it.
To display a 3D ROI, you have to open the 3D ROI Manager panel. You can open it either from the ROI Manager button in the toolbar or from the ROI menu by selecting the ROI Manager item. You can then display or hide each 3D ROI and change the rendering settings of the corresponding 3D ROI (see 7.1.1.27).
Point ROIs can be displayed and created in the VR/MIP Viewer window. To create a Point ROI in the VR/MIP Viewer window, select the Point button in the Mouse button function tool (see 7.1.1.13) in the toolbar. You can then drop a Point ROI directly onto the 3D rendered image. OsiriX uses the current WL/WW settings to determine where to drop the point in 3D: a ray is cast and the point is dropped where the 3D rendered structure becomes completely opaque. For example, in the case of an abdominal CT dataset, the point will be dropped on the skin if the rendered image shows the patient's skin, and on the bone if the rendered image shows the patient's bone, depending on the WL/WW settings. A Point ROI created in the VR/MIP Viewer window will also be displayed in the 2D Viewer window.
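One way such a rule could be implemented is sketched below: walk along the ray, accumulate opacity with the current transfer function, and drop the point at the first sample where the accumulated opacity is close to 1. This is an illustrative assumption, not OsiriX's code; the sample values and transfer functions are made up.

```python
import numpy as np

def drop_point(ray_samples, ray_positions, opacity_lut, opaque=0.95):
    """Return the 3D position where a Point ROI would land on this ray.

    Walks front to back, accumulating opacity from the current transfer
    function (which depends on the WL/WW settings); the point is dropped
    at the first sample where the accumulated opacity is close to 1.
    """
    alpha = 0.0
    for value, position in zip(ray_samples, ray_positions):
        a = opacity_lut(value)
        alpha += (1.0 - alpha) * a
        if alpha >= opaque:
            return position
    return None                                   # the ray never became opaque

positions = [(0.0, 0.0, float(z)) for z in range(6)]
samples = np.array([-900.0, -50.0, 60.0, 400.0, 1200.0, 1300.0])

# With a skin-level threshold the point lands early; with a bone-level
# threshold it lands deeper, as in the abdominal CT example above.
skin = lambda v: np.clip((v + 200.0) / 100.0, 0.0, 1.0)
bone = lambda v: np.clip((v - 250.0) / 100.0, 0.0, 1.0)
print(drop_point(samples, positions, skin))
print(drop_point(samples, positions, bone))
```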
You can change the appearance of 3D Point ROIs by double-clicking on a point (Figure 7.10).
preset should be saved. An unlimited number of presets can be saved in any number of groups.
Figure 7.11: To resize the 3D view, drag the X located in the lower left corner.