Technology Portfolio

Here are some of our fields of expertise.

3D Freehand Ultrasound

The ImFusion Suite offers extensive support for working with both 2D and 3D freehand ultrasound. Tracking information is used when available, otherwise image-based reconstruction is possible.

Live visualization of freehand ultrasound during an abdominal acquisition, featuring a 3D view (left), a live 2D view with ruler (top center), and three multi-planar reconstructions showing the on-the-fly compounding.

A variety of proprietary formats are supported for loading ultrasound videos/clips together with their tracking information. Modules for image-based probe-to-sensor calibration and slice-to-volume registration make it possible to accurately determine the geometric parameters of an acquisition. We have developed a signature fast on-the-fly GPU compounding, based on our previously published work [1], which makes it possible to visualize cross-sections in any orientation, even while geometric parameters are still being optimized or US frames are still being acquired. A variety of forward and backward compounding techniques then reconstruct high-quality volumes from the freehand slices. For further processing, our general modules for fusion and multi-modal registration can be employed seamlessly. A demonstration of the combination of learning-based initialization, mono-modal ultrasound extended-field-of-view reconstruction, and multi-modal registration can be found in our 2014 publication at the VCBM workshop [2].
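To illustrate the principle of compounding tracked 2D frames into a volume, here is a minimal CPU sketch of forward compounding with NumPy. The function and parameter names are illustrative only and are not part of the ImFusion API; the actual on-the-fly implementation runs on the GPU as described in [1].

```python
# Minimal sketch of forward compounding: scatter tracked 2D ultrasound frames
# into a regular voxel grid and average overlapping contributions.
# Names and conventions are illustrative, not the ImFusion API.
import numpy as np

def compound_forward(frames, frame_to_world, volume_shape, world_to_voxel):
    """frames         : list of 2D arrays (H x W) of intensities
       frame_to_world : list of 4x4 matrices mapping pixel coords (col, row, 0, 1) to world mm
       volume_shape   : (Z, Y, X) of the output volume
       world_to_voxel : 4x4 matrix mapping world mm to voxel indices (x, y, z)"""
    accum = np.zeros(volume_shape, dtype=np.float32)
    weight = np.zeros(volume_shape, dtype=np.float32)

    for img, T in zip(frames, frame_to_world):
        h, w = img.shape
        # homogeneous pixel coordinates of the frame, one column per pixel
        cols, rows = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([cols.ravel(), rows.ravel(),
                        np.zeros(cols.size), np.ones(cols.size)])
        vox = (world_to_voxel @ T @ pix)[:3]        # pixel -> voxel coordinates
        idx = np.round(vox).astype(int)             # nearest-neighbour scatter
        ok = np.all((idx >= 0) & (idx < np.array(volume_shape)[::-1, None]), axis=0)
        z, y, x = idx[2, ok], idx[1, ok], idx[0, ok]
        np.add.at(accum, (z, y, x), img.ravel()[ok])
        np.add.at(weight, (z, y, x), 1.0)

    return accum / np.maximum(weight, 1e-6)         # average overlapping contributions
```

Backward compounding works the other way around, gathering intensities for each output voxel from nearby frames, which avoids holes in the reconstruction at the cost of more lookups.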

Note that the technology for freehand ultrasound naturally extends to other scenarios where US frames are acquired at different locations in space, for instance with motorized (so-called "wobbler") transducers or when a US probe is attached to a robot's end-effector. To this end, a bi-directional communication interface to ROS is available and has already been used in various publications.
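As a rough illustration of how such a robotic or tracked setup can stream data, the sketch below pairs ultrasound frames with poses over ROS using approximate time synchronization. The topic names and node name are placeholders and this is not the actual ImFusion ROS interface.

```python
# Sketch: pairing ultrasound frames with robot/tracker poses over ROS (ROS 1).
# Topic names and the node name are placeholders.
import rospy
import message_filters
from sensor_msgs.msg import Image
from geometry_msgs.msg import PoseStamped

def on_frame_with_pose(image_msg, pose_msg):
    # The US frame and the end-effector (or tracker) pose now share
    # approximately the same timestamp and can be handed to reconstruction.
    rospy.loginfo("frame %s paired with pose %s",
                  image_msg.header.stamp, pose_msg.header.stamp)

if __name__ == "__main__":
    rospy.init_node("us_pose_pairing")
    image_sub = message_filters.Subscriber("/ultrasound/image_raw", Image)
    pose_sub = message_filters.Subscriber("/tracker/pose", PoseStamped)
    # Approximate time synchronisation tolerates small latency differences
    # between the imaging and the tracking stream.
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, pose_sub], queue_size=50, slop=0.02)
    sync.registerCallback(on_frame_with_pose)
    rospy.spin()
```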

Within the ImFusion framework, we have also developed a fast GPU-based ultrasound simulation algorithm that synthesizes medical US images, including tissue-specific speckle patterns and anisotropic ultrasound artifacts [3]. By following the paths of rays through a 3D volume and considering the acoustic properties of the different tissues, the simulator can create US imaging artifacts such as refraction, reverberation, range distortion, and mirroring within a few seconds. Since the algorithm is fast yet capable of simulating many ultrasound imaging artifacts, it can boost other medical computing applications such as multi-modal image registration, medical ultrasound training, and learning tissue acoustic properties.
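The following simplified 2D sketch conveys the core idea of ray-based simulation: rays are marched through maps of acoustic impedance and attenuation, an echo is emitted at impedance discontinuities, and the transmitted energy is attenuated along the way. It is a didactic example only; the actual simulator [3] runs on the GPU and additionally models refraction, reverberation, mirroring, and the speckle pattern.

```python
# Sketch: core of a ray-based ultrasound simulation in 2D, for illustration only.
import numpy as np

def simulate_linear_probe(impedance, attenuation, n_rays=128, step=1.0):
    """impedance, attenuation : 2D maps (depth x width) of tissue properties."""
    depth, width = impedance.shape
    ray_cols = np.linspace(0, width - 1, n_rays).astype(int)
    echoes = np.zeros((depth, n_rays), dtype=np.float32)

    for r, col in enumerate(ray_cols):
        energy = 1.0
        z_prev = impedance[0, col]
        d = 0.0
        while d < depth - 1:
            i = int(d)
            z = impedance[i, col]
            # reflection coefficient at an impedance discontinuity
            refl = ((z - z_prev) / (z + z_prev + 1e-9)) ** 2
            echoes[i, r] = energy * refl
            # the transmitted part continues, damped by tissue attenuation
            energy *= (1.0 - refl) * np.exp(-attenuation[i, col] * step)
            z_prev = z
            d += step

    return echoes  # a full simulator would scan-convert and convolve with a speckle model
```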

Comparison between an actual ultrasound image (left) and an artificial one simulated with our algorithm from an MR image (right).


[1] A. Karamalis, W. Wein, O. Kutter, N. Navab. Fast Hybrid Freehand Ultrasound Volume Reconstruction.

[2] M. Müller, L.E.S. Helljesen, R. Prevost, I. Viola, K. Nylund, O.H. Gilja, N. Navab, W. Wein. Deriving Anatomical Context from 4D Ultrasound. Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), 2014.

[3] M. Salehi, S.A. Ahmadi, R. Prevost, N. Navab, W. Wein. Patient-specific 3D ultrasound simulation based on convolutional ray-tracing and appearance optimization.