Our powerful surface registration algorithms make it possible to register surface scans obtained with a depth sensor to MR or CT scans. The skin surface is automatically detected with sub-voxel precision, and a weighted surface alignment is then run. Different weights can be specified for different parts of the anatomy to account for different levels of confidence in structural similarity.
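
The weighting idea can be illustrated with a minimal sketch: given corresponding surface points and per-point confidence weights, a weighted least-squares rigid alignment (the core step of an ICP-style loop) is a small NumPy computation. This illustrates the general technique only, not the ImFusion implementation.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """One weighted least-squares rigid alignment step (weighted Kabsch).

    src, dst: (N, 3) corresponding points; w: (N,) per-point weights,
    e.g. higher confidence on rigid anatomy, lower on soft tissue.
    Returns R (3x3), t (3,) minimizing sum_i w_i ||R @ src_i + t - dst_i||^2.
    """
    w = w / w.sum()
    mu_s = w @ src                       # weighted centroids
    mu_d = w @ dst
    # Weighted cross-covariance between the centered point sets.
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop, correspondences would be re-estimated (e.g. by nearest-neighbor search) and this step repeated until convergence.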

Our software supports several industrial depth sensors such as the Zivid and Photoneo, medical depth sensors such as the Atracsys spryTrack 300, and RGB-D sensors such as the Intel RealSense and the Microsoft Azure Kinect out of the box.

We provide a number of algorithms for endoscopic vision applications, ranging from basics such as camera calibration, hand-eye calibration, and frame grabber integration to advanced topics such as SLAM and monocular depth estimation.

A particular focus is on run-time performance and efficient utilization of GPU resources.

State-of-the-art machine learning models are available for tasks such as:

  • Depth computation from stereo images
  • Monocular depth estimation
  • 6D tool pose estimation from endoscopy video
  • Endoscopy-specific feature matching
  • Endoscopic tool segmentation
  • Endoscopic SLAM

    Monocular Depth Estimation

    Diffusion Tensor Imaging (DTI) is one of the few imaging modalities that allows for non-invasive in-vivo insight into tissue microstructure and is particularly common in brain/neuro applications. The ImFusion DTI plugin enables the user to load and process diffusion-weighted MR images and provides state-of-the-art algorithms for visual exploration and analysis of such data.

    The ImFusion DTI toolbox offers the following features:

    • Loading of DW-MRI volume sets, annotating volumes with gradient directions
    • Fast analytic fitting of diffusion tensors to the DW-MRI data
    • Computation of eigenvectors, eigenvalues, and various anisotropy measures
    • GPU-accelerated real-time fiber tracking
    • State-of-the-art visualization of fiber tractography
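
The analytic tensor fit is commonly done as a log-linear least-squares fit of the Stejskal-Tanner signal model. A minimal per-voxel sketch (illustrative, not the plugin's implementation) might look like:

```python
import numpy as np

def fit_dti(signals, s0, bvals, bvecs):
    """Log-linear least-squares fit of the diffusion tensor.

    Stejskal-Tanner model: S_k = S0 * exp(-b_k * g_k^T D g_k).
    signals: (K,) DW signal for one voxel, s0: b=0 signal,
    bvals: (K,), bvecs: (K, 3) unit gradient directions.
    Returns the symmetric 3x3 tensor D and the fractional anisotropy (FA).
    """
    g = bvecs
    # Design matrix for d = [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2 * g[:, 0] * g[:, 1],
                         2 * g[:, 0] * g[:, 2],
                         2 * g[:, 1] * g[:, 2]]) * bvals[:, None]
    y = -np.log(signals / s0)
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    ev = np.linalg.eigvalsh(D)           # eigenvalues of the tensor
    md = ev.mean()                       # mean diffusivity
    fa = np.sqrt(1.5 * ((ev - md)**2).sum() / (ev**2).sum())
    return D, fa
```

The principal eigenvector of D (largest eigenvalue) gives the local fiber direction used by tractography.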

    Multi-modal rendering showing a co-registered CT-MRI data set, including a segmented brain tumor and DTI fiber tractography.

    A robotics toolbox for advanced research and innovative products

    The ImFusion Robotics and ROS plugins enable you to create bleeding-edge robotic applications - without trade-offs between development speed, performance, and reliability.
    Our modern, consistent APIs let you quickly build complex workflows with low development and maintenance effort. Our high-quality software components implement state-of-the-art methods, providing a solid baseline and a strong platform for scientific research. At the same time, they can be rapidly tailored and integrated into certified medical devices.
    The Robotics plugin provides a solid, comprehensive, and customizable environment:
    • Intuitive GUI and C++ API for motion planning and execution
    • Native, ROS-free integration with Franka Emika and Universal Robots devices
    • Fast integration of custom end effectors
    • Simple handling of multi-robot scenarios
    • Convenient framework for the integration of custom robots and control strategies
    • Interactive hand-eye calibration utility
    • Easy combination with the whole ImFusion portfolio for surgical navigation, freehand ultrasound, RGBD reconstruction, and more
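
Hand-eye calibration classically reduces to solving AX = XB for the unknown gripper-to-camera transform X from paired robot and camera motions. Below is a minimal NumPy sketch (axis alignment for the rotation, linear least squares for the translation); it illustrates the textbook technique, not the plugin's solver.

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues formula: rotation matrix from an axis and an angle."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rot_axis(R):
    """Unit rotation axis of a rotation matrix (angle assumed in (0, pi))."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def hand_eye(As, Bs):
    """Solve A_i X = X B_i for the gripper-to-camera transform X.

    As: robot gripper motions, Bs: corresponding camera motions, each a
    list of 4x4 homogeneous matrices. Needs >= 2 motions with
    non-parallel rotation axes. Returns (R_X, t_X).
    """
    a = np.array([rot_axis(A[:3, :3]) for A in As])
    b = np.array([rot_axis(B[:3, :3]) for B in Bs])
    # Rotation: the axes satisfy a_i = R_X b_i -> orthogonal Procrustes.
    M = b.T @ a                       # sum_i b_i a_i^T
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rx = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation: (R_Ai - I) t_X = R_X t_Bi - t_Ai, stacked least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    rhs = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, rhs, rcond=None)
    return Rx, tx
```

Production solvers additionally handle noise, outliers, and degenerate motion sets, which this sketch does not.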
    The ROS plugin gives you the best of ROS, with added convenience:
    • Fast installation: no previous ROS installation required
    • ROS1/2 agnostic API: code once, run everywhere
    • Windows support: double-click on our installer, and access all of ROS and MoveIt!
    • Deep integration: connect ImFusion Streams to ROS topics and tf, import and export rosbags, start a ROS1 master, ...
    • MoveIt! connection: plan and execute motions for an external MoveIt! instance from the ImFusion GUI or C++ SDK
    Supported OS / ROS distribution combinations:

    OS             ROS1 Noetic    ROS2 Foxy    ROS2 Humble
    Ubuntu 22.04
    Ubuntu 20.04
    Windows

    The ImFusion Suite is well equipped to support the full workflow of common interventional scenarios. For the guidance of tools such as needles, the available interfaces to tracking systems integrate with other software modules, such as fast registration to pre-interventional imaging data and live freehand ultrasound, to achieve comprehensive, interactive real-time visualization.

    The first step of a clinical intervention is typically an extensive image analysis and planning stage, often involving the segmentation of important structures and the annotation of the target point and insertion path. In the interventional theater, a tracking system comes into play. The ImFusion Suite currently supports many common proprietary optical and electromagnetic systems (including NDI Polaris(TM), NDI Aurora(TM), Ascension driveBAY/trakSTAR(TM), Atracsys FusionTrack and SpryTrack, and OptiTrack systems such as the V120), and offers a bi-directional OpenIGTLink communication interface for full flexibility.


    To establish the registration between pre-interventional images and the patient in the interventional theater, the software's registration module allows for both intensity-based registration using available on-site imaging and feature-based registration using fiducials and a pointer device.
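
The intensity-based branch can be illustrated with a toy example: choose a similarity metric that tolerates different modalities, such as mutual information, and search for the transform that maximizes it. The sketch below is restricted to 2D integer translations for brevity and is not the ImFusion implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in bits) between two equally-shaped images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return (p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum()

def register_translation(fixed, moving, search=10):
    """Exhaustive search for the integer shift maximizing MI (toy 2D case)."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best
```

Because mutual information only assumes a statistical relationship between intensities, the search succeeds even when the two images have completely different contrast, which is the key property exploited in multi-modal registration.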

    Various visualization techniques can then be employed for 3D guidance during the intervention. While tool-in-hand scenarios, for instance when manipulating a tracked tool, require a fixed scene, eye-in-hand scenarios such as tracked ultrasound probes necessitate geometrically correct blending of pre-interventional imaging data into a moving scene. In either case, the ImFusion Suite is able to provide 3D views as well as multi-modal, multi-planar reconstructions.
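
The multi-planar reconstructions mentioned above boil down to resampling the volume along an arbitrary plane. A minimal sketch using SciPy's interpolation follows; the function name and parameters are illustrative, not part of the ImFusion SDK.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mpr_slice(volume, center, u, v, size=(64, 64), spacing=1.0):
    """Sample an oblique plane from a 3D volume (multi-planar reconstruction).

    center: plane center in voxel coordinates; u, v: orthonormal in-plane
    axes. Returns a (size[0], size[1]) image, trilinearly interpolated.
    """
    u = np.asarray(u, float)
    v = np.asarray(v, float)
    i = (np.arange(size[0]) - size[0] / 2) * spacing
    j = (np.arange(size[1]) - size[1] / 2) * spacing
    # Voxel coordinates of every pixel on the plane.
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * i[None, :, None]
           + v[:, None, None] * j[None, None, :])
    return map_coordinates(volume, pts, order=1, mode='nearest')
```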
