The World Health Organization projects a global shortage of over 15 million healthcare workers by 2030, including surgeons, radiologists, and nurses. In the US, the shortfall could reach 124,000 physicians by 2034. Rising demand, aging populations, and limited human capacity are pushing healthcare systems to a breaking point.
AI-enabled robotic systems offer a path forward. From automating surgical subtasks and optimizing operating room setups to accelerating diagnostics and enabling remote procedures, robotics is helping extend care, reduce burdens on clinical staff, and improve outcomes, especially in underserved regions.
With over 1,000 FDA-cleared AI medical devices and more than 400 healthcare robotic platforms in development, a vibrant innovation ecosystem is forming. Yet real-world deployment remains difficult. Developers face key challenges such as:
- High-fidelity biomechanical simulation.
- Advanced medical sensor and imaging simulation and emulation.
- Sim-to-real transfer.
- The robotics data gap: acquiring data and integrating expert demonstrations for learning.
What’s NVIDIA Isaac for Healthcare?
NVIDIA Isaac for Healthcare is a purpose-built platform to accelerate simulation, training, and deployment of AI-enabled medical robotics. It brings the powerful NVIDIA three-computer architecture to healthcare robotics, unifying the full development stack from simulation to real-time execution.
The platform provides the tools needed for AI medical robotics development through five integrated components:
Start with simulation
From robot modeling to importing patient-specific anatomies and simulating physics-based sensors (e.g., RGB, ultrasound, force), developers can generate high-quality synthetic training data without ever interacting with a patient. For example, the GPU-accelerated ultrasound simulator produces B-mode images that are virtually indistinguishable from those of real devices.
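To illustrate the shape of such a pipeline, here is a minimal sketch of a domain-randomized data generation loop. The `simulate_bmode` function is a hypothetical stand-in for the GPU ultrasound simulator (see the i4h-sensor-simulation repository for the real interface), and the parameter names and ranges are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_bmode(probe_pose, tissue_params):
    """Hypothetical stand-in for a GPU ultrasound simulator call.
    A real simulator would ray-trace the anatomy and return a B-mode frame."""
    return rng.random((256, 256), dtype=np.float32)

dataset = []
for i in range(1000):
    # Domain randomization: vary probe placement and tissue acoustics per sample
    probe_pose = {
        "position_mm": rng.uniform(-20.0, 20.0, size=3),  # offset on the skin surface
        "tilt_deg": rng.uniform(-15.0, 15.0, size=2),     # probe tilt and rock
    }
    tissue_params = {
        "speed_of_sound": rng.uniform(1450.0, 1600.0),    # m/s, soft-tissue range
        "attenuation_db_cm_mhz": rng.uniform(0.3, 0.9),
    }
    frame = simulate_bmode(probe_pose, tissue_params)
    dataset.append((frame, probe_pose))  # image plus ground-truth pose label, no patient needed
```

Because the simulator knows the exact probe pose and tissue configuration for every frame, each image comes with perfect labels for free, which is precisely what is hard to obtain clinically.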
Apply pretrained AI
Domain-specific models like the post-trained π0 and NVIDIA Isaac GR00T N1 provide starting points for perception and control. These aren't generic models; they have been post-trained specifically for medical tasks.
Build complete workflows
Three reference applications demonstrate end-to-end implementation: robotic ultrasound scanning, surgical subtask automation, and remote telesurgery. Each includes evaluation metrics and can be customized for your specific use case.
Generate training data
When real data is scarce, tools like MAISI create anatomically correct synthetic patients, while NVIDIA Cosmos generates procedural variations for surgical scenarios. This addresses the fundamental challenge of training data availability in healthcare.
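For orientation, here is a minimal sketch of fetching MAISI through MONAI's bundle API. The bundle name used below is an assumption based on MONAI model zoo naming conventions; verify the exact name before running:

```python
# Minimal sketch: fetch the MAISI generative model via MONAI's bundle API.
# The bundle name "maisi_ct_generative" is an assumption; confirm the exact
# name in the MONAI model zoo before use.
from monai.bundle import download

download(name="maisi_ct_generative", bundle_dir="./models")
# Sampling synthetic CT volumes (and paired segmentation labels) is then
# driven by the bundle's own configs and scripts rather than a single
# Python call; see the bundle's documentation for the inference entry point.
```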
Leverage medical assets
Pre-validated 3D models of surgical equipment, anatomical structures, and hospital environments accelerate development. The asset catalog includes everything from da Vinci instruments to patient anatomy, all of which are simulation-ready.
Featured tools include:
- Autonomous workflows: Reference pipelines for robotic ultrasound imaging and surgical task automation.
- Ultrasound sensor simulation: Physics-accurate B-mode image simulation for AI training and testing.
- Sim-ready assets: Plug-and-play anatomical models and robot support (e.g., dVRK, Franka, and more underway).
- Pretrained policies: Ready-to-use π0 for ultrasound guidance, plus imitation and reinforcement learning baselines like action chunking transformer (ACT); a minimal sketch of the action-chunking idea follows this list.
- Telesurgery workflow: Edge-optimized, low-latency control pipeline with GPUDirect sensor I/O.
- Expanded model library: New π0 and GR00T N1 policies for more robust task execution.
- Cosmos Transfer for synthetic data generation: Synthetic-to-real domain adaptation for clinical imaging environments.
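The action-chunking idea behind ACT, referenced in the pretrained policies item above, is simple to sketch: instead of predicting one action per observation, the policy predicts a short chunk of future actions, which smooths execution and reduces compounding error. Below is a minimal, self-contained PyTorch illustration with a stub policy; it shows the shape of the inference loop only, not the platform's actual ACT implementation:

```python
import torch
import torch.nn as nn

CHUNK = 8      # actions predicted per observation
ACT_DIM = 7    # e.g., 6-DoF delta pose + gripper

class StubChunkPolicy(nn.Module):
    """Placeholder for a trained action-chunking transformer.
    A real ACT policy encodes images and proprioception; a tiny MLP
    stands in here so the loop runs end to end."""
    def __init__(self, obs_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, CHUNK * ACT_DIM),
        )

    def forward(self, obs):
        return self.net(obs).view(-1, CHUNK, ACT_DIM)

policy = StubChunkPolicy()
obs = torch.zeros(1, 32)  # placeholder observation

with torch.no_grad():
    for step in range(100):
        if step % CHUNK == 0:         # re-plan once per chunk
            chunk = policy(obs)[0]    # (CHUNK, ACT_DIM)
        action = chunk[step % CHUNK]  # execute the actions within the chunk
        # send `action` to the robot controller, then read the next observation
```

Real ACT inference typically overlaps chunks and temporally ensembles the overlapping predictions; the modulo scheduling above is the simplest variant.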
Highlights from Early Access projects:
Over 500 developers joined the Early Access Program, spanning use cases from surgery and imaging to patient services. Highlights include:
Moon Surgical: automating OR setup with intelligent robotic positioning
Moon Surgical is pioneering system-level automation by teaching its robot to configure itself for surgery autonomously. Using onboard cameras and a preference card-driven AI policy, the system detects trocar positions and optimizes its setup based on the surgical case and surgeon’s preferences. From autodocking at the table to deploying robotic arms in the ideal configuration, this workflow streamlines operating room setup and improves consistency across cases.
Virtual Incision: automating needle transfer on the MIRA platform
Virtual Incision demonstrated pre-clinical surgical subtask automation on their miniaturized laparoscopic platform, MIRA, by automating the needle-transfer task. Using transformer-based imitation learning, they trained an AI policy that mimics expert motions and executes with high precision, bringing us closer to scalable autonomy in constrained surgical environments.
Virtuoso Surgical: AI‑powered tissue handling with a concentric‑tube endoscopic robot
Virtuoso Surgical is bringing autonomy to their deformable, concentric-tube robots by training AI to handle delicate tasks like tissue retraction and cutting. Using internal strain feedback from simulated data in NVIDIA Isaac Sim, they are developing a policy that enables precise manipulation in soft-tissue environments.
Sovato: low-latency telerobotics with edge-optimized workflow
Sovato is advancing telerobotic surgery by implementing a latency-optimized workflow tailored for remote procedures. By using GPU-accelerated compute and sensor I/O integration at the edge, they achieved significant performance gains, bringing high-precision, real-time robotic control to geographically distributed operating environments.
Get started
Clone the repos and start building:
git clone https://github.com/isaac-for-healthcare/i4h-workflows.git
git clone https://github.com/isaac-for-healthcare/i4h-sensor-simulation.git
git clone https://github.com/isaac-for-healthcare/i4h-asset-catalog.git
Bring your own model/patient/robot/XR
Isaac for Healthcare is designed to work with your existing assets, models, and hardware. The platform provides clear integration paths for customization while maintaining performance and reliability.
The platform supports AI models in standard formats, including ONNX, NVIDIA TensorRT, PyTorch (TorchScript), and TensorFlow. Integration involves exporting your trained model, creating a Holoscan operator for inference, and connecting it to the sensor pipeline. The framework handles optimization and hardware acceleration automatically.
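To make the integration step concrete, here is a minimal sketch of a custom inference operator using the Holoscan SDK's Python API. It assumes the upstream operator emits a ready-to-use torch.Tensor, and the port names and `model_path` are illustrative, not the platform's canonical interface; consult the Holoscan SDK docs and the i4h-workflows examples for production patterns:

```python
import torch
from holoscan.core import Operator, OperatorSpec

class TorchScriptInferenceOp(Operator):
    """Wraps a TorchScript model as a Holoscan operator (illustrative sketch)."""

    def __init__(self, fragment, *args, model_path="model.pt", **kwargs):
        # `model_path` is a placeholder; point it at your exported model
        self.model = torch.jit.load(model_path).eval()
        super().__init__(fragment, *args, **kwargs)

    def setup(self, spec: OperatorSpec):
        spec.input("image")        # upstream sensor/preprocessing output
        spec.output("prediction")  # downstream visualization/control input

    def compute(self, op_input, op_output, context):
        image = op_input.receive("image")  # assumed to arrive as a torch.Tensor
        with torch.no_grad():
            prediction = self.model(image)
        op_output.emit(prediction, "prediction")
```

In an Application's compose(), you would instantiate this operator and wire it between the sensor source and a sink with add_flow, letting the framework handle scheduling and acceleration.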
- Bring your own patient: Convert your medical imaging data (CT/MRI scans) into 3D models for simulation. Using MONAI's integration tools, you can transform DICOM, NIfTI, or NRRD files with segmentation masks into USD format for use in surgical planning and training. The workflow includes mesh generation, coordinate alignment, and integration with the physics simulation (see the sketch after this list).
- Bring your own robot: Whether you have a custom surgical robot or want to modify existing platforms, the framework supports URDF import and CAD file conversion. The process includes converting your robot description to USD format, setting up kinematic chains, and integrating with the control system. Examples include replacing end effectors (like swapping a gripper for an ultrasound probe) or adding entirely new robot platforms.
- Bring your own XR device: Connect any OpenXR-compatible mixed reality headset for immersive teleoperation. The platform supports devices from Apple Vision Pro to Meta Quest through standard OpenXR interfaces. This enables stereoscopic visualization and intuitive hand tracking control for surgical training and remote procedures.
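As an illustration of the bring-your-own-patient path, the sketch below converts a binary segmentation mask into a USD mesh with marching cubes. The file names are hypothetical, and a real pipeline would also apply the scan's voxel spacing and affine so the mesh lands in world coordinates; MONAI's integration tools automate those steps:

```python
import nibabel as nib
from skimage import measure
from pxr import Usd, UsdGeom

# Load a binary segmentation mask (hypothetical file exported from CT/MRI labeling)
mask = nib.load("liver_seg.nii.gz").get_fdata()

# Extract a surface mesh; pass the scan's voxel spacing so units are physical
verts, faces, _, _ = measure.marching_cubes(mask, level=0.5, spacing=(1.0, 1.0, 1.0))

# Write the mesh to USD for use in simulation
stage = Usd.Stage.CreateNew("liver.usd")
mesh = UsdGeom.Mesh.Define(stage, "/Anatomy/Liver")
mesh.CreatePointsAttr([tuple(v) for v in verts])
mesh.CreateFaceVertexCountsAttr([3] * len(faces))  # marching cubes emits triangles
mesh.CreateFaceVertexIndicesAttr(faces.flatten().tolist())
stage.Save()
```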
Each BYO path includes detailed tutorials and example code in the i4h-workflows and i4h-asset-catalog repositories, making it straightforward to extend the platform with your specific requirements.