UNISafe: Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures (CoRL 2025)
This is a repository for Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures.
git clone https://github.com/CMU-IntentLab/UNISafe.git
cd UNISafe
The project is organized into separate branches:
dubins: 3D Dubins Car. Link
git checkout dubins
isaaclab: Block-plucking tasks implemented in NVIDIA Isaac Lab. Link
git checkout isaaclab
This repository provides the implementation of Uncertainty-aware Latent Safety Filters for avoiding out-of-distribution failures in robotics tasks using Isaac Lab.
- Install Isaac Lab
Follow the official Isaac Lab Installation Guide. (This repo uses an older Isaac Sim version, 4.2.0, while the latest version is 5.x.x. We are working on updating the code to the latest version; the update only requires changing some import paths.)
- Clone and Set Up the Environment
# Clone the repository
git clone https://github.com/CMU-IntentLab/UNISafe.git
git checkout isaaclab
cd latent_safety
# Create and activate the conda environment
conda env create -f environment.yaml
conda activate isaaclab
You can download the pretrained models:
# Download pretrained models (world model + reachability filter)
pip install gdown
gdown https://drive.google.com/uc?id=1RddRw3eVUhufuUdq_BAThjwvO1fsmTeM
unzip pretrained_models.zip
# This will create:
# - dreamer.pt (pretrained world model)
# - filter/ (reachability filter directory)
# └── model/ (filter checkpoints at different training steps)
Directory Structure After Download:
latent_safety/
├── log/ # Centralized log directory
│ ├── dreamer.pt # Pretrained world model
│ ├── filter/ # Pretrained reachability filter
│ │ └── model/
│ ├── dreamerv3/ # World model training logs
│ ├── reachability/ # Reachability training logs
└── ... (other files)
You can quickly test UNISafe using our provided Jupyter notebook!
- How it works:
- The notebook loads a sample sequence (there are three sample sequences in log/).
- For each sequence, actions are replayed in the simulator with the safety filter enabled.
- Note that the episode automatically resets when the agent either succeeds or fails.
- You can also save the episode and re-run it for further analysis.
- Launch Jupyter Lab:
jupyter lab
- Open and run:
latent_safety/safety_filter_demo.ipynb
- Follow the instructions in the notebook to:
- Select a sample sequence
- Step through the episode and watch the filter in action
- Save and reload episodes for further testing
This is the easiest way to get started and see the safety filter in action, with no coding required!
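The notebook's replay loop can be sketched as follows. This is a minimal illustration with a toy stand-in environment; `ToyEnv`, `replay_episode`, and `filter_fn` are hypothetical names, not the repo's actual API.

```python
# Sketch of the demo notebook's replay loop: logged actions are stepped
# through the simulator while the safety filter monitors each step.
class ToyEnv:
    """Toy stand-in for the Isaac Lab environment (illustrative only)."""
    def reset(self):
        self.t = 0
        return 0.0

    def step(self, action):
        self.t += 1
        done = self.t >= 3  # episode ends on success or failure
        return float(self.t), done

def replay_episode(env, actions, filter_fn):
    obs = env.reset()
    executed = []
    for a in actions:
        a_exec = filter_fn(obs, a)   # the filter may override the action
        obs, done = env.step(a_exec)
        executed.append(a_exec)
        if done:
            obs = env.reset()        # automatic reset, as in the notebook
    return executed

# Replay four logged actions with a pass-through filter.
executed = replay_episode(ToyEnv(), [0.1, -0.2, 0.3, 0.4], lambda obs, a: a)
```

With a pass-through filter every logged action is executed unchanged; the notebook swaps in the learned reachability filter instead.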
Experience the safety filter interactively:
# Run teleoperation with safety filter
python latent_safety/teleop_dreamer/filter_with_dreamer_failure.py \
--enable_cameras \
--model_path "latent_safety/log/dreamer.pt" \
--reachability_model_path "latent_safety/log/filter"
# Controls:
# - Use keyboard (WASD, QE, RF) or SpaceMouse for teleoperation
# - Press K to save current episode
# - Press L to reset without saving
# - Watch the filter intervene when it detects unsafe actions
For training your own models from scratch:
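The intervention logic during teleoperation follows the usual least-restrictive filtering pattern, sketched below. The function and the sign convention (value below threshold means unsafe) are assumptions for illustration, not the repo's exact interface.

```python
# Least-restrictive safety filter sketch: pass the operator's action
# through unless the learned reachability value function flags it as
# unsafe, in which case fall back to a safe action.
def filtered_action(value_fn, state, task_action, fallback_action, threshold=0.0):
    if value_fn(state, task_action) < threshold:
        return fallback_action   # intervene: action deemed unsafe
    return task_action           # pass through: action deemed safe

# Toy value functions standing in for the learned filter.
safe = filtered_action(lambda s, a: 1.0, None, "push", "retreat")
blocked = filtered_action(lambda s, a: -1.0, None, "push", "retreat")
```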
You can collect your own demonstrations or use our provided datasets.
python latent_safety/takeoff/collect_demonstrations.py --headless --enable_cameras
- Press K to save the current episode
- Press L to reset without saving
Download our curated datasets:
- Complete Dataset (successes + failures)
- Success-Only Dataset (successes only)
# Download and extract dataset
unzip dataset.zip -d datasets/
Train the world model (Dreamer) with both dynamics and policy learning:
python latent_safety/train_dreamer.py --headless --enable_cameras
Configuration: Update dreamerv3_torch/configs.yaml:
# For offline training (model + policy from demonstrations)
offline_traindir: ["path/to/your/dataset"]
model_only: true
# For online training (model + policy through environment interaction)
model_only: false
Optional Ensemble Fine-tuning: After world model training, fine-tune the uncertainty ensemble:
- Uncomment agent.train_uncertainty_only(training=True) in train_dreamer.py
- Comment out agent.train_model_only(training=True)
- Train for an additional 200K iterations
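After the edit, the relevant call site in train_dreamer.py looks roughly like the snippet below. The `Agent` stub here only mirrors the two entry points so the swap is concrete; the real class lives in the repo.

```python
# Illustrative stub of the two training entry points referenced above.
# The real Agent is defined in the repo; this sketch only shows which
# call is active after the edit.
class Agent:
    def __init__(self):
        self.mode = None

    def train_model_only(self, training=True):
        self.mode = "model"          # initial world-model training phase

    def train_uncertainty_only(self, training=True):
        self.mode = "uncertainty"    # ensemble fine-tuning phase

agent = Agent()
# agent.train_model_only(training=True)      # comment out after pretraining
agent.train_uncertainty_only(training=True)  # uncomment to fine-tune the ensemble
```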
Train safety filters using the learned world model:
python latent_safety/reachability/train_reachability_sac_with_failure_prediction.py \
--headless \
--enable_cameras \
--model_path "path/to/dreamer.pt" \
--configs failure_filter
python latent_safety/reachability/train_reachability_sac_uncertainty_only.py \
--headless \
--enable_cameras \
--model_path "path/to/dreamer.pt" \
--configs uncertainty_filter
Configuration: Update latent_safety/reachability/config.yaml:
# Paths
model_path: "path/to/your/dreamer.pt"
offline_traindir: ["path/to/your/dataset"]
# Training parameters
maxUpdates: 200000
checkPeriod: 10000
The evaluation script provides comprehensive safety metrics. Important: The evaluation uses the policy learned during world model training, not a separate pretrained policy.
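To illustrate the kind of safety statistics such an evaluation reports, here is a hypothetical post-processing sketch; the script's actual metric names and log format may differ.

```python
# Hypothetical per-episode evaluation records: compute success rate and
# the fraction of steps where the filter intervened (illustrative only).
episodes = [
    {"success": True,  "interventions": 2, "steps": 50},
    {"success": True,  "interventions": 0, "steps": 40},
    {"success": False, "interventions": 5, "steps": 60},
]

success_rate = sum(e["success"] for e in episodes) / len(episodes)
intervention_rate = (
    sum(e["interventions"] for e in episodes) / sum(e["steps"] for e in episodes)
)
```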
python latent_safety/reachability/evaluate_reachability_filter.py \
--model_path "latent_safety/log/dreamer.pt" \
--policy_model_path "learned_dreamer_policy_path" \
--reachability_model_path "latent_safety/log/filter" \
--num_episodes 1000 \
--is_filter true
python latent_safety/teleop_dreamer/filter_with_dreamer_failure.py \
--enable_cameras \
--model_path "latent_safety/log/dreamer.pt" \
--reachability_model_path "latent_safety/log/filter"
This implementation builds on the following open-source projects:
- dreamerv3-pytorch - World model implementation
- HJReachability - Reachability analysis
- PENN - Uncertainty estimation
- Isaac Lab - Robotics simulation platform
If you use this work in your research, please cite:
@article{seo2025uncertainty,
title={Uncertainty-aware Latent Safety Filters for Avoiding Out-of-Distribution Failures},
author={Seo, Junwon and Nakamura, Kensuke and Bajcsy, Andrea},
journal={Conference on Robot Learning (CoRL)},
year={2025}
}