Research

Research Vision: develop safe, deployable, and trustworthy robot autonomy that enables robots to work capably and confidently alongside humans.

Product Vision: a general, lifelong, safe, online learning framework that allows performant robots to be deployed at scale and to adapt continuously to ensure safety.

Research Philosophy: Our research follows a theory-algorithm-application loop. We develop application-motivated theory and use that to design provably sound algorithms that we deploy on real-world systems. This iterative process deepens theoretical understanding and applied expertise, while ensuring that research remains rigorously accurate, computationally feasible, and practically impactful.

Collaboration philosophy: We value an open exchange of code and ideas. By working closely with industry and academic partners, our group bridges the gap between formal methods and real-world robotics to identify impactful problems and deploy solutions across diverse domains.

If you’re interested in joining the lab, please click here for more information.

For an in-depth understanding of our research, check out Prof. Cosner’s thesis, Dynamic Safety Under Uncertainty: A Control Barrier Function Approach, or the recording of his defense presentation.

Research Directions:


Human-Aligned Safety

Embedding human understanding and social awareness into the mathematical definitions of safety.

Despite the rigorous mathematical definitions of “safety” from robotics and control theory, true safety is inherently a subjective human concept. Determining tolerable levels of risk, allocating responsibility in multi-agent contexts, and prioritizing safety across competing objectives all require human value judgments. While many methods provide formal safety guarantees once these subjective components have been projected onto numerical values, there remains a substantial gap between how humans understand safety and how algorithms encode it.

This research direction seeks to bridge that gap by incorporating subjective human input into formal safety frameworks. To do so, we identify flexibility in safety algorithms that lets us retain guarantees while modulating behavior, creating “knobs” that can be tuned to better reflect human preferences. Next, we develop algorithms to tune these knobs using demonstrations, feedback, and high-level instructions. This approach enables systems to balance risk with performance, act with social awareness and shared responsibility, and assign variable safety priorities to different obstacles in the environment. Ultimately, our goal is to align mathematical definitions with human expectations, creating adaptive, socially aware, and context-dependent frameworks for safety.
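
To make the “knobs” idea concrete, here is a minimal sketch (our illustration, not the method of the papers below) of a standard control barrier function (CBF) safety filter for a hypothetical single-integrator robot. The class-K gain alpha is one such knob: small values keep the robot conservatively far from the constraint boundary, while large values permit closer, more aggressive approaches.

```python
import numpy as np

# Minimal CBF safety-filter sketch for a single-integrator robot, x_dot = u.
# Safe set: h(x) = ||x - x_obs||^2 - r^2 >= 0 (stay outside a disk obstacle).
# CBF condition: grad_h(x) @ u >= -alpha * h(x); alpha is the tunable "knob".

x_obs, r = np.array([2.0, 0.0]), 1.0

def h(x):
    return (x - x_obs) @ (x - x_obs) - r**2

def grad_h(x):
    return 2.0 * (x - x_obs)

def safety_filter(x, u_des, alpha):
    """Closed-form solution of the single-constraint CBF-QP:
    minimize ||u - u_des||^2  s.t.  grad_h(x) @ u >= -alpha * h(x)."""
    g = grad_h(x)
    slack = g @ u_des + alpha * h(x)    # constraint value at the desired input
    if slack >= 0.0:                    # desired input is already safe
        return u_des
    return u_des - slack * g / (g @ g)  # minimal correction onto the constraint

x = np.array([0.0, 0.05])
u_des = np.array([1.0, 0.0])            # nominal input: drive at the obstacle
for alpha in (0.5, 5.0):                # conservative vs. permissive tuning
    print(f"alpha={alpha}: filtered u = {safety_filter(x, u_des, alpha)}")
```

Learning human-aligned safety then amounts to tuning parameters like alpha, and more generally the shape of h itself, from demonstrations, preferences, or instructions.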

Selected papers on this topic:
Safety-Aware Preference-based Learning for Safety-Critical Control
Ryan K. Cosner, Maegan Tucker, Andrew Taylor, Kejun Li, Tamas Molnar, Wyatt Ubellacker, Anil Alan, Gábor Orosz, Yisong Yue, and Aaron Ames
Learning for Dynamics and Control Conference (L4DC), 2022.
Learning Responsibility Allocations for Safe Human-Robot Interaction with Applications to Autonomous Driving
Ryan K. Cosner, Yuxiao Chen, Karen Leung, and Marco Pavone
International Conference on Robotics and Automation (ICRA), 2023.
Risk-Aware Safety Filters with Poisson Safety Functions and Laplace Guidance Fields
Gilbert Bahati, Ryan M. Bena, Meg Wilkinson, Pol Mestres, Ryan K. Cosner, Aaron D. Ames
American Control Conference (ACC), 2026 (submitted).
Future directions:
  • Develop safe online adaptation methods to identify human-aligned parameters.
  • Incorporate environmental semantics and natural language as conditioning labels to modulate safety definitions.
  • Explore the distinction between safety as a system constraint and safety as a performance metric.

Rapidly Synthesizing Dynamically Feasible Safety Constraints

Developing performant, environment-aware safety constraints that reflect real-world robot dynamics.

To enforce safety on a robotic system, we require safety definitions that are physically compatible with the robot’s dynamics. In particular, we must identify which states are unsafe and which states are doomed to become unsafe regardless of the robot’s actions (e.g., skydiving without a parachute is unsafe from the moment of the jump, even though the actual hazard is the impact with the ground). In general, identifying this “backward-reachable set of the hazardous states” is computationally expensive, limiting our ability to generate safety constraints online during deployment.
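
Formally (in our notation here; a standard definition rather than one specific to the papers below), the doomed states form the inevitable backward-reachable set of the hazard set H:

```latex
% A state is "doomed" if every admissible input signal still drives
% the system into the hazard set H at some future time.
\mathcal{BR}(\mathcal{H}) = \left\{ x_0 \;\middle|\; \forall\, u(\cdot) \in \mathcal{U},\ \exists\, t \ge 0 \ \text{s.t.}\ x(t;\, x_0,\, u) \in \mathcal{H} \right\}
```

Computing this set exactly, e.g., via Hamilton-Jacobi reachability, scales poorly with state dimension, which motivates the decomposition described next.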

This research direction seeks to enable the online synthesis of dynamically feasible safety constraints. We achieve this by avoiding the full reachability problem and instead separating the problem into two more tractable subproblems: (1) generating safe trajectories for a simplified model and (2) designing trajectory tracking controllers with robust stability properties. Together, these components allow us to rapidly synthesize safety constraints for complex robotic systems. Ultimately, our goal is to build a framework that will enable the online generation of dynamically feasible safety constraints in complex, unstructured, real-world environments.
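
A toy sketch of this decomposition (illustrative assumptions throughout: a single-integrator planning model, a hypothetical worst-case tracking-error bound E_MAX, and a deliberately naive planner; none of these come from the papers below):

```python
import numpy as np

# Subproblem (1): plan a trajectory for a simplified single-integrator model.
# Subproblem (2): assume a robust tracking controller keeps the full-order
# robot within E_MAX of the plan, so planning against obstacles inflated by
# E_MAX certifies safety for the full-order system.

E_MAX = 0.2                                   # assumed tracking-error bound
obstacle, radius = np.array([2.0, 1.0]), 0.5

def safe_for_full_order(p):
    """A planned point is safe for the full-order system if it clears the
    obstacle inflated by the tracking-error bound."""
    return np.linalg.norm(p - obstacle) >= radius + E_MAX

def plan(start, goal, step=0.05):
    """Greedy single-integrator planner: step toward the goal, and slide
    tangentially around the inflated obstacle when the direct step is unsafe."""
    path, p = [start.copy()], start.copy()
    while np.linalg.norm(goal - p) > step:
        direct = p + step * (goal - p) / np.linalg.norm(goal - p)
        if safe_for_full_order(direct):
            p = direct
        else:
            away = (p - obstacle) / np.linalg.norm(p - obstacle)
            p = p + step * np.array([-away[1], away[0]])  # tangential slide
        path.append(p.copy())
    return np.array(path)

path = plan(np.zeros(2), np.array([4.0, 2.0]))
print("all waypoints safe for the full-order system:",
      all(safe_for_full_order(q) for q in path))
```

The expensive reachability computation never appears: safety of the complex system reduces to a geometric check on the simple model plus a tracking-error bound supplied by the controller design.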

Selected papers on this topic:
Geometry-Aware Predictive Safety Filters on Humanoids: From Poisson Safety Functions to CBF Constraint MPC
Ryan M. Bena, Gilbert Bahati, Blake Werner, Ryan K. Cosner, Lizhi Yang, Aaron D. Ames
IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2025.
**Best Oral Paper Finalist**
Chapter 7.4 of Dynamic Safety Under Uncertainty: A Control Barrier Function Approach
Ryan K. Cosner
Caltech Theses, 2025.
Control Barrier Function Synthesis for Nonlinear Systems with Dual Relative Degree
Gilbert Bahati, Ryan K. Cosner, Max H. Cohen, Ryan M. Bena, Aaron D. Ames
Conference on Decision and Control (CDC), 2025.
Model-Free Safety-Critical Control for Robotic Systems
Tamas G. Molnar, Ryan K. Cosner, Andrew W. Singletary, Wyatt Ubellacker, Aaron D. Ames
IEEE Robotics and Automation Letters (RAL), 2021.
Constructive Safety-Critical Control: Synthesizing Control Barrier Functions for Partially Feedback Linearizable Systems
Max H. Cohen, Ryan K. Cosner, Aaron D. Ames
IEEE Control Systems Letters (L-CSS), 2024.
Future directions:
  • Translate high-level, human-language commands into dynamically feasible safety constraints.
  • Use latent-space safety representations as a data-driven, computationally tractable means to identify control-invariant sets for complex systems.
  • Incorporate semantic understanding of the environment into the design of safe sets and safe controllers.
  • Explore the closed-loop feasibility gains achieved by combining model predictive control (MPC), control barrier functions (CBF), and reinforcement learning (RL)-based safety methods, and study the role of safety constraints during RL training versus deployment.

Safety Guarantees Under Uncertainty

Bringing rigorous mathematical guarantees closer to real-world deployment.

Many theoretical methods in robotics and control offer “safety guarantees” that can inspire great confidence when deploying systems. However, these guarantees are often built on foundational assumptions that rarely hold in practice, e.g., perfect perception and flawless models of the system and environment. When these assumptions fail, the guarantees themselves can collapse, leading to catastrophic failures in practice despite guaranteed safety in theory.

This research direction seeks to bridge the gap between theory and deployment by explicitly considering realistic sources of uncertainty and relaxing idealized assumptions. We use tools from robust and stochastic control theory to analyze how safety guarantees degrade under uncertainty and to design mechanisms that mitigate that degradation. Ultimately, our goal is to develop well-calibrated theoretical guarantees that remain reliable when brought into the real world.
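
A small Monte Carlo sketch of this phenomenon (a 1-D toy of our own construction, not the method of the papers below): a discrete-time CBF condition enforced under a perfect-model assumption degrades badly under additive noise, while adding a robustness margin restores reliability at the price of conservatism.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy: x_{k+1} = x_k + u_k + w_k with noise w_k, safe set {x >= 0},
# barrier h(x) = x. The idealized filter enforces h(x_{k+1}) >= ALPHA*h(x_k)
# assuming w = 0; the robust variant adds a margin for the disturbance.

ALPHA, SIGMA, K, TRIALS = 0.9, 0.05, 50, 10000

def rollout(margin):
    x = 1.0
    for _ in range(K):
        u_des = -0.5                        # nominal input pushes at the boundary
        u_min = (ALPHA - 1.0) * x + margin  # smallest input meeting the condition
        u = max(u_des, u_min)               # safety filter: minimal intervention
        x = x + u + SIGMA * rng.standard_normal()
        if x < 0.0:
            return False                    # safety violated
    return True

for margin in (0.0, 3.0 * SIGMA):           # idealized vs. robustified filter
    p_safe = sum(rollout(margin) for _ in range(TRIALS)) / TRIALS
    print(f"margin={margin:.2f}: empirical P(safe for {K} steps) = {p_safe:.3f}")
```

The idealized filter is “guaranteed safe” only in the noiseless model it assumed; quantifying and budgeting for the disturbance is what makes the guarantee meaningful at deployment.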

Selected papers on this topic:
Probabilistic Control Barrier Functions: Safety in Probability for Discrete-Time Stochastic Systems
Pol Mestres, Blake Werner, Ryan K. Cosner, Aaron D. Ames
American Control Conference (ACC), 2026 (submitted).
Robust Safety under Stochastic Uncertainty with Discrete-Time Control Barrier Functions
Ryan K. Cosner, Preston Culbertson, Andrew J. Taylor, and Aaron D. Ames
Robotics: Science and Systems (RSS), 2023.
Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions
Sarah Dean, Andrew J. Taylor, Ryan K. Cosner, Benjamin Recht, Aaron D. Ames
Conference on Robot Learning (CoRL), 2021.
**Best Student Paper Finalist**
Future directions:
  • Analyze how uncertainty in perception, environment segmentation, and system dynamics propagates into safety guarantees.
  • Model assumptions and limitations of the environment in social, multi-agent scenarios, and quantify their effects on closed-loop safety.
  • Define sensor requirements based on backward reachability of unsafe sets and sensor precision/accuracy.

Safety with Learned Dynamics Residuals

Combining the extrapolation strengths of model-based control with the interpolation power of learning for accurate behavior in low-data regimes.

Modern methods for dynamic safety typically rely on a system model. While these models are invaluable, no model is perfect, and even small imperfections can undermine a system’s ability to ensure safety. As the aphorism goes, “all models are wrong, but some are useful.” To make our models more useful, we strive to make them better reflect the behavior of real-world robots. However, even the most complex models miss important real-world details, and increasing their complexity often comes at a steep computational cost.

This research direction seeks to improve the effectiveness of our model-based control methods by enhancing first-principles dynamics models with data-driven corrections. By learning dynamics residuals (i.e., the difference between the dynamics model and the real-world system) we can systematically close the gap between our theoretical analysis, simulation testing, and real-world performance. This approach increases confidence in deployment and reduces development time by shrinking the sim-to-real gap. Ultimately, our goal is to improve the real-world efficacy of model-based techniques through data-driven refinement, while retaining the interpretability and structure that the model-based approaches provide.
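
A minimal sketch of the idea (toy 1-D dynamics with an assumed drag term and a hand-picked feature basis, chosen for illustration; the papers below use richer generative and episodic-learning approaches): regress the one-step residual between the nominal model and the “real” system, then add the learned correction back onto the model.

```python
import numpy as np

rng = np.random.default_rng(1)

# True system:    v' = v + DT * (u - C_DRAG * v * |v|) / M
# Nominal model:  v' = v + DT * u / M        (drag left unmodeled)
# We fit the residual (true minus model) on simple state features.

DT, M, C_DRAG = 0.05, 1.0, 0.8

def true_step(v, u):
    return v + DT * (u - C_DRAG * v * np.abs(v)) / M

def model_step(v, u):
    return v + DT * u / M

# Collect transitions from the "real" system and compute residual targets.
v = rng.uniform(-3.0, 3.0, size=500)
u = rng.uniform(-2.0, 2.0, size=500)
residual = true_step(v, u) - model_step(v, u)     # what the nominal model misses

# Least-squares fit of the residual on hand-picked features of the state.
features = np.stack([v, v * np.abs(v), v**3], axis=1)
theta, *_ = np.linalg.lstsq(features, residual, rcond=None)

def corrected_step(v, u):
    phi = np.stack([v, v * np.abs(v), v**3], axis=-1)
    return model_step(v, u) + phi @ theta

# Compare one-step prediction errors on fresh data.
v_t, u_t = rng.uniform(-3, 3, 200), rng.uniform(-2, 2, 200)
err_model = np.abs(model_step(v_t, u_t) - true_step(v_t, u_t)).mean()
err_corr = np.abs(corrected_step(v_t, u_t) - true_step(v_t, u_t)).mean()
print(f"nominal model error: {err_model:.4f}, with learned residual: {err_corr:.2e}")
```

The corrected model can then be used wherever the nominal model was, e.g., inside a safety condition or a simulation rollout, while retaining the structure and interpretability of the first-principles model.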

Selected papers on this topic:
Generative Modeling of Residuals for Real-time Risk-Sensitive Safety with Discrete-Time Control Barrier Functions
Ryan K. Cosner, Igor Sadalski, Jana K. Woo, Preston Culbertson, Aaron D. Ames
International Conference on Robotics and Automation (ICRA), 2024.
Episodic Learning for Safe Bipedal Locomotion with Control Barrier Functions and Projection-to-State Safety
Noel Csomay-Shanklin*, Ryan K. Cosner*, Min Dai*, Andrew J. Taylor, Aaron D. Ames
Learning for Dynamics and Control (L4DC), 2021.
SHIELD: Safety on Humanoids via CBFs in Expectation on Learned Dynamics
Lizhi Yang, Blake Werner, Ryan K. Cosner, David Fridovich-Keil, Preston Culbertson, Aaron D. Ames
International Conference on Intelligent Robots and Systems (IROS), 2025.
Future directions:
  • Investigate the efficacy of learned dynamics residuals by identifying system characteristics that make residuals easier or harder to learn, and conditions where learning may be counterproductive.
  • Close the sim-to-real loop using generative modeling techniques to iteratively improve simulators, better represent real-world deployment, and enhance robot performance—especially for systems that are difficult to simulate, such as non-rigid or highly dynamic robots.