Safety layer for embodied general intelligence.

We are building the safety layer for embodied artificial intelligence.

The real obstacle to putting robots into everyday human spaces isn’t how smart they are; it’s whether we can trust them when things stop going as planned. A robot in a home or a hospital will constantly face situations it has never seen before. What matters in those moments is not cleverness but the ability to pause, hesitate, and refuse to act when the situation is unclear or unsafe.

We think safety has to exist as its own layer. Before any action reaches motors and joints, it should be checked against basic constraints: geometry, physical limits, and how confident the system actually is about what it sees. If the robot can’t explain why an action is safe, it shouldn’t take it. Acting should be something that must be justified, not something that happens by default.
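The idea of a separate layer that must approve every action can be sketched in a few lines. This is a hypothetical illustration, not our actual system: the types, thresholds, and check names (`justify`, `JOINT_LIMIT`, `MIN_CONFIDENCE`, and so on) are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    joint_targets: list[float]    # desired joint angles, radians

@dataclass
class WorldEstimate:
    min_obstacle_distance: float  # meters, from perception
    confidence: float             # perception confidence in [0, 1]

# Illustrative thresholds (assumed values, not real robot parameters).
JOINT_LIMIT = 2.8      # rad, symmetric limit per joint
MIN_CLEARANCE = 0.10   # m, required clearance to the nearest obstacle
MIN_CONFIDENCE = 0.9   # refuse to act below this perception confidence

def justify(action: Action, world: WorldEstimate) -> tuple[bool, str]:
    """Return (approved, reason). The default answer is 'no':
    an action passes only if every check produces a justification."""
    if any(abs(q) > JOINT_LIMIT for q in action.joint_targets):
        return False, "joint target exceeds physical limits"
    if world.min_obstacle_distance < MIN_CLEARANCE:
        return False, "insufficient clearance to nearest obstacle"
    if world.confidence < MIN_CONFIDENCE:
        return False, "perception confidence too low; pausing"
    return True, "within limits, clear workspace, confident estimate"

# An action that is kinematically fine but too close to an obstacle
# is refused, with a reason a human can read.
approved, reason = justify(
    Action(joint_targets=[0.3, -1.2]),
    WorldEstimate(min_obstacle_distance=0.05, confidence=0.95),
)
```

The point of the sketch is the shape, not the numbers: the gate sits between the policy and the motors, every refusal carries an explicit reason, and approval is the exception that must be earned.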

Our goal is simple and ambitious: make it possible for robots to be trusted in the real world. If embodied intelligence is going to scale beyond factories and labs, safety must scale with it, and we’re here to build the foundation that makes that future possible.