AI Safety

How do we ensure that intelligent systems do what we want, without purposefully or accidentally causing harm to humans? We are pursuing a multidisciplinary set of approaches, including provable behavioural guarantees for model-based RL agents, zero-shot cooperation in RL systems, and interpretability of what models are learning.

Far-UVC disinfection

We're interested in understanding the processes by which 222-nm ultraviolet light inactivates viruses while leaving human tissue unharmed, and in finding effective routes to large-scale deployment.

Magic LAMP

We're interested in designing a low-cost, simple-to-use platform for LAMP (loop-mediated isothermal amplification) reactions, so that generalized diagnostic capabilities become more widespread. We envision a world where it is both cheap and easy to run a panel of tests, so one can swiftly determine the exact virus behind an infection.

Multi-agent governance

Can we use multi-agent RL scenarios to measure the effectiveness of different forms of government, and even to discover new ones?
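As a toy sketch of this idea (entirely illustrative, not code from the project — the agents, payoffs, and institution names here are our own assumptions), independent Q-learning agents can play a repeated public-goods game under different hypothetical institutions, and average welfare can then be compared across the rules:

```python
import random

def simulate(institution="none", n_agents=4, rounds=2000, seed=0):
    """Return average per-round group welfare under a governance rule.

    Each round, every agent either keeps an endowment of 1 or contributes
    it to a shared pot, which is multiplied and split equally. Under the
    hypothetical "sanction" institution, non-contributors are fined.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_agents)]  # Q-values: [keep, contribute]
    eps, alpha = 0.1, 0.1      # epsilon-greedy exploration, learning rate
    multiplier = 1.6           # pot multiplier (< n_agents, so defection tempts)
    fine = 1.5                 # penalty for keeping, under "sanction"
    welfare = 0.0
    for _ in range(rounds):
        acts = [rng.randrange(2) if rng.random() < eps
                else int(q[i][1] >= q[i][0])
                for i in range(n_agents)]
        share = multiplier * sum(acts) / n_agents
        for i, a in enumerate(acts):
            payoff = share + (1 - a)           # keepers retain their endowment
            if institution == "sanction" and a == 0:
                payoff -= fine                 # institution fines free-riders
            q[i][a] += alpha * (payoff - q[i][a])
            welfare += share + (1 - a)         # welfare: payoffs before fines
    return welfare / rounds
```

In this sketch the fine makes contributing individually rational, so the learned behaviour (and hence group welfare) differs between `simulate("none")` and `simulate("sanction")`, giving a crude quantitative comparison of the two institutions.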

Interested in joining any of these projects? Email us at