AI Safety

How do we ensure that intelligent systems do what we want without purposefully or accidentally harming humans? Our current approach explores provable behavioural guarantees for model-based reinforcement learning (RL) agents, such as robots.

Far-UVC disinfection

We're interested in understanding the processes by which 222 nm ultraviolet light inactivates viruses without harming humans, and in finding effective ways to deploy it at scale.

Magic LAMP

We're interested in designing a low-cost, simple-to-use platform for LAMP (loop-mediated isothermal amplification) reactions so that general-purpose diagnostic capabilities become more widespread. We envision a world where running a panel of tests is cheap and easy enough that one can swiftly identify the exact virus behind an infection.


Can we use multi-agent RL scenarios to measure the effectiveness of different forms of government? We think so!

Interested in joining any of these projects? Email us at