Note: We are not currently accepting applications while we decide on the structure of the next iteration of our program. We'll post an update when applications reopen.
How do we ensure that intelligent systems do what we want, without deliberately or accidentally causing harm to humans? We are pursuing an exciting set of multidisciplinary approaches, including provable guarantees on the behavior of model-based RL agents, zero-shot cooperation in RL systems, and interpretability of what models are learning.
We're interested in designing a low-cost, simple-to-use platform for LAMP reactions so that generalized diagnostic capabilities become more widespread. We envision a world where running a panel of tests is cheap and easy enough that one can swiftly determine the exact virus behind an infection.
Interested in collaborating on any of these projects? Email us at hello@cavendishlabs.org.