To empower ambitious, developing AI safety researchers to produce impactful research through facilitating their mentorship, collaboration, and exploration.
MATS (ML Alignment & Theory Scholars) is an initiative focused on advancing AI safety research. By addressing the need to mitigate risks from unaligned AI, MATS enables individuals to make significant contributions to the field.
During a 10-week program, scholars engage in cutting-edge research, attend seminars by top AI safety researchers, and participate in workshops and networking events. This approach fosters academic and professional growth, ensuring scholars receive essential support, including financial backing for their research pursuits.
Since its inception, MATS has received support from prominent organizations like Open Philanthropy and Berkeley Existential Risk Initiative, highlighting its key role in AI safety. Alumni have made significant impacts in industry giants such as Google DeepMind and OpenAI and in leading academic groups at UC Berkeley and MIT.
Ryan Kidd
(Co-Founder) Also the co-founder of the London Initiative for Safe AI (LISA). Prior to founding LISA and MATS, Ryan spent a short period as a Research Scholar at the Stanford Existential Risks Initiative and over two years as a Physics Tutor at The University of Queensland.
Christian Smith
https://www.linkedin.com/in/christian-lee-smith/
(Co-Founder) Also the co-founder of the London Initiative for Safe AI. Previously worked as an Event Coordinator at the Centre for Effective Altruism, as an Operations Generalist at Redwood Research, and as an Instructor/Organizer at Bit by Bit Coding.