The ERA Team
-
Nandini Shiralkar
FOUNDER & EXECUTIVE DIRECTOR
-
Robert Harling
ASSOCIATE DIRECTOR
-
Olivia Benoit
PROGRAMME MANAGER
-
Stephen Robcraft
OPERATIONS MANAGER
-
Richard Moulange
TECHNICAL GOVERNANCE RESEARCH MANAGER
Richard is an AI–Biosecurity Fellow at the Centre for Long-Term Resilience and a PhD candidate in Biomedical Machine Learning at the University of Cambridge. He previously served as a Summer Research Fellow at the Centre for the Governance of AI, co-authoring papers on risk-benefit analysis for open-source AI and the responsible governance of biological design tools. His academic research centers on out-of-distribution robustness in biomedical machine learning models. Richard holds both Bachelor's and Master's degrees from the University of Cambridge.
-
Morgan Simpson
AI GOVERNANCE RESEARCH MANAGER
Morgan joined ERA after research fellowships with the Centre for the Governance of AI and the Stanford Existential Risk Initiative. He is currently drafting two white papers for the Oxford Martin School AI Governance Initiative. Morgan holds an M.A. in Science and International Security from King's College London and a B.A. in Politics, Philosophy, and Economics from the University of York.
-
Yulu Niki Pi
AI GOVERNANCE RESEARCH MANAGER
Yulu is a PhD researcher at the Centre for Interdisciplinary Methodologies at the University of Warwick and contributes to the IN-DEPTH EU AI Toolkit project for the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. Her previous experience includes work with governmental and international organizations such as UNICEF and the World Meteorological Organization.
-
Rudolf Laine
AI SAFETY RESEARCH MANAGER
Rudolf recently led an LLM evaluation project at Owain Evans's lab. Prior to this, he completed the MATS research internship and earned both a master's and an undergraduate degree in Computer Science from the University of Cambridge.
-
Fazl Barez
AI SAFETY RESEARCH MANAGER
Fazl is a Research Fellow at the Torr Vision Group, University of Oxford, where he focuses on safety and interpretability in AI systems. He is affiliated with the Centre for the Study of Existential Risk and the Krueger AI Safety Lab at the University of Cambridge, as well as the Future of Life Institute.