Welcome to ERA
We are a global network of researchers and entrepreneurs working to understand and reduce risks from frontier AI. We do this by:
Supporting top talent across AI governance, technical AI safety, technical AI governance (TechGov), and adjacent domains to produce concrete research, backing them with funding, mentorship, and dedicated research management support.
Building a high-trust community and talent pipeline, helping ERA alumni transition into high-impact roles across research, policy, and entrepreneurship for mitigating risks from frontier AI.
Partnering with universities, institutes, and frontier-AI organisations worldwide to accelerate work across our focus areas.
Our History
The ERA Fellowship has run every summer since 2021. So far, we have supported over 85 early-career researchers from 10+ countries through our programme, with projects spanning AI safety, biosecurity, nuclear security, and extreme climate change mitigation.
In 2024, we shifted our focus to AI safety and governance research. Our alumni have used their time and experience at ERA to secure technical AI safety and policy positions in governments, AI research centres, and influential think tanks.
Our Mission
Frontier AI labs are developing increasingly powerful systems that offer extraordinary benefits, but also introduce unprecedented risks. To navigate these challenges safely, we need coordinated expertise across technical, governance, and policy domains. At ERA, we are building the community and knowledge base necessary to mitigate catastrophic risks from frontier AI.
Through the Cambridge ERA:AI Fellowship, we identify and support talented researchers and entrepreneurs, providing them with targeted mentorship, research management support, and institutional connections to drive tangible progress on important AI safety and governance challenges. With a distinctive focus on technical governance, we facilitate high-impact collaborations where they are most urgently needed. Our alumni now lead work at RAND, the UK AI Security Institute, and other key institutions actively shaping the future of AI.
Beyond our fellowship programmes, we are cultivating a robust research ecosystem and infrastructure to ensure that advanced AI systems remain safe, beneficial, and aligned with humanity’s best interests.
Meet the Team
Programmes Team

Jonathan Dannevig
PROGRAMME DIRECTOR
Jonathan is an engineer and economist with a Harvard MBA and 15 years of experience. He has been a consultant at BCG, twice an entrepreneur, and a policy officer at the UN. He is married and has two kids (his main reason for working on AI risks), and he loves reading (from history and psychology to Victorian classics), travelling (he is the son of a pilot and a flight attendant), and football (River Plate and Argentina).

Sam Smith
PROGRAMME ASSOCIATE
Sam is Head of Courses at Leaf, where they focus on helping people discover their capacity for impact in AI safety and effective altruism. Alongside this, they have designed an introductory AI safety course, facilitated programmes such as Non-Trivial, Leaf, and BlueDot, and organised EA Bristol during their studies in Maths and Philosophy. When they're not working on talent development, you'll find Sam bouldering, playing board and video games, or around a table for DnD.

Matt Hodak
SENIOR PROGRAMME ASSOCIATE
Matt is an AI governance researcher with private sector product management experience. His research interests focus on compute governance and the economic impacts of transformative AI. He studied technology policy at the University of Cambridge and economics and applied mathematics at Washington University in St. Louis. Matt is an avid reader, loves film, and enjoys exploring new places.

Genevieve Gaul
PROGRAMME ASSOCIATE
Genevieve has experience in long-term project planning, workshop facilitation, and teaching. She first joined ERA as the Community Health Lead. Before this, she designed and organised a student development programme for a university, and studied Literature at Durham University. Outside of work, Genevieve enjoys undertaking various creative projects.

Marta Strzyga
OPERATIONS MANAGER
Marta previously worked as a contracted recruitment consultant for the Lead Exposure Elimination Project (LEEP). She has experience in academic research, publishing, office administration, and corporate finance, alongside operations at non-profits. She studied Japanese at SOAS and psychology at the University of Glasgow, and speaks Polish, French, and Japanese.

Nandini Shiralkar
FOUNDER
Nandini founded ERA during her first year of undergraduate engineering studies at the University of Cambridge. As the Founder, she helps set the strategy and vision for ERA's programmes and research initiatives. In her personal life, she is deeply interested in mathematics and used to play chess competitively; she also enjoys writing poetry and reading literature in several languages.
Research Managers

Kyle O'Brien
TECHNICAL AI SAFETY
Kyle is a technical AI safety researcher and former ERA fellow, previously a scientist and engineer at EleutherAI and Microsoft. His research focuses on preventing or removing dangerous capabilities in LLMs, with a particular focus on pretraining interventions and open-weight model safety. Kyle's research has been accepted at ICLR, cited by OpenAI and Anthropic, and covered by The Washington Post and Fortune. He particularly enjoys mentoring junior researchers, helping them with prioritisation, experiment design, and research taste.

Carolina Oxenstierna
AI GOVERNANCE
Carolina is a recent graduate of Georgetown University's School of Foreign Service with a B.S. in Foreign Service, where she focused on AI governance and cybersecurity. She previously interned at the White House Office of Science and Technology Policy with the Chief Technology Officer under the Biden administration, and was a research fellow at the Centre for the Governance of AI, working on US-China cooperation on AI safety. Carolina is continuing this research focus alongside AI sovereignty and cloud compute governance projects at the Brookings Institution's Center for Technology Innovation.

David Williams-King
TECHNICAL AI SAFETY
David combines technical AI safety research with AI risk communication, running a YouTube channel with over 30,000 subscribers. He has conducted research at Mila and worked with David Krueger and Yoshua Bengio. Previously, David was founding CTO of a cybersecurity insurance startup, where he led a 15+ person team. He holds a PhD in cybersecurity from Columbia University, and loves travelling and building networks of amazing people.

Jacob Schaal
AI GOVERNANCE
Jacob is an AI governance researcher and economist focusing on AI's labour market impacts. Previously, he was an ERA fellow and served as Research Manager at the Orion AI Governance Fellowship, leading research on AI safety information-sharing under UK competition law and on AI-enabled disinformation exacerbating CBRN risks. He has contributed to EU AI Act implementation, AI standards, and the OECD Due Diligence Guidelines through his work at Pour Demain in Brussels. He holds an MSc in Economics from LSE, and his research has been published in Verfassungsblog, Competition Law Insight, and AI Policy Bulletin.

Gwyn Glasser
TECHNICAL AI GOVERNANCE
Gwyn is a governance researcher at Convergence Analysis and a TechGov Research Manager at ERA. Over the past four years, he has worked across AI and tech governance with INHR, the CyberPeace Institute, and the Geneva Centre for Security Policy in Switzerland. His research areas have included epistemic security, lethal autonomous weapons, quantifying non-kinetic and indirect harms, and frontier AI transparency policy. Gwyn holds an MA in Philosophy and Literature from the University of Edinburgh, and has trained as a Machine Learning Support Engineer with AiCore.

Dave Banerjee
TECHNICAL AI GOVERNANCE
Dave is an AI policy fellow at the Institute for AI Policy and Strategy. Previously, he was a summer fellow at GovAI, a participant in ARENA 5.0, a research assistant through the SPAR programme, and a security engineer at a hedge fund. He graduated from Columbia University in 2024 with a Bachelor's in computer science, focusing on cybersecurity and machine learning. His research interests include AI integrity, model weight security, cybersecurity and hardware security, AI-enabled coups, centralisation of power, democracy preservation, verification of international AI treaties, and LM psychology.
Previous Research Managers at ERA: Kristina Fort, Usman Anwar, Cameron Tice, Peter Gebauer, Richard Moulange, Yulu Niki Pi, Fazl Barez, Rudolf Laine, Moritz von Knebel, Tilman Räuker, Oscar Delaney, Gideon Futerman, Irina Gueorguiev, Joël Christoph, Will Aldred, Jaime Bernardi, Herbie Bradley, and Dewi Erwan.
