Welcome to ERA
We are a global consortium of researchers dedicated to understanding and mitigating risks in an ERA of transformative AI. We do so by:
Supporting early-career researchers with resources and mentorship to work on a research project in AI safety, AI governance, or technical AI governance.
Developing a strong community and talent pipeline, helping our alumni pivot into AI safety careers and research.
Partnering with various organisations across the world, including Cambridge University’s Leverhulme Centre for the Future of Intelligence (CFI), Centre for the Study of Existential Risk (CSER), and the Krueger AI Safety Lab (KASL).
About the Fellowship
The Cambridge ERA:AI Fellowship is a research programme hosted in Cambridge, UK from July 1 through August 23, 2024.
What we provide
Full funding: Fellows receive a salary equivalent to £34,125 per year, prorated to the duration of the Fellowship. On top of this, our fellows receive complimentary accommodation, meals during working hours, visa support, and travel expense coverage.
Expert mentorship: Fellows will work closely with a mentor on their research agenda for the summer. See our Mentors page to learn about previous mentors (but do note that we match you with mentors once you are accepted, and this list is only indicative to guide your brainstorming).
Research support: Many of our alumni have gone on to publish their research in top journals and conferences, and we provide dedicated research management support to help you become a strong researcher or policymaker in the field.
Community: Fellows are immersed in a living-learning environment. They will have a dedicated desk space at our office in central Cambridge and are housed together at Emmanuel College, Cambridge.
Networking and learning opportunities: We assist fellows in developing the necessary skills, expertise, and networks to thrive in an AI safety or policy career. We can facilitate introductions to many organisations in the field. In special cases, we also provide extra financial assistance to support impactful career transitions.
Our Research
During the 8-week Fellowship, fellows develop and complete a research project on technical and/or governance measures to mitigate the risks posed by frontier AI systems. Fellows will have the support of research managers, mentors, and industry experts. For examples of previous projects, see our Previous Research Projects page.
Artificial intelligence stands to be among the most significant technological advances of this century. Yet ensuring the safety of these systems remains an open problem, spanning a broad spectrum of challenges in AI alignment and effective governance.
Application Process
The first stage of the application process primarily consists of short essay questions (and should take about two hours). Successful applicants will then be selected for a short interview. We aim to make final acceptance offers around the end of April. We will then work with the cohort of ERA’24 fellows to develop their project ideas and match them with mentors in April/May.
Applications for Summer 2024 have now closed. For updates on future application cycles, please sign up to our mailing list.
Who can apply?
Anyone! We are looking to support fellows from a wide range of subject areas who are committed to reducing risks posed by advances in AI.
However, we expect the Cambridge ERA:AI Fellowship might be most useful to students (from undergraduates to postgraduates) who are about to finish their degree, and to people early in their careers who are looking for opportunities to conduct research projects on topics related to AI safety and governance.
The ERA Fellowship has run every summer since 2021. Thus far, we have supported over 85 early-career researchers from more than 10 countries through our programme, spanning projects related to AI safety, biosecurity, nuclear security, and extreme climate change mitigation.
In 2024, we shifted our focus to AI safety and governance research. Our alumni have used their time and experience at ERA to secure positions in technical AI safety and policy within governments, AI research centres, and influential think tanks.