Welcome to ERA

We are a global consortium of researchers dedicated to understanding and mitigating risks in an ERA of transformative artificial intelligence (AI). We do so by:

  • Supporting early researchers with resources and mentorship to work on a research project related to AI safety, governance, or risk mitigation.

  • Developing a strong community and talent pipeline, helping our alumni launch into AI safety careers and research.

  • Partnering with various organisations across the world, including Cambridge University’s Leverhulme Centre for the Future of Intelligence (CFI), Centre for the Study of Existential Risk (CSER), and the Krueger AI Safety Lab (KASL).

About AI risks

A commitment to safe AI

The rapid advancement of artificial intelligence in recent years has brought about transformative changes across many domains. As AI systems become more sophisticated and autonomous, their potential impact on society grows accordingly. With this increased capability comes a heightened responsibility to ensure that these systems are developed and deployed in a safe, secure, and reliable manner.

One of the primary challenges in AI development is the inherent complexity and opacity of many modern AI techniques, particularly those based on deep learning. These systems often operate as "black boxes", making it difficult for humans to understand how they arrive at their decisions or predictions. This lack of transparency can lead to unintended consequences, such as biased outputs, unexpected failures, or even malicious behaviour if the systems are not properly designed and monitored. As AI systems become more advanced and capable of self-improvement, there is a risk that they could diverge from human values and goals, leading to unintended and potentially irreversible outcomes.

To address these challenges, the field of AI safety has emerged as a crucial area of research and development. AI safety focuses on creating the technical foundations, design principles, and governance frameworks necessary to ensure that AI systems are safe, secure, and aligned with human values throughout their lifecycle.

About the Fellowship

The Cambridge ERA:AI Fellowship is a research programme hosted in Cambridge, UK from July 1 through August 23, 2024.

What we provide

  • Full funding: Fellows receive a salary equivalent to £34,125 per year, which will be prorated to the duration of the Fellowship. On top of this, our fellows receive complimentary accommodation, meal provisions during working hours, visa support, and travel expense coverage.

  • Expert mentorship: Fellows will work closely with a mentor on their research agenda for the summer. See our Mentors page to learn about previous mentors.

  • Research support: Many of our alumni have gone on to publish their research in top journals and conferences, and we provide dedicated research management support to help fellows become strong researchers and policymakers in the field.

  • Community: Fellows are immersed in a living-learning environment. They will have a dedicated desk space at our office in central Cambridge and are housed together at Emmanuel College, Cambridge.

  • Networking and learning opportunities: We assist fellows in developing the necessary skills, expertise, and networks to thrive in an AI safety or policy career. We offer introductions to pertinent professionals and organisations, including in Oxford and London. In special cases, we also provide extra financial assistance to support impactful career transitions.

Our Research

During the 8-week Fellowship, fellows develop and complete a research project on technical and/or governance measures to mitigate the risks posed by frontier AI systems. Fellows will have the support of research managers, mentors, and industry experts. For examples of previous projects, see our Previous Research Projects page.

Artificial intelligence stands to be among the most significant technological advances of this century. Yet, the assurance of the safety of these systems remains an unresolved issue, spanning a broad spectrum of challenges related to AI alignment, governance, and ethical considerations.

Application Process

Applications for Summer 2024 have now closed. The first stage of the application process primarily consists of short essay questions and should take about two hours to complete. For updates on future application cycles, please sign up to our mailing list.

Shortlisted applicants will be invited to a short interview. We aim to make final acceptance offers around the end of April.

We will then work with the cohort of ERA’24 fellows to develop their project ideas and match them with mentors in April/May.

Who can apply?

Anyone! We are looking to support fellows from a wide range of subject areas who are committed to reducing risks posed by advances in AI.

However, we expect the Cambridge ERA:AI Fellowship might be most useful to students (from undergraduates to postgraduates) and to people early in their careers who are looking for opportunities to conduct short research projects on topics related to AI safety and governance. Note that we are currently unable to accept applicants who will be under the age of 18 on 1st July 2024.

Aris Richardson, RAND Corporation Technology and Security Policy fellow & ERA Summer Research Fellow 2023:

ERA “significantly fast-tracked my path into AI governance… It has given me the time and legitimacy to speak to experts and produce formal research outputs.”

Carson E., Cambridge Boston Alignment Initiative & ERA Summer Research Fellow 2022:

“I have been surprised by how senior researchers in my field have been more than willing to advise me, discuss my project, and put me in touch with other experts to fill relevant gaps in my research.”

The ERA Fellowship has run every summer since 2021. Thus far, we have supported over 60 early-career researchers from 10+ countries through our programme, spanning projects related to AI safety, biosecurity, nuclear risk, and extreme climate change.

This year, we are focusing specifically on AI safety and governance research. Our alumni have used their time and experience at ERA to secure positions in technical AI safety and policy within governments, leading AI research centres, and influential think tanks.

Our History