Building talent for a new ERA of AI safety research.

The Cambridge ERA:AI Fellowship provides aspiring AI safety & governance researchers with an in-person, paid, 8-week summer research fellowship at the University of Cambridge. Applications for Summer 2024 have now closed.

Addressing the array of risks posed by advanced AI requires both technical and governance approaches. We host researchers from both domains, and we are especially excited about projects that unite technical and policy research.

01

Technical

New technologies present new opportunities for good, but they often carry novel risks. Advanced AI agents more intelligent than ourselves present a new challenge: how can we maintain control over the trajectory of the future as we hand over ever more decision-making to these systems? Technical AI safety research aims to ensure that advanced AI systems are rigorously designed and built with appropriate safeguards to avoid harmful or unpredictable behaviour.

02

Governance

As awareness of risks from AI grows among decision-makers and the wider public, we see windows of opportunity for impactful research on AI governance and policy. Possible projects include foundational philosophical work on the ethics of AI governance, as well as applied research on concrete policy questions, such as analysing existing or proposed laws and regulations and exploring practical strategies for implementing effective governance.

03

Technical AI Governance

The development of AI is fundamentally a sociotechnical challenge: we need to align both the systems themselves and the societal contexts in which they are developed and deployed. This poses complex challenges at the intersection of technology and governance. Many of the key levers for shaping the trajectory of AI progress are inherently technical, and thus require deep engagement with the architectures, algorithms, and interfaces through which AI systems are designed, deployed, and controlled. Technical AI governance seeks to leverage this technical substrate to support governance objectives, empowering relevant actors to make well-informed, proactive decisions that steer AI development in positive directions.

  • Dates & Location

The Cambridge ERA:AI Fellowship is held in Cambridge, England, for 8 weeks during July and August.

  • Support

    Fellows are paid a competitive stipend, and we also cover food, transport, visas and lodging for the duration of the Fellowship.

  • Our Fellows

    We welcome early-career researchers, including undergraduates, from around the world who are interested in AI safety and governance research.

  • The Programme

    Fellows work on a research project with mentorship provided by our network of experienced researchers and influential policymakers.

01

Help mitigate risks posed by advanced AI

AI safety centres on both building technical solutions and implementing governance measures to mitigate the risks posed by advanced AI systems. We hope to address the complex challenges that arise from increasingly capable AI, including averting the uncontrolled emergence of artificial general intelligence (AGI), maintaining human authority and autonomy, developing techniques for AI value alignment, and establishing policy safeguards on the research, development and use of advanced AI.

02

Build your research portfolio

At ERA, you will research a topic relevant to understanding and mitigating risks from advances in AI capabilities, guided by weekly mentorship from a full-time researcher in your field and by daily conversations with other fellows and the wider community at the University of Cambridge.

03

Develop lasting connections

Spend a summer living and working alongside other fellows, full-time researchers, and the AI safety community in Cambridge, cultivating deep and lifelong connections.

Our research partners

  • Centre for the Study of Existential Risk

  • Leverhulme Centre for the Future of Intelligence

  • Krueger AI Safety Lab

  • University of Cambridge