2024 Mentors

  • Jesse Hoogland

    Jesse Hoogland is co-founder and executive director of Timaeus, an AI safety organization currently working on singular learning theory and developmental interpretability, a research agenda that aims to detect, locate, and interpret phase transitions in neural networks.

  • Josh Clymer

    Josh works on threat modeling at METR, investigating scenarios where AI systems could cause catastrophic harm. He previously published technical AI safety research and safety cases (structured arguments that a given AI system is safe). Before that, he was a project manager at the Center for AI Safety.

  • Gabriel Weil

    Gabriel Weil joined Touro Law in 2022, teaching law and artificial intelligence and torts. His primary interests are in AI governance and climate change law. Weil's research includes the role of tort law in mitigating AI risks and critiques of the Learned Hand formula for negligence. He previously served at the University of Wyoming, the Climate Leadership Council, UC Irvine, and the White House Council on Environmental Quality. Weil holds a J.D. from Georgetown, an LL.M. from Pace, and a B.A. from Northwestern.

  • Jamie Bernardi

    Jamie is a research fellow at the Institute for AI Policy and Strategy (IAPS), exploring how to help society adapt to advanced AI. He recently completed the 2024 Winter Fellowship at the Centre for the Governance of AI, where he co-authored a paper on Societal Adaptation to Advanced AI. He previously ran the AI Safety Fundamentals course, having co-founded the education charity BlueDot Impact.

  • Hauke Hillebrandt

    Hauke Hillebrandt, PhD, is CEO of the Center for Applied Utilitarianism, an AI strategy think tank. Previously, he founded Lets-Fund.org and held research positions at the Center for Global Development, the Center for Effective Altruism, Harvard, and UCL.

  • Alan Chan

    Alan is a final-year PhD student at Mila and a research scholar at GovAI. His research focuses on governing AI agents.

  • Olli Järviniemi

    Olli is a Visiting Fellow at Constellation working on technical AI safety. His research focuses on risks from deceptively aligned AI models, conducting empirical experiments to obtain information about the likelihood and nature of such threat scenarios. Previously, he completed a PhD in pure mathematics at the University of Turku.

  • Nicholas Emery-Xu

    Nicholas is a third-year Ph.D. student in economics at UCLA and part of MIT FutureTech, working on innovation and computing. His research focuses on the implications of AI and other new computing technologies on productivity and innovation, and on governance mechanisms for the optimal development and deployment of technologies with negative externalities. His work has been published in Research Policy and the Journal of Conflict Resolution.

  • John Bliss

    John Bliss is an Assistant Professor at the University of Denver Sturm College of Law and affiliate faculty at the Institute for Law & AI and the Harvard Law School Center on the Legal Profession. He holds a JD and a PhD from UC Berkeley. His research empirically examines the relationship between lawyers' public interest values and their professional identities. This has included sociological studies of law students and young lawyers in the US and China, historical research on the early civil rights movement, and ongoing studies of the movements for animal rights and the mitigation of AI risk.

  • Usman Anwar

    Usman is a PhD student at the University of Cambridge. He is a recipient of the Open Phil AI Fellowship and the Vitalik Buterin Fellowship in AI Safety from the Future of Life Institute. He is primarily interested in foundational research on AI alignment, with a current focus on understanding in-context learning and the generalization behavior of RL-trained AI agents. His most recent work is an agenda paper on Foundational Challenges in LLM Alignment and Safety.

  • Stephen Montes Casper

    Cas is a PhD candidate at MIT advised by Dylan Hadfield-Menell. He works on tools for trustworthy, safe AI. His research emphasizes interpretability, robustness, and auditing. Before his PhD, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI. His hobbies include biking, making hot sauce, growing plants, and keeping bugs.

  • Lorenzo Pacchiardi

    Lorenzo is a research associate at the Leverhulme Centre for the Future of Intelligence, University of Cambridge, working on AI capabilities evaluation with Prof José Hernández-Orallo and Dr Lucy Cheke. He previously evaluated large language models and worked on AI standards for the European AI Act at the Future of Life Institute. Lorenzo holds a PhD in Statistics and Machine Learning from Oxford, where he worked on Bayesian simulation-based inference and generative neural networks. He has a Bachelor's in Physical Engineering and an MSc in Physics of Complex Systems.

  • Carlos Mougan

    Carlos is a Principal Investigator at the Alan Turing Institute. His current research covers AI alignment, LLM evaluations, ethics, and AI safety. He has worked across the different stages of the ML pipeline: data collection, data quality, preprocessing, modeling, and monitoring. He is fortunate to have pursued his passions at world-class research and public institutions. In the past, he has been a Marie Skłodowska-Curie research fellow, a statistician at the European Central Bank, a consultant at Deloitte, and an industry researcher at BSC, CSIC, Schufa, AEMET, and the BBC.

  • Robert Kirk

    Robert is a 4th-year PhD student at UCL and a research scientist at the UK AI Safety Institute. He's interested in understanding language model fine-tuning and generalisation, with the goal of mitigating both misuse and misalignment risks from advanced artificial intelligence. He previously interned at Meta AI with Roberta Raileanu, and before his PhD he worked as a software engineer and completed his undergraduate and master's degrees at the University of Oxford.

  • Robert Thomas

    Vice Admiral Robert Thomas retired from the U.S. Navy in January 2017 after 38 years of service. He is now a Senior Research Fellow at UC's Institute on Global Conflict and Cooperation and a full-time faculty member at UC San Diego's Graduate School of Global Policy and Strategy. His last operational tour was commanding the U.S. 7th Fleet, and his final assignment was Director of Navy Staff at the Pentagon. He holds a B.S. in Civil Engineering from UC Berkeley and an M.A. in National Security Studies from the National War College. His awards include The Order of the Rising Sun and The Republic of Korea Order of National Security Merit.

  • Fazl Barez

    Fazl is a Research Fellow at the Torr Vision Group, University of Oxford, where he focuses on safety and interpretability in AI systems. He is affiliated with the Centre for the Study of Existential Risk and the Krueger AI Safety Lab at the University of Cambridge, as well as the Future of Life Institute.

2023 Mentors

  • Christopher Byrd

    Chris Byrd is a Technology and Security Policy Fellow at the RAND Corporation, specializing in AI governance, US-China relations, and defense policy. His current interests include AI export controls and full-stack cybersecurity for frontier AI datacenters, including on-chip mechanisms and related topics in compute governance. Before RAND, he conducted research on AI policy at the Defense Innovation Board, the Centre for the Governance of AI, and the Carnegie-Tsinghua Center, and as an independent researcher and consultant for various leading AI labs and government clients. He also worked for Hillary Clinton's presidential campaign, served as a legal researcher at Westlaw, and taught middle school as a Teach For America corps member. He completed graduate study in International Economics and Strategic Studies at Tsinghua University and Johns Hopkins SAIS. He completed his undergraduate study at Virginia Commonwealth University, double-majoring in Philosophical Logic and Ethics and Public Policy.

  • Krystal Jackson

    Krystal is a Junior AI Fellow at the Center for Security and Emerging Technology (CSET) as part of the Open Philanthropy Technology Policy Fellowship, where she conducts research into how artificial intelligence may alter cyber capabilities. She has worked as a Public Interest Technology Fellow with the US Census Bureau on combating algorithmic discrimination, as a Tech Policy Scholar with the Cybersecurity and Infrastructure Security Agency on scaling bug bounty programs across the federal government, and, most recently, as part of the FAS Day One & Kapor Center policy accelerator program, investigating ways to establish equity audits in the machine learning pipeline. Krystal received her M.S. in Information Security Policy & Management from Carnegie Mellon University's Heinz College.

  • Matthew Gentzel

    Matthew Gentzel is a graduate student at the University of Maryland studying International Security and Economic Policy and was recently accepted into the Strategic Studies Program at Johns Hopkins School of Advanced International Studies for fall 2018. He currently helps teach an applied policy and intelligence analysis class on CRISPR/Cas9 technology and has worked as an EA grantee in risk analysis on topics such as nuclear policy, AI forecasting, and strategy. Previously he studied Engineering and Statistics at the University of Maryland and founded Effective Altruism Policy Analytics, an experiment aiming for low-cost, research-based policy influence by sending federal agencies cost-benefit analyses of their proposed policies.

  • Claire Boine

    Claire is a PhD candidate in AI law at the University of Ottawa and a Research Associate at the Artificial and Natural Intelligence Toulouse Institute. She was a Senior AI Policy Research Fellow at the Future of Life Institute, where she focused on AI liability, fiduciary law, AI-enabled manipulation, and general-purpose AI systems. Before that, Claire worked at Boston University for four years as a Research Scholar and lecturer, and at Harvard University as a Research Fellow. Claire holds a French J.D. in international and European law, a Master in Public Policy from Harvard University, a Master's in political science from Toulouse University, and a Bachelor's in History from Paris IV Sorbonne.

  • Onni Aarne

    Onni is an Associate Researcher at Rethink Priorities working on the AI Governance and Strategy team. His research focuses on technical means of governing AI compute. He has a BSc in computer science and an MSc in data science from the University of Helsinki.

  • Ashwin Acharya

    Ashwin is an AI Governance Researcher at Rethink Priorities, leading their US regulations work. He has also done work on AI forecasting and strategy. Previously, he was a Research Associate at the Center for Security and Emerging Technology.

  • S. J. Beard

    SJ is an Academic Programme Manager and Senior Research Associate at the Centre for the Study of Existential Risk. They have a PhD in Philosophy and also provide mentorship through Magnify Mentoring and the Effective Thesis Project.

  • Lauro Langosco

    Lauro is a PhD Student with David Krueger at the University of Cambridge. His main research interest is AGI safety: the problem of building intelligent systems that remain safe even when they are smarter than humans.

  • Alan Chan

    Alan is a PhD student at Mila, where he works on both technical and governance sides of AI safety. Most recently, he has been working on language model evaluations as well as elaborating key sticking points and concepts in stories of AI risk.

  • Shaun Ee

    Shaun is a Researcher on Rethink Priorities' AI Governance and Strategy team, working at the nexus of tech, security, and geopolitics. He is also a nonresident fellow with the Atlantic Council's Scowcroft Center for Strategy and Security. Before RP, Shaun coordinated cyber policy for Singapore's government under the Prime Minister's Office, worked in Washington, DC as assistant director of the Atlantic Council's cyber program, and served as a signaller in the Singapore Armed Forces. Shaun speaks Mandarin and holds an MA from Peking University and a BA from Washington University in St. Louis, where he studied East African history and cognitive neuroscience.

  • Herbie Bradley

    Herbie is a 2nd-year PhD student at the University of Cambridge and a research scientist at EleutherAI studying large language models. His research interests include RLHF, interpretability, evaluations, open-endedness with LLMs, and AI governance. Herbie has a strong interest in AI safety, helped to run the 2022 Cambridge Existential Risk Initiative summer fellowship, and supervises several AI safety research projects through the Cambridge AI Safety Hub and with Cambridge master's students.

  • Tomek Korbak

    Tomek is a final-year PhD student at the University of Sussex working on aligning language models with human preferences. He's particularly interested in reinforcement learning from human feedback (RLHF) and probabilistic programming with language models. He has spent time as a visiting researcher at NYU and FAR AI, working with Ethan Perez, Sam Bowman, and Kyunghyun Cho, and recently worked with Owain Evans (University of Oxford) on evaluating LMs for dangerous capabilities.