2023 Mentors

  • Christopher Byrd

    Chris Byrd is a Technology and Security Policy Fellow at the RAND Corporation, specializing in AI governance, US-China relations, and defense policy. His current interests include AI export controls and full-stack cybersecurity for frontier AI datacenters, including on-chip mechanisms and related topics in compute governance. Before RAND, he conducted research on AI policy at the Defense Innovation Board, the Centre for the Governance of AI, and the Carnegie-Tsinghua Center, and as an independent researcher and consultant for various leading AI labs and government clients. He also worked for Hillary Clinton’s Presidential campaign, as a legal researcher at Westlaw, and taught middle school as a Teach for America corps member. He completed graduate study in International Economics and Strategic Studies at Tsinghua University and Johns Hopkins SAIS. He completed his undergraduate study at Virginia Commonwealth University, double-majoring in Philosophical Logic and Ethics and Public Policy.

  • Krystal Jackson

    Krystal is a Junior AI Fellow at the Center for Security and Emerging Technology (CSET) as part of the Open Philanthropy Technology Policy Fellowship, where she conducts research into how artificial intelligence may alter cyber capabilities. She has worked as a Public Interest Technology Fellow with the US Census Bureau on combating algorithmic discrimination, as a Tech Policy Scholar with the Cybersecurity and Infrastructure Security Agency on scaling bug bounty programs across the federal government, and, most recently, as part of the FAS Day One & Kapor Center policy accelerator program, where she investigated ways to establish equity audits in the machine learning pipeline. Krystal received her M.S. in Information Security Policy & Management from Carnegie Mellon University's Heinz College.

  • Matthew Gentzel

    Matthew Gentzel is a graduate student at the University of Maryland studying International Security and Economic Policy and was recently accepted into the Strategic Studies Program at Johns Hopkins School of Advanced International Studies for fall 2018. He currently helps teach an applied policy and intelligence analysis class on CRISPR/Cas9 technology and has worked as an EA grantee in risk analysis on topics such as nuclear policy, AI forecasting, and strategy. Previously he studied Engineering and Statistics at the University of Maryland and founded Effective Altruism Policy Analytics, an experiment aiming for low-cost, research-based policy influence by sending federal agencies cost-benefit analyses of their proposed policies.

  • Claire Boine

    Claire is a PhD candidate in AI law at the University of Ottawa and a Research Associate at the Artificial and Natural Intelligence Toulouse Institute. She was a Senior AI Policy Research Fellow at the Future of Life Institute, where she focused on AI liability, fiduciary law, AI-enabled manipulation, and General Purpose AI systems. Before that, Claire worked at Boston University for four years, where she was a Research Scholar and a lecturer, and at Harvard University, where she was a Research Fellow. Claire holds a French J.D. in international and European law, a Master in Public Policy from Harvard University, a Master’s in political science from Toulouse University, and a Bachelor’s in History from Paris IV Sorbonne.

  • Onni Aarne

    Onni is an Associate Researcher at Rethink Priorities working on the AI Governance and Strategy team. His research focuses on technical means of governing AI compute. He has a BSc in computer science and an MSc in data science from the University of Helsinki.

  • Ashwin Acharya

    Ashwin is an AI Governance Researcher at Rethink Priorities, leading their US regulations work. He has also done work on AI forecasting and strategy. Previously, he was a Research Associate at the Center for Security and Emerging Technology.

  • S. J. Beard

    SJ is an Academic Programme Manager and Senior Research Associate at the Centre for the Study of Existential Risk. They have a PhD in Philosophy and also provide mentorship through Magnify Mentoring and the Effective Thesis Project.

  • Lauro Langosco

    Lauro is a PhD Student with David Krueger at the University of Cambridge. His main research interest is AGI safety: the problem of building intelligent systems that remain safe even when they are smarter than humans.

  • Alan Chan

    Alan is a PhD student at Mila, where he works on both technical and governance sides of AI safety. Most recently, he has been working on language model evaluations as well as elaborating key sticking points and concepts in stories of AI risk.

  • Shaun Ee

    Shaun is a Researcher on Rethink Priorities’ AI Governance and Strategy team, working at the nexus of tech, security, and geopolitics. He is also a nonresident fellow with the Atlantic Council’s Scowcroft Center for Strategy and Security. Before RP, Shaun coordinated cyber policy for Singapore’s government under the Prime Minister’s Office, worked in Washington, DC as assistant director of the Atlantic Council’s cyber program, and served as a signaller in the Singapore Armed Forces. Shaun speaks Mandarin and holds an MA from Peking University and a BA from Washington University in St. Louis, where he studied East African history and cognitive neuroscience.

  • Herbie Bradley

    Herbie is a second-year PhD student at Cambridge University and a research scientist at EleutherAI studying large language models. His research interests include RLHF, interpretability, evaluations, open-endedness with LLMs, and AI governance. Herbie has a strong interest in AI safety, helped to run the 2022 Cambridge Existential Risk Initiative summer fellowship, and supervises several AI safety research projects via the Cambridge AI Safety Hub and with Cambridge master’s students.

  • Tomek Korbak

    Tomek is a final-year PhD student at the University of Sussex working on aligning language models with human preferences. He’s particularly interested in reinforcement learning from human feedback (RLHF) and probabilistic programming with language models. He has spent time as a visiting researcher at NYU and FAR AI working with Ethan Perez, Sam Bowman, and Kyunghyun Cho, and recently worked with Owain Evans (University of Oxford) on evaluating LMs for dangerous capabilities.