Our Winter 2026 Fellows
Technical AI Safety
-

Ashwin Sreevatsa
Ashwin works on technical AI safety research. He was previously a software engineer working on machine learning compliance infrastructure at Google. Before this, he graduated from the University of Michigan with a degree in computer science. He has worked on projects in AI control, machine unlearning, computer vision for medical imaging, and natural language generation.
-

Girish Gupta
Girish is an AI researcher focused on understanding the internal mechanisms of large-scale AI systems and their implications for AI safety. His interdisciplinary background spans theoretical physics and a decade of investigative journalism covering global crises from Venezuela to Iraq, for outlets ranging from the New Yorker to Reuters. Most recently, he has led AI and software engineering work in San Francisco across human rights investigations, accounting, and healthcare. He lives with his wife and children in San Francisco, and enjoys photography, music, and learning history.
-

Kyungeun Lim
Kyungeun holds a PhD in Physics from Columbia University and conducted postdoctoral research at Yale University, co-authoring 25+ publications in international dark matter and neutrino physics collaborations. She has spent recent years in industry machine learning and data science roles on both US coasts, at both a large corporation and a startup. Her current research focuses on machine unlearning and interpretability, bringing experimental physics methodology to the development of rigorous evaluation frameworks for AI safety. (https://kyungeunlim.github.io/)
-

Lennart Wachowiak
Lennart is a technical AI safety fellow at ERA and a soon-to-graduate PhD candidate at the CDT in Safe and Trusted AI at King’s College London and Imperial College London. Previously, he interned at Google DeepMind and worked as an NLP researcher at the University of Vienna. His research spans explainability, human–AI interaction, and robotics.
-

Marek Kowalski
Marek is interested in developing detection methods for model self-awareness as the first step towards securing model welfare and ensuring a benevolent relationship between humans and digital minds. He holds a medical degree and a PhD in computational neuroscience from Boston University, where he investigated neural mechanisms implicated in altered consciousness under ketamine anesthesia. Most recently, he has been practicing internal medicine as a junior faculty member at the UCLA David Geffen School of Medicine.
-

Matthew Khoriaty
Matthew founded the AI Safety and Governance Group at Northwestern University, where he's completing his Master's in Computer Science. Before ERA, Matthew worked as a software engineer for Amazon and Northwestern's Global Poverty Research Lab. He has researched LLM interpretability, automated research, and AI control. In his free time, he enjoys designing video games and solving escape rooms.
-

Nathaniel Mitrani
Nathaniel holds a BSc in Mathematics and a BEng in Data Science and Engineering, and is an AI researcher with experience across industry and academia. He has participated in SPAR, working on neural scaling laws grounded in data distribution geometry alongside Ari Brill, and at AISC, where he focused on preventing adversarial reward optimization with Domenic Rosati. He has also interned at Geodesic Research, studying obfuscation generalization with Cam Tice and Puria Radmard, and more recently at Qualcomm AI Research, where he led a project on multimodal and multilingual reasoning. Nathaniel is currently a visiting researcher at the van der Schaar Lab at the University of Cambridge, where his work centers on reasoning under epistemic uncertainty with applications to medical diagnostics.
-

Peter Nutter
Peter is an MSc Computer Science student at ETH Zurich with a BSc in Computational Mathematics. At ETH, he has worked with the LAS and SPY groups on anthropomorphized misalignment, focusing on deception detection in large language models. His current interests center on mechanistic interpretability research.
-

Scott Blain
Scott is a cognitive scientist pivoting to technical AI safety, with a focus on building and validating evaluation frameworks for social cognition, deception precursors, and other epistemic failure modes in large language models. His work draws on human psychology, especially theory of mind, apophenia, and individual differences, to better characterize and mitigate risks like hallucination, manipulation, and strategic misbehavior in AI systems. He previously completed a postdoc at the University of Michigan and earned his PhD in Psychology from the University of Minnesota. Outside research, he’s an avid runner and pianist.
Technical Governance
-

Andrew Wei
Andrew is an undergraduate studying Computer Science and Public Policy at Georgia Tech. He is a lead organizer for the Georgia Tech AI Safety Initiative, through which he's presented at a Congressional exhibition, co-authored an RFI for the National AI Plan, and helped found governance field-building efforts. His current research interests include multi-agent risks, particularly collusion and concentration of power, and improving how models generalize safety training, drawing on work in emergent misalignment and inoculation prompting.
-

Botao Hu
Botao 'Amber' Hu is a PhD candidate in Human Centred Computing at the University of Oxford's Department of Computer Science. His research focuses on Decentralized AI (DeAI), exploring both governance and applications.
-

Henry Colbert
Henry is a former quantitative trader with a Bachelor’s degree in Mathematics from the University of Cambridge and a Master’s degree in Financial Mathematics from the Technical University of Munich. His current interests focus on the alignment problem, particularly the design and scaling of evaluations to detect misaligned behaviour. Outside of work, Henry trains for middle-distance triathlons and begins each day with a coffee and the New York Times crossword.
-

Jacob Davies
Jacob is an AI safety researcher at Exona Labs. Previously, he was a participant in ARENA 6.0, a researcher on AI evaluation and policy at a digital policy think tank based at the University of Cambridge, and an AI Linguist at LinkedIn. He graduated from the University of Edinburgh in 2024 with an MSc in Speech and Language Processing, and from the University of Cambridge with a BA in Computational Linguistics. His research interests include AI monitoring frameworks, agentic AI governance, mechanistic interpretability, organisational AI risk quantification, and AI transparency standards.
-

Nicholas Spragg
Nick works in safeguards enforcement at Anthropic, where he evaluates catastrophic harm threats and develops safety protocols. Previously, he trained generative AI models at Meta and conducted research on AI for sustainable development at the United Nations Development Programme. He holds a Master of Theological Studies from Harvard Divinity School, focusing on religion, ethics, and politics. His career sits at the intersection of AI safety, policy, and global development.
-

Oliver Kurilov
Oliver works at the intersection of technical AI governance and AI-driven psychosocial impact. His research focuses on mitigating societal risks from emergent misalignment, improving model organisms and character training, and advancing global cooperation on AI governance. He’s currently building a CogSec startup backed by the BlueDot Incubator, and is a MARS AI Governance Fellow researching US export controls and US-China AI competition dynamics. Previously, Oliver worked as a deep-tech VC, was a Cambridge Enterprise fellow, and led ML-enabled research in biophysics. He holds an MPhil in Physics (University of Cambridge) and a BSc in Computational Biology & Mathematics (UCLA), and is currently on leave from his PhD at Columbia University.
-

Vilija Vainaite
Vilija is an AI safety researcher and entrepreneur. She combines consulting experience in Business Resilience (Deloitte) and Quantitative Solutions (PwC) with research expertise in AI, cybersecurity governance, and epistemic resilience, and a track record of leadership in the technology space (part of the TNW T500). With a focus on human-centric, AI-enhanced solutions, she is a co-founder and European Strategy Lead at Encode Europe, an AI governance non-profit.
AI Governance
-

Alex Mark
Alex is an AI governance researcher with litigation experience in public defense and housing law. His research interests focus on legal alignment and frontier regulation. He studied global studies and mathematical economics at Temple University and law at the University of California, Los Angeles. Alex is an avid filmgoer, reader, and writer. He enjoys wandering around cities and finding street art.
-

Ben Robinson
Ben is an AI policy researcher with experience across the UK, US, and Australia. From 2023 to 2025, he was an AI Policy Manager at the Centre for Long-Term Resilience in London, and previously a Senior Consultant at Deloitte and at The Ethics Centre. He's an affiliate of the Machine Intelligence and Normative Theory lab at the Australian National University, with a First Class Honours thesis on AI risk, and is currently an MPhil candidate in Global Risk and Resilience at the University of Cambridge.
-

Jackson Lopez
Jackson is a researcher with the Oxford Martin AI Governance Initiative and a 2025 IAPS Fellow. He’s interested in the dynamics of AI-driven geopolitical competition, including the reconstitution of soft power competition, threat modelling around AI-driven surveillance, and middle-power strategies for AI development. At AIGI, Jackson is involved in policy advocacy with the AI Impact Summit and prioritizes developing politically feasible proposals for international AI governance. He has worked as a consultant with Hinge Intelligence, specializing in energy security, as a Geopolitical Analyst for the Regional Centre for Strategic Studies, and as a correspondent for the ASEAN region.
-

James Chai
James is a Visiting Fellow at the ISEAS-Yusof Ishak Institute and a former Policy Advisor to the Minister of Economy in Malaysia. He has led flagship technology policies on AI infrastructure and semiconductors, and has worked in publicly listed technology firms in Australia and the UK. His writing has appeared in international columns and academic journals, and he is the author of a bestselling non-fiction book on underdogs published with Penguin Random House. He now writes a Substack newsletter, Progress Papers, where he explores the intersection of AI and geopolitics in middle powers. He is a graduate of Oxford University and Queen Mary University of London.
-

Jamie Johnson
Jamie is a financial services regulatory lawyer who advises financial institutions on their regulatory obligations across a range of products and services. His research interests centre on the lessons that AI governance can draw from financial services regulation, particularly how technical standards developed to manage complexity in financial markets might be cross-applied to AI systems. Jamie studied PPE and retains an interest in some of the more philosophical questions around AI, such as consciousness and moral status.
-

Joshua Krook
Joshua is an AI governance researcher at the University of Antwerp, Belgium. Previously, he worked with Arcadia Impact and Good Ancestors on AI Governance Taskforces, and co-drafted the Munich Convention on AI, Data and Human Rights as well as, as part of the Transparency Working Group, the Code of Practice for the EU AI Act. His research considers AI risks to human safety and society, including autonomy, human rights, and the erosion of human freedoms.
-

Shruti Sharma
Shruti is a researcher focused on AI governance, technology policy, and national security. She completed a Master of Arts in Strategy, Cybersecurity, and Intelligence at Johns Hopkins University’s School of Advanced International Studies (SAIS), and holds a second Master’s degree in International Relations from India. Shruti has worked across government and think-tank settings in the United States and India on AI safety, compute governance, and semiconductor supply chains. Her work bridges policy research and applied governance, examining how frontier AI development adapts under export controls and geopolitical constraints, how cybersecurity risks are evolving alongside AI capabilities, and what organizations need to do in response.
