
AI Alignment Research Fellow (Summer 2026 – MATS Program)
Organization: MATS (ML Alignment & Theory Scholars Program)
Type: Research Fellowship
Duration: 12 weeks (Summer 2026)
Location: Berkeley, California (USA) and London, United Kingdom (extension phase primarily based in London)
The MATS Program is a highly selective, independent research fellowship that connects emerging researchers with leading mentors in AI alignment, interpretability, governance, and security. Over a 12-week period, fellows conduct original research while participating in talks, workshops, and networking events with the broader AI alignment research community.
The program provides structured mentorship, research management support, and access to a vibrant research ecosystem, preparing fellows to produce impactful work and pursue long-term careers in AI safety.
Applicants may apply to one or more of the following research tracks:
Technical Governance
Empirical AI Safety
Policy & Strategy
Theoretical Research
Compute Governance
Applicants can also express interest in working with specific mentor streams.
⚠️ Note: Neel Nanda’s stream has a separate application, which closes on January 2, 2026.
Fellows will:
Conduct original research in AI alignment, safety, or governance
Collaborate closely with assigned mentors
Attend research talks, workshops, and community events
Participate in evaluations, reviews, and feedback sessions
Contribute to written research outputs and long-term projects
Open to applicants from diverse academic and professional backgrounds, including:
Machine Learning, Computer Science, Mathematics
Economics, Policy, Governance
Physics, Cognitive Science, and related fields
Candidates should have strong motivation to contribute to AI safety and alignment, along with demonstrated technical aptitude or research potential. Prior AI safety experience is helpful but not required.
All applicants must submit the Summer 2026 General Application (estimated 1–2 hours to complete), which includes:
General background information
Track-specific questions
Preferred mentor streams
Two references (contacted at a later stage)
Depending on track and stream, applicants may be required to complete:
Code screening (required for Empirical track applicants)
Work tests or project proposals
Intermediate interviews
In the final stage:
Shortlisted candidates interview with potential mentors
Mentors rank candidates based on mutual fit
Final offers are issued based on mentor selections
MATS mentors are leading academics, industry researchers, and independent experts across AI safety domains, including:
Agent foundations and decision theory
Mechanistic and conceptual interpretability
AI control, monitoring, and evaluations
Cybersecurity, adversarial robustness, and safeguards
Governance, forecasting, and policy strategy
Threat modeling, compute proliferation, and compliance
Red-teaming, deceptive alignment, and model organisms
Hardware interventions and information security
Key dates:
General Application Open: December 16, 2025
Application Deadline: January 18, 2026
Additional Evaluations: Late January – March 2026
Final Offers Released: Late March / Early April 2026
Main Program: Early June – Late August 2026
Extension Phase: Begins September 2026 (primarily London-based)
Reminders: all applicants must submit the Summer 2026 General Application. Applicants interested in Neel Nanda's stream must apply separately by January 2, 2026.