Location: Berkeley, California, United States
Type: Contract

About Company

MATS is an independent research and educational fellowship that connects promising scholars with leading mentors in AI alignment, interpretability, governance, and security. Fellows work on cutting‑edge research projects, participate in seminars and workshops, and build networks within the AI safety community.

Job Description

AI Alignment Research Fellow (Summer 2026 – MATS Program)

Organization

MATS (ML Alignment & Theory Scholars Program)

Program Type

Research Fellowship

Duration

12 weeks (Summer 2026)

Location

Berkeley, California (USA) and London, United Kingdom
(Extension phase primarily based in London)


Program Overview

The MATS Program is a highly selective, independent research fellowship that connects emerging researchers with leading mentors in AI alignment, interpretability, governance, and security. Over a 12-week period, fellows conduct original research while participating in talks, workshops, and networking events with the broader AI alignment research community.

The program provides structured mentorship, research management support, and access to a vibrant research ecosystem, preparing fellows to produce impactful work and pursue long-term careers in AI safety.


Research Areas & Tracks

Applicants may apply to one or more of the following research tracks:

  • Technical Governance

  • Empirical AI Safety

  • Policy & Strategy

  • Theoretical Research

  • Compute Governance

Applicants can also express interest in working with specific mentor streams.

⚠️ Note: Neel Nanda’s stream has a separate application, which closes on January 2, 2026.


Key Responsibilities

  • Conduct original research in AI alignment, safety, or governance

  • Collaborate closely with assigned mentors

  • Attend research talks, workshops, and community events

  • Participate in evaluations, reviews, and feedback sessions

  • Contribute to written research outputs and long-term projects


Eligibility & Qualifications

  • Open to applicants from diverse academic and professional backgrounds, including:

    • Machine Learning, Computer Science, Mathematics

    • Economics, Policy, Governance

    • Physics, Cognitive Science, and related fields

  • Strong motivation to contribute to AI safety and alignment

  • Demonstrated technical aptitude or research potential

  • Prior AI safety experience is helpful but not required


Application Process

Stage 1: General Application

All applicants must submit the Summer 2026 General Application (estimated 1–2 hours to complete), which includes:

  • General background information

  • Track-specific questions

  • Preferred mentor streams

  • Names of two references (references are contacted at a later stage)

Stage 2: Additional Evaluations

Depending on track and stream, applicants may be required to complete:

  • Code screening (required for Empirical track applicants)

  • Work tests or project proposals

  • Intermediate interviews

Stage 3: Interviews & Final Offers

  • Shortlisted candidates interview with potential mentors

  • Mentors rank preferred candidates based on mutual fit

  • Final offers are issued based on mentor selection


Mentorship

MATS mentors are leading academics, industry researchers, and independent experts across AI safety domains, including:

  • Agent foundations and decision theory

  • Mechanistic and conceptual interpretability

  • AI control, monitoring, and evaluations

  • Cybersecurity, adversarial robustness, and safeguards

  • Governance, forecasting, and policy strategy

  • Threat modeling, compute proliferation, and compliance

  • Red-teaming, deceptive alignment, and model organisms

  • Hardware interventions and information security


Important Dates

Application Timeline

  • General Application Open: December 16, 2025

  • Application Deadline: January 18, 2026

  • Additional Evaluations: Late January – March 2026

  • Final Offers Released: Late March / Early April 2026

Program Dates

  • Main Program: Early June – Late August 2026

  • Extension Phase: Begins September 2026 (primarily London-based)


How to Apply

All applicants must submit the Summer 2026 General Application.
Applicants interested in Neel Nanda’s stream must apply separately by January 2, 2026.


External Application Link: https://www.matsprogram.org/apply


Posted: December 29, 2025 | Closes: January 18, 2026