The Alignment Fellowship
Transformative AI is coming. What will you do?
The capabilities of today’s AI systems are rapidly advancing, matching or surpassing human experts on well-defined tasks in coding (IOI Gold), mathematics (IMO Gold), and a broad range of scientific disciplines (OpenAI o3 on GPQA Diamond). If progress continues at its current pace, AI could become the most transformative technology in history, reshaping the world in ways we can scarcely imagine—potentially within just years.
To navigate this transition safely, the world needs talented, driven individuals to contribute—whether through fundamental research in AI labs, expanding theoretical understanding in academia, or shaping policy in think tanks and government.
CAISH is excited to introduce The Alignment Fellowship, an intensive 6-week programme that equips participants with the knowledge and tools to understand and contribute to AI safety. Whether your background is technical or policy-focused, you'll learn from leading experts in the field through workshops, lectures, paper discussions, and hands-on projects, bringing your knowledge of AI safety from 0 to 1 (binary pun intended).
The Alignment Fellowship will run from October 20th to early December 2025, in-person in Cambridge, UK.
Applications for the Alignment Fellowship are now closed.
Program Format
The programme consists of two stages:
Stage 1: Context Loading
1-3 facilitator-led workshops to introduce core AI Safety concepts and motivation.
Stage 2: Inference Time
A variety of workshops and lectures on various aspects of AI Safety, led by a mix of professionals and academics. The technical prerequisite knowledge for these workshops varies widely: some are programming-based (Python), while others are policy-oriented.
Context Loading, in AI, refers to how much relevant information or ‘context’ an AI model can consider at once when processing input or generating responses.
Inference refers to the process where a trained model makes predictions or decisions based on new input data, using patterns and relationships it learned during training. It's like how humans apply learned knowledge to new situations.
~ Claude 3.5 Sonnet
Previous workshop leads & lecturers have been from:
FAQs
- Is there a fee to participate?
No, the Alignment Fellowship does not require any fees.
- Where are the workshops held?
Workshops are all hosted in-person in Cambridge, UK.
We expect almost all applicants to already be based in either Cambridge or London. For London-based participants, we may be able to cover travel expenses from London to Cambridge if it would otherwise prohibit you from joining the programme.
- Who can apply?
We will accept applications from students, working professionals, and academics who want to expand their knowledge of AI Safety.
Applicants may be from technical backgrounds (e.g. STEM students, professional SWEs, ML engineers, etc.) as well as policy/governance backgrounds (e.g. civil servants, international studies students, public policy students, etc.)
If you have a unique background but think the programme will benefit you, please err on the side of applying!
- Do I need prior experience in AI or machine learning?
No.
For technical track applicants, some experience with machine learning or AI is a plus, but not required.
- Will food be provided?
Depending on the time of day, workshops will include either dinner or snacks for all participants attending.
- What is the weekly time commitment?
We expect participants to commit roughly 3-5 hours per week to the Alignment Fellowship, split between attending workshops, pre-readings and assignments, and social events.
For any additional questions, contact sambhav@cambridgeaisafety.org