Caroline Bishop
Apr 08, 2026 17:45
OpenAI announces a new fellowship program for external researchers focused on AI safety and alignment, running September 2026 through February 2027.
OpenAI is opening its doors to outside researchers with a new Safety Fellowship program aimed at advancing independent work on AI alignment and safety challenges. Applications are now open, with a May 3 deadline.
The five-month program runs from September 14, 2026 through February 5, 2027, targeting researchers, engineers, and practitioners who want to tackle safety questions affecting both current and future AI systems. OpenAI has partnered with Constellation to provide workspace in Berkeley, though remote participation is an option.
What OpenAI Wants
The company outlined priority research areas including safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety techniques, agentic oversight, and high-severity misuse domains. It is specifically seeking work that is "empirically grounded, technically robust, and relevant to the broader research community."
Fellows will not get internal system access, a notable limitation, but they will receive API credits, compute support, a monthly stipend, and mentorship from OpenAI staff. The expectation is clear: produce something tangible by the program's end, whether that is a research paper, a benchmark, or a dataset.
Who Should Apply
OpenAI is casting a wide net on backgrounds. Computer science is the obvious one, but the company is also welcoming candidates from social science, cybersecurity, privacy, and human-computer interaction fields. It explicitly stated that it will "prioritize research ability, technical judgment, and execution over specific credentials."
Letters of reference are required. Successful candidates will be notified by July 25.
The Bigger Picture
This fellowship arrives as AI safety concerns have moved from academic debate to mainstream regulatory discussion. OpenAI has faced criticism over the years for allegedly deprioritizing safety research in favor of capability development, a tension that led to high-profile departures from its safety team.
The program represents an attempt to cultivate external safety research talent while potentially deflecting some of that criticism. Whether it signals a genuine shift in priorities or serves primarily as an optics play remains to be seen.
For researchers interested in AI safety work with access to OpenAI resources and mentorship, applications close May 3 on the program's official page. Questions can be directed to openaifellows@constellation.org.
Image source: Shutterstock


