About the Team
The Safety Systems team is at the forefront of OpenAI's mission to build and deploy safe AGI, driving our commitment to AI safety and fostering a culture of trust and transparency.
The Model Policy team aligns model behavior with desired human values and norms. We co-design policy with models and for models. Key focus areas include: addressing critical societal challenges like info-hazard risks and how the model should respond in mental health contexts; defining evaluation criteria for foundational models’ ability to reason about safety, values, and questions of cultural norms; and driving rapid policy taxonomy iteration based on data.
About the Role
Providing access to powerful AI models raises a host of challenging questions about model safety: How do we define safe model behavior, and to what end? How do we do so in a way that is actionable, objective, and replicable?
This is a senior role in which you’ll help shape policy creation and development at OpenAI and make an impact by helping ensure that our groundbreaking technologies do not create harm. The ideal candidate can identify and develop cohesive and thoughtful taxonomies of harm on high risk topics with a sense of urgency. They can balance internal and external input in making complex decisions, carefully think through trade-offs, and write principled, enforceable policies based on our values. Importantly, this role is embedded in our research teams and directly informs model training.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you’ll:
Design model policies that govern safe model behavior in an objective and defensible way. For example: how should the model respond in risky or unsafe scenarios, and what counts as "unsafe"?
Develop taxonomies that inform data collection campaigns, model behavior, and monitoring strategies, balancing the goal of maximizing utility against the need to prevent catastrophic risk.
Lead prioritization for safety efforts across the company for new model launches, understanding and addressing technical and business trade-offs.
Develop a broad range of subject matter expertise while maintaining agility across topics.
Work across many internal teams, which requires high organizational acumen and confident decision-making.
You might thrive in this role if you:
Have extensive experience researching LLMs, ML, AI, tech policy, or moral reasoning, and/or enjoy classification problems.
Have extensive experience defining, refining and enforcing policies for ML models.
Deeply understand the operational challenges of enforcing policies with RLHF and can incorporate this into policy design.
Can analyze the benefits and risks of open-ended problem spaces; can generate ideas required to solve ambiguous problems and take full ownership of the solution.
Most relevant publications:
OpenAI o1 System Card (Section 3)
GPT-4o System Card (Section 3)
GPT-4 System Card (Sections 2,3,4)
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.