Train better AI models

OpenAI Alignment team


OpenAI is an artificial intelligence research company best known for its language models. The Alignment team works on making sure models behave in ways humans find desirable: models should never perpetuate systemic racism, harass users, spread disinformation, and so on. Unfortunately, we are not yet as good at preventing these behaviors as we want to be. We believe a big part of the improvement will come on the human data collection side, and that there are many interesting problems at the intersection of sociology and machine learning. We have a few full-time job openings for this work. Example jobs:
https://jobs.lever.co/openai/4fe793c7-5591-412a-95a6-8b787b1e8ade
https://jobs.lever.co/openai/93ee05c7-74ee-4a9d-a32e-5fa88e286f1c


Project scope

Scope version notes