Job Title: Research Engineer – AI Post‑Training, Safety & Alignment
Location: San Francisco Bay Area
About Us
We’re an independent research team working at the cutting edge of AI safety and post‑training technique development. Our mission is to make frontier models more reliable, interpretable, and aligned with human values — and we’re looking for experienced research engineers to help us push that frontier.
The Role
You’ll design and run experiments on the post‑training stack — from RLHF and preference optimization to scalable oversight and interpretability. We’re particularly interested in engineers who can bridge the gap between theoretical alignment research and robust, production‑level experimentation.
Responsibilities
You Might Be a Fit If You:
Why Join Us
Apply for a confidential chat!