Research Scientist/Engineer, Alignment Finetuning

Salary

$280k - $425k

Min Experience

0 years

Location

San Francisco, CA

Job Type

full-time

About the job

About the role

As a Research Scientist/Engineer on the Alignment Finetuning team at Anthropic, you'll lead the development and implementation of techniques for training language models that are more aligned with human values: models that demonstrate better moral reasoning, improved honesty, and good character. You'll develop novel finetuning techniques and use them to demonstrably improve model behavior.

Responsibilities:

- Develop and implement novel finetuning techniques using synthetic data generation and advanced training pipelines
- Use these techniques to train models with better alignment properties, including honesty, character, and harmlessness
- Create and maintain evaluation frameworks to measure alignment properties in models
- Collaborate across teams to integrate alignment improvements into production models
- Develop processes to help automate and scale the team's work

You may be a good fit if you:

- Have an MS/PhD in Computer Science, ML, or a related field, or equivalent experience
- Possess strong programming skills, especially in Python
- Have experience with ML model training and experimentation
- Have a track record of implementing ML research
- Demonstrate strong analytical skills for interpreting experimental results
- Have experience with ML metrics and evaluation frameworks
- Excel at turning research ideas into working code
- Can identify and resolve practical implementation challenges

Strong candidates may also have:

- Experience with language model finetuning
- A background in AI alignment research
- Published work in ML or alignment
- Experience with synthetic data generation
- Familiarity with techniques like RLHF, constitutional AI, and reward modeling (see the sketch after this list)
- A track record of designing and implementing novel training approaches
- Experience with model behavior evaluation and improvement
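For candidates unfamiliar with reward modeling, here is a minimal illustrative sketch of the pairwise (Bradley-Terry) objective commonly used in RLHF-style reward model training. This is not Anthropic's actual pipeline; the model, dimensions, and data below are all hypothetical stand-ins.

import torch
import torch.nn as nn

class ToyRewardModel(nn.Module):
    """Hypothetical stand-in: maps a fixed-size response embedding to a scalar reward."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical batch: embeddings of a preferred ("chosen") and a
# dispreferred ("rejected") response to the same prompt.
chosen = torch.randn(8, 16)
rejected = torch.randn(8, 16)

# Bradley-Terry pairwise loss: push reward(chosen) above reward(rejected).
loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
print(f"pairwise loss: {loss.item():.4f}")

In a real pipeline the embeddings would come from a language model scoring full responses, and the trained reward model would then guide finetuning; this sketch only shows the shape of the preference objective.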

About the company

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Skills

python
ml
ml model training
ml research
ml metrics
synthetic data generation
rlhf
constitutional ai
reward modeling