Research Engineer, Frontier Red Team (CBRN, Biosecurity)

Salary

$280k - $425k

Min Experience

0 years

Location

San Francisco, CA

Job Type

full-time

About the job

About the role

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

We're building a team that will research and mitigate extreme risks from future models. This team will intensively red-team models to test the most significant risks they might pose in areas such as biosecurity, cybersecurity, or autonomy. We believe that clear demonstrations can significantly advance technical research and mitigations, as well as identify effective policy interventions to promote and incentivize safety.

As part of this team, you will lead research to baseline current models and test whether future frontier capabilities could cause significant harm. Day-to-day, you may decide you need to finetune a model to see whether it becomes superhuman in an eval you've designed; whiteboard a threat model with a national security expert; test a new training procedure or how a model uses a tool; or brief government, labs, and other research teams. Our goal is to see the frontier before we get there.

We're currently hiring for our CBRN workstream, with an emphasis on biosecurity risks (as outlined in our Responsible Scaling Policy). By nature, this team will be an unusual combination of backgrounds. We are particularly looking for people with experience in these domains:

Biosecurity: You're a computational biologist who's concerned about the implications of AI development. You're an academic who researches biosecurity defense. You have experience modeling biological phenomena or developing advanced threat modeling simulations.

Science: You're an ML researcher who builds agents to augment chemistry or biology research. You've built a protein language model and enjoyed looking through the embedding space. You're a team lead at an ML-for-drug-discovery company. You've built software for astronauts or materials scientists.

Evaluations: You've managed a large-scale benchmark development project, in AI or other domains. You have ideas about how AI and ML evaluations can be better.

About the company

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

Skills

python
machine learning
computational biology
molecular biology
bioengineering
bioinformatics