About the role
We are looking for ML engineers to help build safety and oversight mechanisms for our AI systems. As a Trust and Safety Machine Learning Engineer, you will train models that detect harmful behaviors and help ensure user well-being. You will apply your technical skills to uphold our principles of safety, transparency, and oversight while enforcing our terms of service and acceptable use policies.
Responsibilities:
Build machine learning models to detect unwanted or anomalous behaviors from users and API partners, and integrate them into our production system.
Improve our automated detection and enforcement systems as needed.
Analyze user reports of inappropriate accounts and build machine learning models to detect similar instances proactively.
Surface abuse patterns to our research teams to harden models at the training stage.
You may be a good fit if you:
Have 4+ years of experience in a research/ML engineering or applied research scientist role, preferably with a focus on trust and safety.
Have proficiency in SQL, Python, and data analysis/data mining tools.
Have proficiency in building trust and safety AI/ML systems, such as behavioral classifiers or anomaly detection models.
Have strong communication skills and the ability to explain complex technical concepts to non-technical stakeholders.
Care about the societal impacts and long-term implications of your work.
Strong candidates may also have experience with:
Machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch.
High-performance, large-scale ML systems.
Language modeling with transformers.
Reinforcement learning.
Large-scale ETL.
About the company
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.