Research Engineer - Data

Salary

$350k - $400k

Location

Menlo Park, California, United States

Job type

Full-time

About the job

This job is sourced from a job board.

About Periodic Labs

The most important scientific discoveries of our time won’t happen in a traditional lab. We’re an AI and physical sciences company building state-of-the-art models to accelerate breakthroughs across materials, energy, and beyond. Backed by world-class investors and growing rapidly, we operate at the pace the frontier requires. Our team brings deep expertise, genuine ownership, and an insatiable drive to push the boundaries of what’s scientifically possible.

About the Role

You will build and drive the data foundation for our research efforts. This means owning data strategy end-to-end: sourcing and procuring external datasets, integrating internally generated experimental data into the training stack, and ensuring the team always has the right data — in the right shape — to train and improve frontier models.

This role sits at the intersection of data engineering, research infrastructure, and strategy. You will work closely with pretraining, midtraining, and RL researchers to understand what data the models need, then build the pipelines and systems to get it there. The work spans collecting and organizing diverse data sources, improving data quality through deduplication and preprocessing, and ensuring that new experimental results are incorporated in a structured, repeatable way that makes them useful for model development.

What You’ll Do

  • Own data strategy across the training stack — identifying gaps, evaluating new sources, and shaping the overall data roadmap in collaboration with research leads

  • Source, evaluate, and procure external datasets across scientific domains including chemistry, physics, materials science, mathematics, and lab instrumentation

  • Build and maintain robust pipelines for ingesting, processing, and versioning large-scale datasets from heterogeneous sources

  • Design and implement data quality systems including deduplication, domain classification, quality filtering, and format normalization at scale

  • Integrate internally generated experimental data — from lab instrumentation, simulations, and model outputs — into the training stack in a structured and repeatable way

  • Build tooling that makes it easy for researchers to inspect, query, and understand the data that goes into training runs

  • Instrument data pipelines with metadata, lineage tracking, and versioning so experiments are reproducible and data decisions are auditable

  • Collaborate with pretraining and midtraining engineers on token budget management, data mixing ratios, and curriculum design

  • Stay current with research on data-efficient training, synthetic data generation, and data selection methods — and bring relevant ideas into production

You Will Thrive in This Role If You Have

  • Experience building large-scale data pipelines for LLM pretraining or midtraining, including web-scale or scientific corpora

  • Expertise in data quality techniques such as exact and fuzzy deduplication (MinHash, SimHash), perplexity filtering, classifier-based quality scoring, and PII scrubbing

  • Experience working with diverse scientific data formats — papers, patents, structured databases, simulation outputs, lab instrument exports — and normalizing them for model consumption

  • Experience with distributed data processing frameworks such as Apache Spark, Ray, or Dask at multi-terabyte to petabyte scale

  • Familiarity with dataset versioning, lineage tracking, and reproducibility tooling such as DVC, Delta Lake, or custom solutions

  • Experience sourcing and evaluating third-party datasets, including licensing considerations and quality assessment

  • Strong Python engineering skills and comfort building production-quality tooling in a research environment

  • Experience collaborating directly with ML researchers to translate data needs into pipeline requirements and back again

  • A research-oriented mindset — you run experiments on data, measure outcomes, and iterate with rigor

Especially Strong Candidates May Also Have

  • Experience curating scientific datasets specifically for domain-adaptive continued pretraining or instruction tuning

  • Familiarity with synthetic data generation methods, including model-generated data pipelines and quality verification

  • A background in a physical science or engineering discipline that informs how you think about scientific data quality and structure

  • Experience with multimodal data — integrating text, structured numerical data, molecular representations, or spectral data into unified training pipelines

Mechanics

Minimum education: Bachelor’s degree or an equivalent combination of education and training or experience

Location: Our lab is in Menlo Park. We prefer candidates based in Menlo Park or San Francisco, but can be flexible depending on the role.

Compensation: The annual base compensation range for this role is $350,000-$400,000, commensurate with experience.

Visa sponsorship: Yes, we sponsor visas and, with our legal support, will do everything we can to assist in this process.

We’re building a team of the world’s best — the scientists, engineers, and problem-solvers who don’t just follow the frontier, they define it. If you’re driven to bring AI to life in the physical world and make discoveries that have never been made before, you belong here.

About the company

Builds autonomous laboratories for AI-driven scientific discovery.

Skills

Python
Apache Spark
Ray
Dask
Delta Lake
DVC