About the role
xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge.
Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who enjoy challenging themselves and are driven by curiosity.
We operate with a flat organizational structure. All employees are expected to be hands-on and to contribute directly to the company's mission. Leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important.
All engineers and researchers are expected to have strong communication skills. They should be able to concisely and accurately share knowledge with their teammates.
Tech Stack
Python / Rust
PyTorch / JAX
CUDA / CUTLASS / Triton / NCCL
Kubernetes
SGLang: This team leads the development of SGLang (https://github.com/sgl-project/sglang/tree/main), one of the most popular open-source inference engines, so you will have the opportunity to contribute to open-source projects.
Location
The role is based in the Bay Area (San Francisco and Palo Alto). Candidates are expected to be located in or near the Bay Area, or open to relocating.
Focus
Optimizing the latency and throughput of model inference.
Building reliable production serving systems to serve millions of users.
Accelerating research on scaling test-time compute.
Ideal Experiences
Worked on system optimizations for model serving, such as batching, caching, load balancing, and model parallelism.
Worked on low-level optimizations for inference, such as GPU kernels and code generation.
Worked on algorithmic optimizations for inference, such as quantization, distillation, and speculative decoding.
Worked on large-scale, highly concurrent production serving.
Worked on testing, benchmarking, and reliability of inference services.