Procore Technologies
Website:
procore.com
Job details:
Job Ad
We are looking for a Senior Engineer, GTM AI to join the Procore India team. In this role, you will be on the front lines of product development, turning conceptual agentic workflows into production-grade features that directly impact Procore's sellers. You will be responsible for implementing robust, scalable, and high-performance AI services, integrating complex agentic loops, and ensuring that our code remains maintainable as we scale. You will work closely with product managers and other engineers to deliver the "how"—the actual code, the data pipelines, and the LLM integrations that make our platform intelligent.
Key Responsibilities
Core Development & Implementation
- Agentic Workflows: Implement complex agentic logic using frameworks like LangGraph, Workato, or similar. You will be responsible for coding the "decision pathways" that allow our agents to act autonomously and reliably.
- Service Reliability: Design and deploy high-performance microservices. You are responsible for ensuring low-latency response times for AI-powered features, optimizing for both cost and speed.
- Code Quality & Standards: Write clean, modular, and well-tested code. You will lead by example in code reviews, ensuring that our shared codebase adheres to strict engineering standards.
- Data Integration: Build and maintain the ETL and ingestion pipelines that feed our vector and graph databases, ensuring our agents have access to the most relevant, up-to-date context.
Collaboration & Execution
- Technical Troubleshooting: Serve as the subject matter expert when features break. You will debug complex issues across the LLM-infrastructure stack—from prompt hallucination to database latency.
- Collaboration: Partner with the GTM Staff Engineer and Product Managers to define implementation specs. You will translate requirements into technical tasks, breaking large problems down into sprint-sized deliverables.
Engineering Outcomes You'll Own
- Production-Ready Features: Take a feature from PRD to production, ensuring it is secure, performant, and reliable under load.
- Low-Latency Agent Loops: Optimize the "Time to First Token" and overall workflow completion time for our end users.
- Refined Codebase: Reduce technical debt by identifying inefficiencies in our existing AI services and proactively refactoring for better maintainability.
- Observability Mastery: Build comprehensive logging and monitoring for our AI agents, ensuring we know exactly why a decision was made whenever we need to audit the system.
Technical Requirements
- Languages: Expert-level Python proficiency, with a focus on writing high-performance, maintainable code (async/await, type hinting, and concurrency).
- AI/ML Stack: Strong hands-on experience with LLM integration, building RAG (Retrieval-Augmented Generation) pipelines, advanced prompt engineering, and managing vector databases (e.g., Pinecone, Milvus, Chroma).
- Agentic Frameworks: Proven experience architecting and implementing solutions using frameworks such as LangGraph, CrewAI, or Semantic Kernel.
- Cloud Infrastructure: Solid operational experience with AWS (ECS/EKS, Lambda, SQS), containerization with Docker, and maintaining production-grade CI/CD pipelines.
- Data Systems: Proficiency in SQL (specifically PostgreSQL) and NoSQL databases, with a focus on optimizing for high-speed, scalable data retrieval.
- Engineering Standards: Mastery of RESTful/gRPC API design, rigorous unit testing (e.g., pytest), and implementing observability tools (e.g., Datadog, New Relic) to ensure system reliability and performance.
Experience Profile
- 4–7 years of experience in software engineering, ideally in a product-focused environment.
- Proven history of shipping and maintaining production code that serves real users.
- Comfortable working in an agile environment with rapid release cycles.
- Strong "Security-by-Design" mindset—you consider how to prevent prompt injection and data leakage at the code level.
Nice to Have
- Experience building internal tools for Sales/Revenue Operations (CRM integrations).
- Familiarity with "Human-in-the-Loop" (HITL) workflows in AI systems.
- Contributions to open-source AI projects or frameworks.
- Experience with AI Evaluation (Evals) and tracking the performance of LLM outputs over time.
Click Apply to learn more.