Infrastructure Engineer
Min Experience: 5 years
Location: Bangalore
Job Type: Full-time
About the role
Company name: Simplismart | HQ Location: San Francisco
Role: Infrastructure Engineer
- Salary: Rs. 20-35 lakhs per year
- Experience: 5-10 years
- Location: Bangalore
- Type: Full-time
We are excited to announce that we recently raised $7M in funding led by Accel! As we continue to grow, we're looking for a talented Infrastructure Engineer to join our team.
Key Responsibilities:
- Design, implement, and maintain scalable infrastructure solutions.
- Monitor system performance and troubleshoot issues proactively.
- Collaborate with development teams to ensure seamless integration and deployment.
- Optimize cloud services and infrastructure costs.
- Develop and maintain documentation for infrastructure processes and systems.
Qualifications:
- Proven experience in infrastructure engineering or related fields.
- Strong understanding of cloud platforms (AWS, Azure, GCP).
- Proficiency in scripting languages (e.g., Python, Bash).
- Experience with CI/CD pipelines and containerization technologies (e.g., Docker, Kubernetes).
- Excellent problem-solving skills and attention to detail.
About the company
Simplismart offers the fastest inference for generative AI workloads, with simplified orchestration via a declarative language similar to Terraform. Deploy any open-source model and take advantage of Simplismart's optimised serving. With a growing volume of workloads, one size does not fit all; use our building blocks to personalise an inference engine for your needs.
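To make the Terraform comparison concrete, a declarative deployment spec of this kind might look like the sketch below. This is purely illustrative: every field name here is hypothetical and does not reflect Simplismart's actual configuration language.

```yaml
# Hypothetical declarative deployment spec -- all field names are
# illustrative assumptions, not Simplismart's actual DSL.
deployment:
  name: chat-service
  model:
    source: huggingface            # pull an open-source model
    repo: meta-llama/Meta-Llama-3-8B-Instruct
  serving:
    max_batch_size: 32             # tune the serving layer per workload
    quantisation: int8
  infrastructure:
    gpu: a100-40gb
    replicas:
      min: 1
      max: 4                       # autoscale with load
```

The appeal of a declarative spec, as with Terraform, is that the user states the desired end state (model, serving parameters, hardware) and the platform reconciles infrastructure to match it, rather than the user scripting each provisioning step.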
API vs In-house
Renting AI via third-party APIs has clear downsides: data security risks, rate limits, unreliable performance, and inflated costs. Every company has different inference needs: *one size does not fit all.* Businesses need control to manage their cost vs. performance tradeoffs. Hence the movement towards open source: businesses prefer small, niche models trained on relevant datasets over large generalist models that do not justify the ROI.
Need for MLOps platform
Deploying large models comes with its own hurdles: access to compute, model optimisation, scaling infrastructure, CI/CD pipelines, and cost efficiency, all requiring highly skilled machine learning engineers. Just as tooling supported the transitions to cloud and mobile, teams need tooling for the shift to generative AI. MLOps platforms simplify orchestration workflows for in-house deployment cycles. Two kinds of off-the-shelf solutions are readily available:
- Orchestration platforms with a model-serving layer: *do not offer optimised performance for all models, limiting users' ability to squeeze out performance*
- GenAI cloud platforms: *GPU brokers offering no control over cost*
Enterprises need control. Simplismart's MLOps platform provides the building blocks to assemble the inference stack they need. Its fast inference engine lets businesses run each model at high speed. The engine is optimised at three levels: the model-serving layer, the infrastructure layer, and the model-GPU-chip interaction layer, and is further enhanced with a known model-compilation technique.