DevOps Engineer (EST only)

Min Experience

3 years

Location

United States

Job Type

Full-time

About the job

About the role

At the forefront of health tech innovation, CopilotIQ+Biofourmis is transforming in-home care with the industry's first AI-driven platform that supports individuals through every stage of their health journey, from pre-surgical optimization to acute, post-acute, and chronic care. We are helping people live healthier, longer lives by bringing personalized, proactive care directly into their homes. With CopilotIQ's commitment to enhancing the lives of seniors with chronic conditions and Biofourmis' advanced data-driven insights and virtual care solutions, we're setting a new standard in accessible healthcare. If you're passionate about driving real change in healthcare, join the CopilotIQ+Biofourmis Team!

What is the DevOps Engineer role?

CopilotIQ is looking for a DevOps Engineer (individual contributor; full-time remote position working EST hours). Your primary responsibility will be to design and develop infrastructure solutions that ensure scalability, high availability, and cost-efficiency for our cloud-based services. Your work will directly impact the deployment, monitoring, and security of our critical healthcare software infrastructure.

This is an incredible role for someone looking to join one of the most exciting companies in healthcare technology, with a strong mission and quantifiable improvements in patient outcomes. Your work will directly impact people's lives!

What you'll be doing:

  • Build and manage our cloud infrastructure for scalability, high availability, and cost-efficiency;
  • Design and implement Infrastructure-as-Code (IaC) driven deployments for cloud-native services, enabling fully automated continuous deployments across Staging, Test, and Production environments;
  • Deploy and operate AWS microservices and serverless workloads;
  • Build and maintain efficient CI/CD pipelines (Jenkins, ArgoCD, GitHub Actions);
  • Own monitoring and alerting flows, and ensure general observability and maintainability of all our critical services;
  • Implement disaster recovery and backup/restore strategies;
  • Strengthen AWS infrastructure security by managing AWS WAF, KMS keys, Secrets Manager, SSL/TLS certificates, IAM, CloudTrail logs, and more;
  • Contribute heavily to system reliability, including being on-call.

What you'll bring:

  • Bachelor's degree in Computer Science or a related field;
  • 3-5 years of combined Software Engineering/DevOps experience, with at least 2 years in DevOps focused on running and maintaining cloud-deployed SaaS applications;
  • A mission-driven, strong collaborator who is passionate about the product and cares deeply about growth, learning, and adding meaningful impact through efficient technical work;
  • Self-motivated with a proactive, "go-getter" mentality;
  • Broad knowledge of software engineering concepts, practices, and procedures for product and solution development and deployment at scale;
  • Experience with user authentication and authorization technologies and general security practices (familiarity with any of JWT, OAuth, Open Policy Agent, or IAM roles is definitely a plus);
  • Experience documenting cloud infrastructure and associated processes.

Technologies Required:

  • Solid fundamentals of networking and container technologies, especially Docker and Kubernetes, including working knowledge of major Internet protocols such as HTTP, DNS, TCP, and IP routing;
  • Solid foundation in AWS services and best practices, including EKS, EC2, Lambda, API Gateway, S3, DynamoDB, RDS, Route 53, and CloudWatch;
  • Solid expertise with at least one IaC tool (e.g., CloudFormation, Terraform, CDK);
  • Experience with Datadog or Prometheus/Grafana (ELK/Loki a plus).

Bonus Points for:

  • AWS CDK (Cloud Development Kit; TypeScript/Java/Python) and AWS Solutions Constructs;
  • Experience deploying and maintaining ML/AI workloads in production (batch and real-time inference), supporting LLM-based systems (prompt pipelines, RAG, embeddings, etc.), and integrating ML workflows into standard CI/CD systems (GitHub Actions, CodePipeline, etc.);
  • Building or integrating OAuth/OIDC flows; general familiarity with the authentication/authorization space, including RBAC/ACLs and OPA;
  • Exposure to microservices/serverless patterns (ECS/Lambda/API Gateway);
  • Awareness of the OWASP Top 10 and cloud security scanning (image/code);
  • Work experience in the healthcare space and familiarity with HIPAA regulations (at minimum as they pertain to ingestion, processing, storage, and serving of patient data).

About the company

Provides AI-driven remote monitoring and telehealth for senior chronic care.

Skills

Docker
Kubernetes
AWS
Jenkins
ArgoCD
GitHub Actions
Terraform
CloudFormation
CDK
Datadog
Prometheus
Grafana
API Gateway
S3
DynamoDB
RDS
CloudWatch
OAuth
JWT
IAM