UST
Website: ust.com
Job Summary:
We are looking for a talented Pipeline Engineer to design, develop, and optimize scalable data and processing pipelines that support large-scale distributed systems. The ideal candidate will have hands-on experience with microservices, big data technologies, and modern CI/CD workflows. You will play a crucial role in building and maintaining the data and service pipelines that power our core systems.
________________________________________
Mandatory: Java, CI/CD, AWS, Kafka, Docker, Kubernetes
Good to have: Helm, ETL
Key Responsibilities:
Design, build, and maintain robust and scalable data pipelines using Java or Scala.
Develop microservices and backend components using Spring Boot.
Implement and support big data processing workflows leveraging tools like Kafka, Spark, Zeppelin, and Hadoop (see the minimal Kafka sketch after this list).
Design efficient data integration workflows that ensure high performance and reliability.
Work with distributed data stores such as S3, Cassandra, MongoDB, Couchbase, Redis, and Elasticsearch.
Apply modern serialization techniques using Protocol Buffers, Avro, or Thrift for high-efficiency data exchange.
Set up and maintain CI/CD pipelines to support automated testing and continuous delivery.
Containerize applications using Docker and orchestrate them using Kubernetes and Helm.
Utilize AWS cloud services to deploy and manage data infrastructure and services.
Collaborate with cross-functional teams to ensure data quality, reliability, and performance.
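For context on the mandatory Java + Kafka pairing, here is a minimal consumer sketch: it subscribes to one topic and prints each record as it arrives. This is an illustrative example only, not UST code; the broker address, group ID, and topic name ("events") are hypothetical placeholders.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EventsConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical broker and group; replace with real cluster settings.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "pipeline-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // hypothetical topic name
            while (true) {
                // Poll for new records; a real pipeline would transform and
                // forward them (e.g., to S3 or Elasticsearch) instead of printing.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}
```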
________________________________________
Required Skills & Experience:
2+ years of hands-on experience building scalable microservices using Java or Scala.
Strong experience with Spring Boot for backend service development.
Solid understanding of functional programming concepts.
Experience implementing CI/CD pipelines, with version control in Git and automation tools such as Jenkins or GitLab CI.
Proficiency with big data technologies such as Kafka, Spark, Zeppelin, Hadoop, and AWS EMR.
Experience working with distributed storage and database solutions like S3, Cassandra, MongoDB, Elasticsearch, Couchbase, or Redis.
Familiarity with modern data serialization formats (Protocol Buffers, Avro, Thrift).
Experience with Docker and Kubernetes, including Helm for managing Kubernetes applications.
Familiarity with AWS cloud services and infrastructure management.
________________________________________
Preferred Qualifications:
Experience with real-time data streaming and processing.
Familiarity with observability tools (e.g., Prometheus, Grafana, ELK).
Knowledge of security practices related to data pipelines and API interactions.
Exposure to infrastructure as code (IaC) tools like Terraform or CloudFormation.
________________________________________
Soft Skills:
Good written and verbal communication
Strong sense of ownership and ability to drive tasks independently
Proactive about raising blockers and suggesting solutions
Able to collaborate effectively across backend, frontend, and DevOps teams
Comfortable working in a fast-paced, asynchronous environment
Click on Apply to learn more.