Senior Data Engineer | GPM

Min Experience

5 years

Location

Chennai

Job Type

Full-time

About the role

We are hiring on behalf of our client, a leading ad-tech company, supported by a Nasdaq-listed parent company with over 5,000 employees worldwide.

Role Overview:

We are looking for a talented Senior Data Engineer with expertise in relational and NoSQL databases, ETL processes, Kafka, data warehouses such as Snowflake and Redshift, and cloud platforms like AWS, GCP, or Azure to join a highly collaborative and agile team. If you have experience in product companies or startups and thrive on solving complex technical challenges, this is the perfect opportunity for you!

Experience: 5-9 years
Location: Chennai
Work Mode: Hybrid (flexible)

Qualifications: 

  • 5-9 years of experience in data engineering, with a focus on building and managing data pipelines.
  • Strong proficiency in relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
  • Experience in building data pipelines with data warehouses like Snowflake and Redshift.
  • Experience in processing unstructured data stored in S3 using Athena, Glue, etc.
  • Hands-on experience with Kafka for real-time data streaming and messaging.
  • Solid understanding of ETL processes, data integration, and data pipeline optimization.
  • Proficiency in programming languages like Python, Java, or Scala for data processing.
  • Experience with Apache Spark for big data processing and analytics is an advantage.
  • Familiarity with cloud platforms like AWS, GCP, or Azure for data infrastructure is a plus.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Key Responsibilities:

  • Design, build, and maintain efficient and scalable data pipelines to support data integration and transformation across various sources.
  • Work with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra) to manage and optimize large datasets.
  • Utilize Apache Spark for distributed data processing and real-time analytics.
  • Implement and manage Kafka for data streaming and real-time data integration between systems.
  • Collaborate with cross-functional teams to gather and translate business requirements into technical solutions.
  • Monitor and optimize the performance of data pipelines and architectures, ensuring high availability and reliability.
  • Ensure data quality, consistency, and integrity across all systems.
  • Stay up to date with the latest trends and best practices in data engineering and big data technologies.

Skills

Relational Databases
NoSQL
ETL
Kafka
Snowflake
Redshift
AWS
Microsoft Azure
GCP
SQL