AWS Data Engineer + Hadoop + Spark

Salary

₹18 - 30 LPA

Min Experience

3 years

Location

Hyderabad, Telangana, India

Job Type

Full-time

About the job

Job Summary

Tech skills: Java, Spark (Streaming and Batch), Hadoop, AWS (EMR, EC2, Athena, Glue)

Responsibilities:

- Design, develop, and implement large-scale data processing solutions using Java, Spark Streaming, Spark Batch, and Hadoop (a brief illustrative sketch follows the lists below).
- Build and maintain data pipelines for real-time and batch processing, ensuring high performance and reliability.
- Develop and optimize data storage and retrieval systems using the Hadoop Distributed File System (HDFS) and related technologies.
- Use AWS services (e.g., EMR, S3, Lambda, Glue, Airflow) to build and deploy cloud-based big data solutions.
- Implement data ingestion, transformation, and storage processes for various data sources.
- Optimize Spark applications for performance and scalability.
- Troubleshoot and resolve complex data processing issues.

Must-have skills:

- AWS expertise (EMR, EC2, Athena, Glue)
- Hadoop ecosystem
- Spark processing engine expertise
- Expertise in at least one programming language (Java, Scala, or Python)

Nice-to-have skills:

- Airflow scheduling
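To give a sense of the batch-processing work described above, here is a minimal sketch of a Spark batch job in Java. It is illustrative only, not part of the posting: the bucket paths, column names, and class name are all hypothetical.

```java
// Minimal illustrative Spark batch job (hypothetical paths and columns):
// read raw JSON events from S3, filter and project them, and write
// partitioned Parquet back to S3 for downstream Athena/Glue queries.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class EventBatchJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("EventBatchJob")
                .getOrCreate();

        // Read raw JSON events from S3 (hypothetical bucket and prefix).
        Dataset<Row> events = spark.read().json("s3://example-bucket/raw/events/");

        // Drop malformed records and keep only the fields downstream jobs need.
        Dataset<Row> cleaned = events
                .filter(col("eventType").isNotNull())
                .select(col("eventId"), col("eventType"), col("eventTime"));

        // Write partitioned Parquet so Athena/Glue can query it efficiently.
        cleaned.write()
                .mode("overwrite")
                .partitionBy("eventType")
                .parquet("s3://example-bucket/curated/events/");

        spark.stop();
    }
}
```

On EMR, a job like this would typically be packaged as a JAR and launched with spark-submit as an EMR step.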

Skills

Java
Spark
Hadoop
AWS