AWS Data Engineer

Location

Bengaluru, Karnataka, India

Job type

full-time

About the job

This job is sourced from a job board.

About the role

Capco

Website: capco.com
Job details:
AWS DATA ENGINEER

Job Description

  • Design, develop, test, deploy, and maintain large-scale data pipelines on AWS using native AWS services, most likely AWS Glue and AWS Data Migration Service (DMS).
  • Must have prior experience in data engineering on the AWS platform.
  • Good understanding of Spark, PySpark, Python, and SQL.
  • Experience working with AWS data stores, including S3, RDS, DynamoDB, and AWS data lakes; will have used these technologies in previously executed projects.
  • Should have a good understanding of AWS services such as Redshift, Kinesis streaming, Glue, Iceberg, Lambda, Athena, S3, EC2, SQS, and SNS.
  • Should understand monitoring and observability toolsets, e.g. CloudWatch and Tivoli Netcool.
  • Basic understanding of AWS networking: VPCs, security groups, subnets, and load balancers.
  • Collaborate with cross-functional teams to gather technical requirements and deliver high-quality ETL solutions.
  • Strong AWS development experience for data ETL/pipeline/integration/automation work.
  • Should have a deep understanding of the Data & Analytics solution development lifecycle.
  • Strong knowledge of CI/CD and Jenkins; able to write testing scripts and automate as much as possible.
  • Infrastructure as code (IaC) with Terraform or CloudFormation.
  • Basic understanding of containers.
  • Bitbucket/Git.
  • Experience working in an agile/scrum team.
  • Experience in the Private Banking/Wealth Management domain.
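To make the pipeline work above concrete, here is a minimal, self-contained sketch of the extract-transform-load pattern the role describes. It runs locally with only the Python standard library; the field names and sample data are hypothetical. In an actual AWS Glue job, the extract and load steps would read from and write to S3, and the transform would operate on Spark DataFrames rather than Python dicts.

```python
import csv
import io
import json

def extract(csv_text):
    """Parse raw CSV text (as a Glue job might read from S3) into dict records."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(records):
    """Drop malformed rows and cast amounts to float, analogous to a
    Spark filter + withColumn step in a PySpark Glue script."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # skip rows with a missing or non-numeric amount
        out.append({"account": r["account"], "amount": round(amount, 2)})
    return out

def load(records):
    """Serialize to JSON Lines, as a job might write back to an S3 data lake."""
    return "\n".join(json.dumps(r) for r in records)

# Hypothetical sample input: one valid row, one malformed row, one valid row.
raw = "account,amount\nA-1,10.5\nA-2,oops\nA-3,3.25\n"
print(load(transform(extract(raw))))
```

The three-function split mirrors how Glue jobs are usually structured, which keeps the transform step unit-testable independently of any AWS connectivity.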
Click Apply to learn more.

Skills

Python
Agile
AWS
Bitbucket
CI
CloudFormation
CloudWatch
cross-functional
data engineer
data lake
DynamoDB
EC2
ETL
Git
Jenkins
Lambda
Spark
SQL
Terraform
VPC