Big Data Engineer

Min Experience

3 years

Location

Bangalore

Job Type

full-time

About the role

Role: Big Data Engineer
Experience: 4-7 Years
Location: Bangalore & Hyderabad
Work Mode: Hybrid
Mandatory Skills: Java / Python, Hadoop, MapReduce, AWS, APIs, Data Structures & Algorithms
 

Job Overview:
Our client works with a variety of clients, from start-ups to Fortune 500 companies. We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer.
This position will be part of a growing team working towards building world-class, large-scale Big Data architectures. This individual should have a sound understanding of programming principles, experience programming in Java, Python, or similar languages, and can expect to spend a majority of their time coding.
 

Responsibilities:
● Hands-on coder with strong experience in programming languages such as Java, Python, or Scala.
● Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
● Good understanding of programming principles and development practices such as check-in policies, unit testing, and code deployment.
● Self-starter able to grasp new concepts and technologies and translate them into large-scale engineering developments.
● Strong experience in application development and support, integration development, and data management.
● Work daily with customers across leading Fortune 500 companies to understand strategic requirements.
● Stay up to date on the latest technologies to ensure the greatest ROI for customers.
● Hands-on coder with a good understanding of enterprise-level code.
● Design and implement APIs, abstractions and integration patterns to solve challenging distributed computing problems
● Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a parallel-processing environment.
● Must be a strategic thinker with the ability to think unconventionally / out of the box.
● Analytical and data-driven orientation.
● Raw intellect, talent, and energy are critical.
● Understands the demands of a private, high-growth company.
● Ability to be both a leader and a hands-on "doer".

Qualifications:
● 4+ years of relevant work experience and a degree in Computer Science or a related technical discipline are required.
● Experience with functional and object-oriented programming in Java, Python, or Scala is a must.
● Hands-on experience with the Big Data stack, including Hadoop, MapReduce, Spark, HBase, and Elasticsearch.
● Good understanding of AWS services and experience working with APIs and microservices.
● Effective communication skills (both written and verbal)
● Ability to collaborate with a diverse set of engineers, data scientists and product managers
● Comfort in a fast-paced start-up environment
 

Preferred Qualifications:
● Experience in agile methodology
● Experience with database modeling and development, data mining and warehousing.
● Experience in the architecture and delivery of enterprise-scale applications, with the ability to develop frameworks, design patterns, etc.
● Should be able to understand and tackle technical challenges, propose comprehensive solutions, and guide junior staff
● Experience working with large, complex data sets from a variety of sources

Skills

Java
Hadoop
HDFS
MapReduce
Data Structures
Algorithms
AWS