Senior Software Engineer - Data

Location

Bengaluru, Karnataka, India

Job Type

Full-time

About the job

Uber

Website: uber.com

About The Role

As an Engineer on the Earner Data Intelligence team, you will work on large-scale data pipelines and datasets that are critical and foundational to Uber's decision-making for a better customer experience. You will work with petabytes of analytics data from multiple Uber applications, helping build the software systems and data models that enable data scientists to better understand user behavior, and thriving in Uber's data-driven culture.

About The Team

The Earner Data Intelligence team is responsible for designing the core foundational datasets that are critical to understanding customer needs and that help business teams make the right decisions on these critical problems. The team's mission is to ensure high quality across all critical analytics data flows in every vertical at Uber, and to enable faster delivery of data needs by building standardized tools and frameworks for accurate analysis. We are currently revamping all critical analytical data flows across domains to build high-quality datasets and frameworks used across Uber.

Basic Qualifications

  • 7+ years of data engineering experience working with large data volumes and diverse data sources.
  • Strong data modeling skills, domain knowledge, and domain mapping experience.
  • Strong experience with SQL, including writing complex queries.
  • Experience with other programming languages such as Java, Scala, or Python.
  • Good problem-solving and analytical skills.
  • Good communication, mentoring, and collaboration skills.

Preferred Qualifications

  • Extensive experience in data engineering and working with big data.
  • Experience with ETL or streaming data, and with one or more of Kafka, HDFS, Apache Spark, Apache Flink, or Hadoop.
  • Experience building backend services and familiarity with one of the major cloud platforms (AWS, Azure, or Google Cloud).

What the Candidate Will Do

  • Define the Source of Truth (SOT) and dataset design for multiple Uber teams.
  • Identify unified data models in collaboration with Data Science teams.
  • Streamline processing of the original event sources and consolidate them into source-of-truth event logs.
  • Build and maintain real-time/batch data pipelines that consolidate and clean up usage analytics.
  • Build systems that monitor data loss from the different sources and improve data quality.
  • Own the data quality and reliability of Tier-1 and Tier-2 datasets, including maintaining their SLAs, TTLs, and consumption.
  • Devise strategies to consolidate and compensate for data loss by correlating different sources.
  • Solve challenging data problems with cutting-edge designs and algorithms.

Competencies

Data Engineering

  • Fundamentals of data engineering and big data technologies.
  • Pipeline creation and writing Spark jobs.
  • Experience writing SQL queries and coding in languages such as Scala, Java, or Python.

Data Architecture & Design (REQUIRED)

  • Good at designing data models.
  • Understanding of SOA/microservices.
  • Familiarity with AWS, Azure, or Google Cloud services.

Skills

Python
AWS
Apache Flink
Apache Spark
Azure
Backend
Batch data
Big data technologies
Data architecture
Data modeling
Data models
Data science
ETL
Google Cloud
Hadoop
Java
Kafka
SQL