Data Engineer / Big Data Engineer

Location

Chennai, Tamil Nadu, India

Job Type

Full-time

About the job

This job is sourced from a job board.

About the role

Ilink Digital

Website: ilink-digital.com
Job details:
Key Responsibilities

Application Development

  • Develop and maintain data-driven applications using Python and/or Scala.
  • Design modular, scalable, and maintainable backend components.
  • Participate in system analysis, technical design, and modelling of information systems.

ETL & Data Integration

  • Design, develop, and optimize ETL processes (extraction, transformation, load).
  • Implement data pipelines using DataStage, Informatica, or similar data integration platforms.
  • Ensure data quality, validate transformation logic, and optimize performance.
  • Manage batch and streaming data ingestion workflows.

Database Development

  • Develop solutions using relational and non-relational databases; experience with at least one of the following is required:
    • Oracle
    • Informix
    • Teradata
    • MongoDB
    • Hive
  • Design optimized schemas, indexes, and queries.
  • Implement data modelling (conceptual, logical, and physical models).

Distributed Architecture & Streaming

  • Work within enterprise architectures, including one or more of the following:
    • JEE-based systems
    • Kubernetes-based containerized deployments
    • Kafka-based streaming data platforms
  • Support real-time and near-real-time data processing frameworks.

Data Visualization & Reporting

  • Develop dashboards using Power BI or equivalent BI tools.
  • Translate business requirements into analytical views and KPIs.
  • Ensure performance-efficient data models for reporting.

Landing Zone & Advanced Data Platform Tools

Work with or support environments leveraging one or more of the following:

  • Presto
  • Feast (Feature Store)
  • Spark
  • MLflow
  • Kubeflow
  • HPE Ezmeral
  • Superset
  • Ray
  • Jupyter / Notebooks
  • Java-based services

Required Experience

  • Hands-on experience in:
    • Python and/or Scala development
    • ETL process design and implementation
    • Database development (relational & NoSQL including MongoDB)
    • Information systems modelling (analysis & design)
    • Dashboard development

Required Technical Knowledge

Advanced knowledge in:

  • Enterprise architectures: JEE, Kubernetes, and/or Kafka
  • ETL tools: DataStage, Informatica, or equivalent
  • Databases: Oracle, Informix, Teradata, MongoDB, Hive

Working knowledge in:

  • Scala and/or Python
  • Power BI
  • Distributed data processing frameworks

Preferred Qualifications

  • Experience with Big Data ecosystems (Spark, Hive, Kafka).
  • Exposure to ML platforms (MLflow, Kubeflow, Feast).
  • Understanding of containerized deployments (Docker + Kubernetes).
  • Experience working in Agile environments.
  • Strong problem-solving and system design skills.


Skills

Python
Power BI
Agile
backend
data ingestion
data models
data visualization
database
Docker
ETL
Hive
Kubeflow
NoSQL
Oracle
platform tools
Ray
Spark
BI tools
Teradata