Data Engineer

Min Experience: 3 years

Location: Bangalore

Job Type: Full-Time

About the role


Job Title: Data Engineer 

Organization: Living Things Pvt. Ltd


Location: IIT Bombay, Powai, Mumbai

Job Type: Full-Time

Experience Level: Mid-Level (3-4 years of experience)


About Us:

Living Things is a pioneering IoT platform by iCapotech Pvt Ltd, dedicated to accelerating the net zero journey towards a sustainable future. Our platform brings mindfulness to energy usage. Our solution seamlessly integrates with existing air conditioners, empowering businesses and organisations to optimise and reduce energy usage, enhance operational efficiency, reduce carbon footprints, and drive sustainable practices. We also analyse electricity consumption across all locations from electricity bills. By harnessing the power of real-time data analytics and intelligent insights, our energy-saving algorithm helps save a minimum of 15% of an air conditioner's energy consumption.

About the Role:

We are seeking a highly skilled and motivated Data Engineer to join our growing data team. You will play a critical role in designing, building, and maintaining our data infrastructure, enabling data-driven decision-making across the organization.


Job Responsibilities:

  1. Manage and optimize relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases, including performance tuning and schema evolution management.
  2. Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analysis, with a focus on optimizing cost, performance, and scalability using cloud-native services.
  3. Design, build, and maintain robust, scalable, and fault-tolerant data pipelines using modern orchestration tools (Apache Airflow, Apache Flink, Dagster).
  4. Implement and manage real-time data streaming solutions (Apache Kafka, Kinesis, Pub/Sub).
  5. Apply knowledge of BI tools (Metabase, Power BI, Looker, QuickSight) to design data models that support efficient querying for analytical purposes.
  6. Collaborate closely with Data Scientists, Analysts, and Business stakeholders to understand data requirements and translate them into technical data solutions.
  7. Stay updated on the latest data engineering technologies and best practices, and advocate for their adoption where appropriate.
  8. Contribute to the development and improvement of data infrastructure and processes, including embracing DataOps principles for automation and collaboration.
  9. Work with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes) for deploying and managing data services.
  10. Implement data governance policies and practices, including data lineage and metadata management.


Skills and Qualifications:

  1. Essential:
    • Strong proficiency in Python, SQL, MongoDB.
    • Experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB), plus an understanding of database internals, indexing, and query optimization.
    • Knowledge of Data Modeling, Data Warehousing principles, and ETL/ELT methodologies. 
    • Proficiency with cloud platforms (AWS, Azure, GCP), including data storage (S3, ADLS Gen2, GCS), data warehousing services (e.g., Redshift, Snowflake, BigQuery), and managed services for data processing (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
    • Experience with data quality and validation techniques, and implementing automated data quality frameworks.
    • Strong analytical and problem-solving abilities. Ability to troubleshoot complex data pipeline issues.
    • Experience with BI tools (Metabase, Power BI, Looker, QuickSight) from a data provisioning perspective.
  2. Preferred:
    • Experience with Data Lake, Data Lakehouse, or Data Mesh architectures.
    • Hands-on experience with data processing frameworks like Apache Spark, Apache Kafka, and stream processing technologies (Spark Streaming, Flink).
    • Experience with workflow orchestration tools like Apache Airflow, Dagster.
    • Understanding of DataOps and MLOps concepts and practices.
    • Experience with data observability and monitoring tools.
    • Excellent communication and presentation skills.


Skills

Apache Kafka
Apache Spark