Senior Data Engineer

Location

Kolkata metropolitan area, West Bengal, India

Job Type

Full-time

About the job

Parentheses Labs

Website: parentheseslabs.com

Senior Data Engineer

Parentheses Labs • Kolkata (In-office / Remote)

Role

Senior Data Engineer

Experience

5+ years

Location

Kolkata — In-office / Remote (hybrid flexibility)

Employment Type

Full-time

Compensation

₹15,00,000 – ₹18,00,000 per annum (CTC), based on experience

Reports To

Head of Business Vertical / TBD

About Parentheses Labs

Parentheses Labs is an AI and software products company headquartered in Kolkata, serving clients across the US, the GCC, and India. We build data-driven products and platforms across multiple verticals — from banking performance infrastructure to adaptive learning systems and B2B SaaS — and we believe great engineering starts with great data.

About the Role

We are looking for a Senior Data Engineer to own the data backbone that powers our analytics, AI, and reporting workloads across multiple business verticals. You will work directly with engineering, product, and business stakeholders to design pipelines, model warehouses, and deliver insights that decision-makers can actually trust and act on.

What You'll Do

•      Design, build, and maintain robust ELT/ETL pipelines feeding our Snowflake warehouse from diverse sources (APIs, databases, files, event streams).

•      Write performant, well-tested SQL — from analytical queries to complex transformations — and own data modeling decisions (star/snowflake schemas, slowly changing dimensions, marts).

•      Build and maintain Power BI datasets, semantic models, and dashboards that business teams actually use; optimize DAX and refresh performance.

•      Partner with stakeholders across verticals (banking, HVAC services, education, MSME platforms) to translate business questions into data products.

•      Apply AI tooling (LLMs, copilots, embeddings, RAG) to accelerate data engineering work — pipeline scaffolding, data quality checks, documentation, and ad-hoc analytics.

•      Own data quality, lineage, and observability — write tests, set up monitoring, and make sure broken pipelines get noticed before stakeholders do.

•      Build advanced Excel models when that's the right tool for the job (financial analyses, exec-ready workbooks, what-if models).

•      Mentor junior engineers and analysts, review code, and raise the bar for engineering craft on the team.

•      Document everything you build — architectures, schemas, runbooks — so the next engineer (or AI agent) can pick it up in a day.

What We're Looking For

Required

•      5+ years of professional data engineering experience, with a strong track record of shipping production data systems.

•      Excellent SQL skills — you can write, optimize, and debug complex analytical queries; you understand query plans and indexing.

•      Hands-on production experience with Snowflake — warehouse design, role/grant model, cost optimization, Snowpipe, tasks, and streams.

•      Strong Power BI skills — semantic modeling, DAX, row-level security, gateways, and performance tuning.

•      Advanced Excel — Power Query, pivot models, complex formulas, and the judgment to know when Excel is the right tool versus when it isn't.

•      Demonstrated AI fluency — comfortable using LLM-based tools (Claude, ChatGPT, Copilot, Cursor, etc.) as a daily part of your engineering workflow, with a clear understanding of where they help and where they don't.

•      Excellent written and verbal communication in English — you can explain a data model to a CFO and a CTO in the same meeting.

•      Strong educational background — degree in Computer Science, Engineering, Statistics, Mathematics, or a related quantitative field from a reputed institution.

•      Genuine intellectual curiosity and a quick-learner mindset — you enjoy picking up new domains, tools, and stacks, and you don't wait to be told what to learn next.

•      Comfort working across multiple business verticals and context-switching between projects without losing quality.

Nice to Have

•      Experience with dbt for transformation modeling and lineage.

•      Python proficiency for data work (pandas, SQLAlchemy, orchestration with Airflow / Prefect / Dagster).

•      Exposure to streaming or event-driven data (Kafka, Kinesis, Snowpipe Streaming).

•      Experience integrating data with marketing platforms (GA4, ad platforms, HubSpot, Salesforce) or financial systems.

•      Familiarity with cloud data ecosystems on AWS, Azure, or GCP.

•      Experience building or fine-tuning LLM-powered analytics features (text-to-SQL, RAG over warehouse data, AI assistants).

•      Prior experience working with US/international clients.

What We Offer

•      Compensation of ₹15–18 LPA, calibrated to experience and skill depth.

•      Hybrid working model — work from our Kolkata office or remotely, with flexibility based on project needs.

•      Direct exposure to multiple industries and clients across India, the US, and the GCC.

•      A modern, AI-first engineering culture — we expect you to use the best tools available, and we invest in them.

•      High ownership and short feedback loops — your work is visible to leadership and to clients.

•      Learning budget for courses, certifications, and conferences relevant to your growth.

•      A small, technically strong team where senior engineers shape the architecture and the hiring bar.


How to Apply

Application link - https://equip.co/job-posts/9qsYsb/


Shortlisted candidates will go through a technical SQL/Snowflake exercise, a system design discussion, and a stakeholder-communication round.

Parentheses Labs is an equal opportunity employer. We hire on the basis of skill, curiosity, and craft, and we welcome applications from candidates of all backgrounds.


Contact - careers@parentheseslabs.com

Skills

Python
Power BI
Airflow
AWS
Azure
Banking
Data engineering
Data modeling
ETL
GCP
HubSpot
HVAC
Kafka
pandas
SaaS
Salesforce
Snowflake
SQL
SQLAlchemy
Statistics