Job details:
Job ID
504350
Posted on
04-May-2026
Organization
Smart Infrastructure
Field of work
Real Estate
Company
Brightly Software India Private Limited
Experience level
Experienced Professional
Job type
Full-time
Work mode
Remote only
Employment type
Permanent
Location(s)
- Noida - Uttar Pradesh - India
Principal Data Engineer - Driving Data Excellence & BI Transformation
We are seeking a highly skilled and experienced Principal Data Engineer to be a cornerstone of our data team. This pivotal role will focus on architecting, building, and optimizing our data infrastructure, driving BI enablement, and leading critical data transformation and migration projects. You will be instrumental in designing and implementing scalable, efficient, and robust ETL/ELT solutions that power our analytics and business intelligence initiatives.
Key Responsibilities
- Lead the design, development, and optimization of large-scale data pipelines and data warehouse/lake solutions.
- Drive data integration (ETL/ELT) strategies and implementations across diverse platforms and technologies.
- Collaborate with BI and analytics teams to ensure data availability, quality, and accessibility for reporting and insights.
- Mentor junior engineers and contribute to establishing best practices in data engineering and software development.
- Evaluate and implement new data technologies and approaches to enhance our data ecosystem.
What You'll Bring
- Bachelor's degree in engineering or science, or equivalent practical experience.
- 10+ years of progressive experience in software and data engineering, with at least 8 years dedicated to data-centric roles.
- Mastery of programming languages such as Java, Scala, and/or Python.
- Strong expertise in SQL and extensive cloud data experience, particularly with Snowflake and AWS data services.
- Extensive experience in designing and implementing data integration (ETL/ELT) solutions across diverse platforms, utilizing languages like Java, Scala, Python, PySpark, and SparkSQL.
- Hands-on proficiency with ETL/ELT tools such as dbt, along with robust data pipeline orchestration skills.
- Demonstrated ability to build, optimize, and maintain production-grade data pipelines supporting batch, replication/CDC, and event streaming patterns for data lakes and warehouses.
- In-depth expertise with AWS-based data services (e.g., Kinesis, Glue, RDS, Athena) and Snowflake Cloud Data Warehouse.
- Skilled in data modeling, data migration strategies, and performance tuning for large-scale data systems.
- A proven history of contributing to or leading major initiatives involving the consolidation and rationalization of large-scale data environments, including complex data pipelines and internal/external partner integrations.
- A proactive and innovative approach to exploring and implementing cutting-edge technologies and solutions.
- Comprehensive knowledge of relational databases and advanced SQL skills to support analytical needs.
- Familiarity with leading BI tools such as Power BI and Apache Superset.
- Familiarity with the Software Development Life Cycle (SDLC), source control (e.g., Git), and best practices for ensuring data quality.
- Adherence to software engineering and Agile development best practices.
- Familiarity with AI-powered coding tools (e.g., GitHub Copilot, Claude Code) to enhance productivity and code quality is a plus.
- Outstanding written and verbal communication skills, with the ability to articulate complex technical concepts to both technical and non-technical stakeholders.
- Ability to align your work schedule with US Eastern Time (ET) to ensure at least a 6-hour overlap with our US teams.
Click Apply to learn more.