Scientist Technologies
Website: geturbanai.com
Job Description:
Location: Bangalore
Experience: 5+ years
Overview:
We are looking for a Data Engineer with experience in designing, building, and optimizing scalable data pipelines and data models. The role requires strong expertise in SQL, Azure Data Factory (ADF), and Python for data processing, transformation, and integration across multiple data sources.
Key Responsibilities:
- Design and develop ETL/ELT pipelines using Azure Data Factory (ADF) to ingest, transform, and load data from various sources (APIs, databases, files).
- Write optimized and complex SQL queries for data extraction, transformation, and performance tuning.
- Develop data processing scripts using Python (Pandas, PySpark) for data cleansing, transformation, and automation.
- Build and maintain data models (star schema, snowflake schema) for analytics and reporting.
- Ensure data quality, consistency, and integrity across systems.
- Optimize database performance through indexing, partitioning, and query tuning.
- Integrate data from multiple systems, such as SQL Server, APIs, and cloud storage.
- Collaborate with stakeholders to understand business requirements and translate them into data solutions.
- Monitor and troubleshoot data pipelines and workflows.
Technical Skills:
- SQL: Advanced query writing, joins, CTEs, window functions, performance tuning
- Azure Data Factory (ADF): Pipelines, triggers, data flows, linked services
- Python: Pandas, NumPy, PySpark (optional)
- Data Modeling: Star schema, Snowflake schema, normalization, dimensional modeling
- Databases: SQL Server / Snowflake / Azure SQL
- Cloud: Azure Data Services
- Version Control: Git / Bitbucket
Preferred Qualifications:
- Experience with data warehousing concepts
- Knowledge of ETL best practices
- Familiarity with big data tools (Spark, Databricks)
- Understanding of API integrations
Click on Apply to learn more.