Data Engineering Position Overview
We are seeking an experienced Data Engineer to design, develop, and optimize enterprise-grade data pipelines using Azure Databricks, Azure Data Factory, and Python.
Key Responsibilities:
* Data Pipeline Development: Develop ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF); an illustrative sketch follows this list.
* Data Flows & Transformations: Build ADF pipelines and data flows, and implement complex transformations in PySpark and T-SQL for reliable data extraction, transformation, and loading.
* Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
Requirements:
* 5+ years of hands-on experience with Azure Databricks, Python, PySpark, and Delta Lake.
* Strong SQL Server / T-SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
Expected Skills:
* Azure Databricks: 5+ years
* Python: 5+ years
* PySpark: 5+ years
* Delta Lake: 5+ years
* SQL Server / T-SQL: 5+ years
Working Experience:
Our ideal candidate will have a minimum of 5 years of working experience in data engineering or a related field.
Licenses/Certifications:
A relevant certification is required, such as AWS Certified Developer - Associate or Microsoft Certified: Azure Developer Associate.