Job Opportunity
Job Title: Data Engineer
We are seeking a highly skilled Data Engineer to design, implement and optimize enterprise-grade data pipelines.
* Create scalable, governed, and performant data solutions using Azure Databricks, Azure Data Factory, SQL Server and Python.
* Migrate legacy workloads into cloud-native pipelines while maintaining the existing architecture.
* Collaborate with cross-functional teams to ensure seamless integration and communication.
* Develop ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF); a brief sketch of this kind of pipeline follows this list.
* Build complex transformations with ADF, PySpark, and T-SQL for data extraction, transformation, and loading.
* Perform ongoing database and query tuning through index optimization and code improvements.
* Modernize SSIS package design, deployment, migration support, and automation.
* Integrate Microsoft Purview for cataloging, lineage tracking and role-based security.
* Expertise in automation and technical writing is also required.
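The following is a minimal sketch of the kind of PySpark/Delta Lake ETL work described above. It assumes a Databricks workspace; the table names (raw.sales_orders, curated.sales_orders), columns, and cleansing rules are hypothetical illustrations, not requirements taken from this posting.

# Minimal PySpark/Delta Lake ETL sketch (hypothetical tables and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Extract: read the raw source table (hypothetical name)
raw = spark.table("raw.sales_orders")

# Transform: basic deduplication, typing, and filtering
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write to a Delta table, partitioned by date for query performance
(
    curated.write.format("delta")
           .mode("overwrite")
           .partitionBy("order_date")
           .saveAsTable("curated.sales_orders")
)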
Requirements:
Technical Skills:
* Azure Databricks (5+ years)
* Python (5+ years)
* T-SQL (strong experience)
* Azure Data Factory (5+ years)
* Unity Catalog
* Git (Azure DevOps Repos) and CI/CD practices
Nice To Have:
* Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables
* Experience with Microsoft Power BI and Azure-native integrations
* Familiarity with Profisee Master Data Management (MDM)
* Agile/Scrum experience
Preferred Qualifications:
* Microsoft Certified: Azure Data Engineer Associate (DP-203)
* Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification
* Databricks Certified Data Engineer Associate or Professional
* Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise
This is an exciting opportunity to take your data engineering skills to the next level.