Job Title: Business Intelligence Developer
We are seeking a highly skilled Business Intelligence Developer to design, implement, and optimize enterprise-grade data pipelines. In this role, you will use Azure Databricks, Azure Data Factory, SQL Server, and Python to deliver scalable, governed, and performant data solutions.
You will play a key role in modernizing our data platform on the Azure Cloud, ensuring reliability, efficiency, and compliance across the full data lifecycle.
Key Responsibilities:
* Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
* Data Flows & Transformations: Build ADF data flows and complex transformations in PySpark and T-SQL to extract, transform, and load data reliably across systems.
* Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregating large datasets.
* Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
* SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud-native pipelines.
* Collaboration & DevOps: Work with cross-functional teams using Git for version control and Azure DevOps pipelines for deployment.
* Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role-based security.
* API & External Integration: Integrate with REST APIs to retrieve analytics data from external data feeds, improving accessibility and interoperability.
* Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
* Advanced SQL Expertise: Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.
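To give candidates a concrete flavor of the data-processing work described above (joining, filtering, and pre-aggregating in Databricks notebooks), here is a minimal plain-Python sketch of that shape. The table names and rows are hypothetical; in the role itself this logic would be expressed with PySpark DataFrames reading from Delta Lake.

```python
from collections import defaultdict

# Hypothetical sample rows standing in for two source tables;
# in practice these would be PySpark DataFrames read from Delta Lake.
orders = [
    {"order_id": 1, "customer_id": 10, "amount": 120.0},
    {"order_id": 2, "customer_id": 11, "amount": 75.0},
    {"order_id": 3, "customer_id": 10, "amount": 30.0},
]
customers = [
    {"customer_id": 10, "region": "EMEA"},
    {"customer_id": 11, "region": "AMER"},
]

def pre_aggregate(orders, customers, min_amount=50.0):
    """Join orders to customers, filter out small orders, and sum by region."""
    region_by_customer = {c["customer_id"]: c["region"] for c in customers}
    totals = defaultdict(float)
    for order in orders:
        if order["amount"] >= min_amount:          # filter
            region = region_by_customer[order["customer_id"]]  # join
            totals[region] += order["amount"]      # pre-aggregate
    return dict(totals)

print(pre_aggregate(orders, customers))  # → {'EMEA': 120.0, 'AMER': 75.0}
```

In PySpark the same shape is a `join` followed by a `filter` and a `groupBy(...).agg(...)`, executed lazily across the cluster rather than row by row in memory.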
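The query-tuning responsibility above can also be illustrated concretely. The sketch below uses Python's stdlib `sqlite3` purely so it is self-contained; the role targets SQL Server, where the analogue is `CREATE INDEX` plus inspecting the actual execution plan. The table and index names are invented for the example.

```python
import sqlite3

# In-memory SQLite database with a small hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("EMEA", 100.0), ("AMER", 50.0), ("EMEA", 25.0)],
)

def plan(sql):
    """Return the plan detail strings for a statement."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT SUM(amount) FROM sales WHERE region = 'EMEA'"
before = plan(query)  # without an index: a full table scan

conn.execute("CREATE INDEX ix_sales_region ON sales (region)")
after = plan(query)   # with the index: a search using ix_sales_region

print(before, after)
```

The same before/after comparison on SQL Server (via the actual execution plan) is how index and query tuning is typically validated in practice.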