Enterprise Data Solutions Specialist
We are seeking a highly skilled professional to design, implement, and optimize data pipelines that drive business value, leveraging Azure Databricks, Azure Data Factory, SQL Server, and Python to build scalable, governed, and performant data solutions.
* Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
* Develop ADF data flows, complex PySpark transformations, and T-SQL database scripts for reliable data extraction, transformation, and loading.
* Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregating data.
* Optimize database performance through query tuning, index design, and code improvements to ensure efficient data retrieval and manipulation.
* Maintain and enhance integration of legacy systems; contribute to migration and modernization into cloud-native data platforms.
* Work with cross-functional teams using version control tools for collaboration and automation frameworks for deployment.
* Partner with governance teams to integrate cataloging, lineage tracking, and role-based security solutions.
* Integrate with external REST APIs to retrieve analytics data from diverse sources, improving accessibility and interoperability.
* Automate ETL processes and database maintenance tasks using SQL Server Agent jobs, ensuring data integrity and operational reliability.
* Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.
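The REST API integration work above often reduces to a paginated fetch loop. A minimal sketch in plain Python follows; the endpoint URL, the `items`/`next` response contract, and the `fetch` parameter are illustrative assumptions, not a specific vendor API:

```python
import json
from typing import Callable, Iterator
from urllib.request import urlopen

def fetch_page(url: str) -> dict:
    """Fetch one page of JSON from a (hypothetical) analytics endpoint."""
    with urlopen(url) as resp:
        return json.load(resp)

def iter_records(base_url: str,
                 fetch: Callable[[str], dict] = fetch_page) -> Iterator[dict]:
    """Yield records across all pages. Assumes each page returns
    {"items": [...], "next": <url or None>} -- an assumed contract."""
    url = base_url
    while url:
        page = fetch(url)
        yield from page.get("items", [])
        url = page.get("next")  # None ends the pagination loop
```

Injecting `fetch` as a parameter keeps the pagination logic testable without network access, which suits the automated-deployment practices described above.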
Key Responsibilities:
1. Design and implement enterprise-grade data pipelines.
2. Develop and maintain scalable, governed, and performant data solutions.
3. Collaborate with cross-functional teams on data platform development.
4. Partner with governance teams on security and compliance initiatives.
5. Implement automation frameworks for deployment and maintenance.
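Enterprise-grade pipelines of the kind listed above typically load incrementally rather than in full. A watermark pattern, sketched in plain Python with in-memory stand-ins for the source table and target store (column and variable names are illustrative):

```python
from typing import Dict, List

def incremental_load(source_rows: List[dict],
                     target: Dict[int, dict],
                     watermark: str) -> str:
    """Upsert only rows modified after the stored watermark, then
    return the advanced watermark. Keys/columns are illustrative."""
    new_watermark = watermark
    for row in source_rows:
        if row["modified_at"] > watermark:
            target[row["id"]] = row  # idempotent upsert keyed by id
            new_watermark = max(new_watermark, row["modified_at"])
    return new_watermark
```

In a Databricks/ADF setting the same idea would be expressed as a Delta Lake merge driven by an ADF pipeline parameter; the point of the sketch is that re-running with an unchanged watermark loads nothing, keeping the pipeline safe to retry.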