Job Title: Data Architect
We are seeking a highly skilled professional to design, implement, and optimize large-scale data architectures. In this role, you will leverage Azure Databricks, Azure Data Factory, SQL Server, and Python to deliver scalable, governed, and performant data solutions.
Key Responsibilities:
• Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
• Data Flows & Transformations: Develop data flows and complex transformations with ADF, PySpark, and T-SQL to support reliable data extraction, transformation, and loading.
• Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregation (see the illustrative PySpark sketch after this list).
• Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
• SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud-native pipelines.
• Collaboration & DevOps: Work with cross-functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
• Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role-based security.
• API & External Integration: Implement REST API integrations to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability (see the Python sketch after this list).
• Automation: Automate ETL processes and database maintenance tasks using SQL Server Agent jobs, ensuring data integrity and operational reliability.
• Advanced SQL Expertise: Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.
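For context, here is a minimal PySpark sketch of the kind of notebook logic the Data Processing bullet describes: joining, filtering, and pre-aggregating Delta tables. The table names, columns, and output table are hypothetical placeholders; in a Databricks notebook the Spark session is already provided, and the explicit SparkSession setup below is only to keep the sketch self-contained.

```python
# Minimal sketch: join, filter, and pre-aggregate two Delta tables.
# Table names ("sales.orders", "sales.customers") and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pre_aggregation_example").getOrCreate()

orders = spark.read.table("sales.orders")          # hypothetical Delta table
customers = spark.read.table("sales.customers")    # hypothetical Delta table

daily_revenue = (
    orders
    .filter(F.col("status") == "completed")                     # filtering
    .join(customers, on="customer_id", how="inner")             # joining
    .groupBy("region", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"),                      # pre-aggregation
         F.countDistinct("customer_id").alias("active_customers"))
)

# Persist the pre-aggregated result as a Delta table for downstream reporting.
daily_revenue.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```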
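Likewise, a minimal Python sketch of the API & External Integration bullet, assuming a hypothetical external endpoint, bearer token, and paged JSON response shape; real feeds will differ, and credentials would come from a secret store such as Key Vault rather than being hard-coded.

```python
# Minimal sketch of pulling analytics data from an external REST feed.
# The endpoint URL, token, and response fields are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com/v1/metrics"   # hypothetical endpoint
API_TOKEN = "<token-from-secret-store>"           # hypothetical secret reference

def fetch_metrics(page_size: int = 500) -> list[dict]:
    """Retrieve all pages of metrics from the external feed."""
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    results, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=headers,
            params={"page": page, "page_size": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        results.extend(payload["items"])          # hypothetical response shape
        if not payload.get("next_page"):
            break
        page += 1
    return results
```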
Required Skills:
• 5+ years of hands-on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
• 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
• Strong SQL Server / T-SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
• Demonstrated experience in SSIS package design, deployment, and performance tuning.
• Hands-on knowledge of Unity Catalog for governance.
• Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
Nice to Have:
• Exposure to Change Data Capture (CDC), Change Data Feed (CDF), and Temporal Tables.
• Experience with Microsoft Purview, Power BI, and Azure-native integrations.
• Familiarity with Profisee Master Data Management (MDM).
• Experience working in Agile/Scrum environments.
Preferred Qualifications:
• Microsoft Certified: Azure Data Engineer Associate (DP-203)
• Microsoft Certified: Azure Solutions Architect Expert or equivalent advanced Azure certification
• Databricks Certified Data Engineer Associate or Professional
• Additional Microsoft SQL Server or Azure certifications demonstrating advanced database and cloud expertise