Unlock Data Insights

We're looking for a skilled data professional with expertise in designing and implementing scalable data pipelines. This role focuses on extracting data from diverse sources, transforming it into actionable insights, and delivering high-quality datasets.

Responsibilities:
- Develop efficient ETL/ELT pipelines using PySpark, Spark SQL, and Delta Lake.
- Ingest, clean, and transform data from various sources (APIs, SQL databases, cloud storage).
- Design reusable pipeline frameworks, data validation logic, and performance-tuned transformations.
- Curate datasets and deliver insights through Power BI dashboards.
- Implement best practices for lakehouse development, orchestration, and version control.
- Troubleshoot pipeline performance issues and ensure data accuracy, reliability, and quality.
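To illustrate the "data validation logic" responsibility above, here is a minimal sketch of routing records into clean and quarantine sets. The schema (`order_id`, `customer_id`, `amount`) is hypothetical, and plain Python dicts stand in for what a real pipeline would express as PySpark DataFrame operations; this only shows the shape of the pattern, not a production implementation.

```python
# Hypothetical schema for illustration only -- not from the posting.
REQUIRED_FIELDS = {"order_id": int, "customer_id": int, "amount": float}

def validate_row(row: dict) -> list[str]:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in row or row[field] is None:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], expected_type):
            errors.append(f"bad type for {field}: expected {expected_type.__name__}")
    if not errors and row["amount"] < 0:
        errors.append("amount must be non-negative")
    return errors

def split_valid_invalid(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Route records into clean and quarantine sets, as a pipeline stage might."""
    valid, invalid = [], []
    for row in rows:
        (invalid if validate_row(row) else valid).append(row)
    return valid, invalid
```

In a Spark/Delta Lake pipeline the same checks would typically become DataFrame filter expressions, with invalid rows written to a quarantine table for inspection rather than silently dropped.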