Job Overview
We are seeking a seasoned professional to design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Services. The ideal candidate will have expertise in building high-performance data-processing solutions with Apache Spark on Azure Databricks.
Key Responsibilities:
* Data Pipeline Development: Build and optimize data workflows and pipelines for performance and cost efficiency in a cloud environment.
* Data Quality Management: Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver clean, reliable datasets.
Required Skills:
* Columnar Storage Formats: Experience with Parquet and open table formats such as Delta Lake, Apache Hudi, and Apache Iceberg.
* Data Quality Tools: Familiarity with Great Expectations and Soda.
* Event-Driven Architecture: Knowledge of AWS Step Functions, EventBridge, or Kinesis.
Benefits:
* Professional Growth: Opportunities for career advancement and skill development.
* Diversity and Inclusion: Collaborative and diverse work environment that encourages teamwork.
* Global Opportunities: Chance to work on international projects.