Job Opportunity
As a seasoned data professional, you will play a pivotal role in designing, developing, and maintaining scalable ETL/ELT pipelines using Azure Data Services.
You will collaborate closely with cross-functional teams to deliver clean, reliable datasets that meet business requirements.
The successful candidate will leverage their expertise in Apache Spark on Azure Databricks to build high-performance data ingestion and processing solutions.
Key Responsibilities:
* Design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Services
* Collaborate with data scientists, analysts, and business stakeholders to understand data requirements
* Deliver clean, reliable datasets that meet business needs
* Optimize data workflows/pipelines for performance and cost efficiency
Requirements:
* Experience with Azure Databricks, Azure Data Factory, and Azure Data Services
* Knowledge of columnar and table formats (Parquet, Delta, Hudi, Iceberg)
* Proficiency in data quality tools (Great Expectations, Soda)
* Understanding of AWS orchestration and streaming services (Step Functions, EventBridge, or Kinesis)
* Knowledge of API security best practices (Amazon Cognito, AWS WAF, IAM policies)
What We Offer:
* A dynamic work environment that fosters growth and development
* A collaborative team culture with open communication
* The chance to work on complex, high-impact projects that drive business success