About the Job
We are seeking a skilled Data Engineer to join our team and design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Services.
Preferred Skills:
* Experience with columnar file and table formats (Parquet, Delta, Hudi, Iceberg)
* Experience with data quality tools (Great Expectations, Soda)
* Knowledge of Step Functions, EventBridge, or Kinesis
* Familiarity with API security best practices (Cognito, WAF, IAM policies)
Key Responsibilities:
* Design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Services
* Build high-performance data processing solutions with Apache Spark (PySpark) on Azure Databricks
* Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver clean, reliable datasets
* Optimize data workflows and pipelines for performance and cost efficiency in a cloud environment
* Implement best practices around data security, governance, and compliance
* Develop CI/CD pipelines for data engineering workflows
* Monitor, troubleshoot, and enhance existing data solutions for reliability and performance
* Document design patterns, best practices, and operational procedures
What We Offer:
* A dynamic work environment with opportunities for growth and development
* Collaboration with a talented team of professionals
* A competitive compensation package
* Ongoing training and support
Requirements:
* Bachelor's degree in Computer Science, Information Technology, or a related field
* Minimum 3 years of experience in data engineering
* Excellent problem-solving skills
* Strong communication and collaboration skills