About the Role
We're looking for a hands-on Data Engineer to build and optimise our analytics platform on Azure Databricks for an international project. You'll own the data ingestion, transformation, quality, and delivery pipelines that power real-time dashboards and ML workloads for our clients.

What You'll Do
- Design and implement robust ETL/ELT pipelines in Databricks using PySpark
- Model data in Delta Lake and optimise tables for performance and cost
- Automate pipeline tests and data-quality checks with TDD patterns
- Collaborate with ML engineers to productionise features and track experiments (MLflow a plus)
- Tune clusters, jobs, and query plans for reliable throughput and low latency
- Ship everything through Azure DevOps CI/CD

Must-Have Skills
- Azure (Data Lake Gen2, Key Vault)
- Databricks & Delta Lake (job orchestration, cluster management, Workspace APIs, Unity Catalog)
- Python & PySpark
- SQL (analytical queries, window functions, performance tuning)
- Test-Driven Development (pytest, db-test, Great Expectations or similar)

Nice-to-Have
- LangGraph or other LLM-workflow tooling
- MLflow for feature & model lifecycle management

Why Join Us?
- Cutting-edge data-engineering architecture
- Budget for Databricks and Azure certifications
- Flexible remote-first culture
- Competitive remuneration
- International company
- Diverse, multinational team
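To give a flavour of the TDD-style data-quality checks this role involves, here is a minimal, hypothetical sketch in plain Python (in practice you would use pytest or Great Expectations against Delta tables; the column names and sample data below are made up for illustration):

```python
# Hypothetical data-quality check: verify that required columns are populated
# in a batch of rows. A stand-in for a pytest / Great Expectations suite.

def check_completeness(rows, required_columns):
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    for i, row in enumerate(rows):
        for col in required_columns:
            if row.get(col) in (None, ""):
                failures.append(f"row {i}: missing value for '{col}'")
    return failures

# Usage with made-up sample data:
batch = [
    {"order_id": "A1", "amount": 10.0},
    {"order_id": "",   "amount": 5.5},
]
print(check_completeness(batch, ["order_id", "amount"]))
# → ["row 1: missing value for 'order_id'"]
```

In a real pipeline a check like this would run as a gated step in Azure DevOps CI/CD, failing the build when the returned list is non-empty.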