About the Role:
We are seeking an experienced Senior Data Engineer to join our high-impact team building robust, cloud-native data infrastructure that supports machine learning and analytics workflows at scale.
This role is ideal for someone who has hands-on experience with Databricks, enjoys building efficient and scalable data pipelines, and is eager to grow their platform skills in a dynamic, multi-cloud environment.
You will join a cross-functional team of platform engineers, DevOps engineers, and data scientists, collaborating on ingestion, transformation, orchestration, and data reliability for production-grade pipelines.
Key Responsibilities:
* Design and maintain scalable ETL/ELT pipelines on Databricks using Spark, Delta Lake, and Python.
* Develop orchestration logic using tools such as AWS Step Functions, Lambda, or Databricks Workflows (see the orchestration sketch after this list).
* Contribute to medallion architecture layers (Bronze, Silver, Gold) that progressively refine raw data into analytics-ready tables (see the Bronze-to-Silver sketch after this list).
* Collaborate on infrastructure provisioning and pipeline automation using Terraform and GitHub Actions.
* Troubleshoot Spark job performance and help ensure reliable, efficient data pipelines (a skew-check example follows this list).
* Support cross-functional teams (data scientists, ML engineers) by delivering curated, production-ready data.
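To give a concrete flavor of the Bronze-to-Silver work described above, here is a minimal PySpark and Delta Lake sketch. The table names, columns, and cleansing rules are illustrative assumptions, not our actual schemas:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Read raw events from the Bronze layer (hypothetical table name).
bronze = spark.read.table("bronze.events")

# Silver layer: deduplicate, enforce types, and drop malformed rows.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

# Persist as a Delta table, partitioned by event date for efficient reads.
(
    silver
    .withColumn("event_date", F.to_date("event_ts"))
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.events")
)
```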
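In the same spirit, a hedged sketch of the orchestration glue: an AWS Lambda handler that launches a Databricks Workflows job through the Jobs API 2.1 `run-now` endpoint. The environment variables and token handling are placeholder assumptions; a production setup would typically read the token from AWS Secrets Manager. Sticking to the standard library keeps the Lambda deployment package dependency-free:

```python
import json
import os
import urllib.request

# Illustrative configuration; in practice, supplied via Lambda environment
# variables or AWS Secrets Manager rather than hard-coded values.
DATABRICKS_HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
DATABRICKS_TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = int(os.environ["DATABRICKS_JOB_ID"])


def lambda_handler(event, context):
    """Trigger a Databricks job run when invoked, e.g. by Step Functions or an S3 event."""
    payload = json.dumps({"job_id": JOB_ID}).encode("utf-8")
    request = urllib.request.Request(
        url=f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
        data=payload,
        headers={
            "Authorization": f"Bearer {DATABRICKS_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        body = json.loads(response.read())
    # run-now responds with the run_id of the launched job run.
    return {"statusCode": 200, "run_id": body["run_id"]}
```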
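Finally, a small example of the performance troubleshooting mentioned above: checking Spark partition sizes for skew, one of the most common causes of slow jobs. The input table and repartition key are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("skew-check").getOrCreate()

# Hypothetical input; any large DataFrame works the same way.
df = spark.read.table("silver.events")

# Rows per Spark partition: a few oversized partitions are the classic
# signature of skew dragging down stage runtimes.
(
    df.withColumn("partition_id", F.spark_partition_id())
      .groupBy("partition_id")
      .count()
      .orderBy(F.desc("count"))
      .show(10)
)

# One common fix: repartition on a higher-cardinality key before the
# expensive stage, so work spreads evenly across executors.
balanced = df.repartition(200, "event_id")
```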
Requirements:
* 3–6 years of experience in data engineering or data platform roles.
* Solid experience with Databricks and Delta Lake, including job and cluster setup.
* Strong proficiency in PySpark, SQL, and scripting for data transformation.
* Familiarity with AWS services: S3, Lambda, Step Functions, IAM, CloudWatch.
* Exposure to CI/CD practices and infrastructure automation using Terraform.
What We Offer:
* Remote work options.
* Financial coverage for coworking spaces.
* Flexible working hours.
* B2B benefits package.
* Professional development opportunities.
* English language lessons.
* Performance incentives.
* Paid courses and certifications.
* International conference participation.