About the Role:
As a seasoned Data Engineer, you will join a high-impact team responsible for building and maintaining robust, cloud-native data infrastructure that supports machine learning and analytics workflows at scale. This role is ideal for engineers with hands-on Databricks experience who enjoy designing efficient, scalable data pipelines.
Key Responsibilities:
* Design, develop, and maintain scalable ETL/ELT pipelines on Databricks using Spark, Delta Lake, and Python.
* Develop orchestration logic using tools such as AWS Step Functions, Lambda, or Databricks Workflows.
* Contribute to medallion architecture layers (Bronze, Silver, Gold) for structured data processing.
* Collaborate on infrastructure provisioning and pipeline automation using Terraform and GitHub Actions.
* Troubleshoot Spark job performance and ensure reliable, efficient data pipelines.
* Support cross-functional teams (data scientists, ML engineers) by delivering curated, production-ready data.
Requirements:
* 3–6 years of experience in data engineering or data platform roles.
* Solid experience with Databricks and Delta Lake, including job and cluster setup.
* Strong skills in PySpark, SQL, and scripting for data transformation.
* Familiarity with AWS services: S3, Lambda, Step Functions, IAM, CloudWatch.
* Exposure to CI/CD practices and infrastructure automation using Terraform.
What We Offer:
* Remote work options.
* Financial coverage for coworking spaces.
* Flexible working hours.
* Professional development opportunities.
* English language lessons at all levels.
* Performance-based financial incentives.
* Paid courses and certifications.
* Participation in international conferences.
Become Part of Our Team:
This role is perfect for someone who is passionate about technology, enjoys tackling complex technical challenges, and is eager to grow their skills in a dynamic environment. If you're looking for a challenging and rewarding opportunity, we encourage you to apply.