We're looking for a Senior Data Engineer to design, build, and scale modern data platforms on AWS. You'll work with Python, Spark, DBT, and AWS-native services in an Agile environment to deliver scalable, secure, and high-performance data solutions.
What you'll do
Develop and optimize ETL/ELT pipelines with Python, DBT, and AWS services (DataOps.live).
Build and manage S3-based data lakes using modern data formats (Parquet, ORC, Iceberg).
Deliver end-to-end data solutions with Glue, EMR, Lambda, Redshift, and Athena.
Implement strong metadata, governance, and security using Glue Data Catalog, Lake Formation, IAM, and KMS.
Orchestrate workflows with Airflow, AWS Step Functions, or other AWS-native tools.
Ensure reliability and automation with CloudWatch, CloudTrail, CodePipeline, and Terraform.
Collaborate with analysts and data scientists to deliver business insights in an Agile setting.
Required Skills & Experience
7–10 years of experience in data engineering, with 4+ years on AWS platforms
Strong proficiency in Python (incl. AWS SDKs), DBT, SQL, and Spark
Proven expertise with AWS data stack (S3, Glue, EMR, Redshift, Athena, Lambda)
Hands-on experience with workflow orchestration (Airflow/Step Functions)
Familiarity with data lake formats (Parquet, ORC, Iceberg) and DevOps practices (Terraform, CI/CD)
Solid understanding of data governance & security best practices
Bonus
Exposure to Data Mesh principles and platforms like data.world
Familiarity with Hadoop/HDFS in hybrid or legacy environments
How To Apply
If this sounds like you, send your resume and a brief note about your experience to ******.
Or refer someone in your network who fits this role.
#hiring #pythondeveloper #automation