We are seeking a Senior Data Engineer to design and maintain scalable data pipelines on AWS, ensuring performance, quality, and security.
* The ideal candidate will have hands-on experience with ETL pipeline development using AWS Glue.
* Strong Python and SQL skills are required.
* Familiarity with core AWS services such as S3 and SageMaker, and with building and consuming APIs, is essential for this role.
Key Responsibilities:
1. Data Ingestion: Build efficient data ingestion pipelines using AWS Glue to collect data from various sources.
2. Data Processing: Develop and optimize data processing workflows using Apache Spark and AWS Glue.
3. Data Storage: Design and implement secure data storage solutions using AWS S3 and other cloud-based storage options.
4. Data Analytics: Collaborate with data scientists to develop predictive models and perform data analysis using AWS SageMaker and other tools.
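To make the ingestion → processing → storage flow above concrete, here is a minimal sketch of the extract-transform-load pattern in plain Python. This is an illustration only: the function names, field names, and in-memory sink are invented stand-ins, and a production pipeline would use AWS Glue (e.g. its Spark-based job APIs) and S3 rather than local data structures.

```python
# Minimal ETL sketch of the pipeline pattern described above.
# All names here are illustrative stand-ins, not AWS Glue APIs.
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Ingest: parse source records (stand-in for reading from S3 via Glue)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Process: drop malformed rows and normalize types (stand-in for a Spark job)."""
    cleaned = []
    for row in rows:
        if row.get("amount") and row["amount"].strip():
            cleaned.append({"id": row["id"], "amount": float(row["amount"])})
    return cleaned

def load(rows: list[dict], sink: dict) -> None:
    """Store: write keyed records to a sink (stand-in for an S3/warehouse write)."""
    for row in rows:
        sink[row["id"]] = row["amount"]

raw = "id,amount\n1,10.5\n2,\n3,7.0\n"
sink: dict = {}
load(transform(extract(raw)), sink)
print(sink)  # row 2 is dropped by the transform step: {'1': 10.5, '3': 7.0}
```

The same three-stage shape (extract, transform, load) carries over directly to a Glue job, where each stage is backed by a managed Spark runtime instead of local functions.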
Tech Stack:
* AWS: S3, Glue, SageMaker, Lambda, and related service APIs
* Python, SQL
Requirements:
1. Experience: 5+ years of experience in data engineering with AWS.
2. Skills: Strong understanding of data modeling, ETL pipeline development, and data governance.
3. Education: Bachelor's degree in Computer Science or a related field.