We are seeking a seasoned data engineer to design and maintain scalable data pipelines on AWS, ensuring high performance, data quality, and security. This person will collaborate with data scientists and analysts to integrate data from multiple sources and support AI/ML initiatives.
Key Responsibilities:
* Create and optimize Extract, Transform, Load (ETL) pipelines using AWS Glue.
* Work with AWS S3, Glue, and SageMaker to support data and AI/ML workflows.
* Develop solutions in Python and SQL.
* Integrate data from Salesforce and APIs.
* Ensure data governance, documentation, and adherence to best practices.
Tech Stack:
* AWS (S3, Glue, SageMaker)
* Python, SQL
* Salesforce, APIs
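As a purely illustrative sketch of the kind of Python + SQL ETL work this role involves (all names and data below are hypothetical, and SQLite stands in for a real warehouse target such as the AWS stack above):

```python
import sqlite3

# Extract: records as they might arrive from an upstream API
# (field names here are illustrative, not from an actual source).
raw_records = [
    {"account": "Acme Corp", "amount": "1200.50", "region": "EMEA"},
    {"account": "Globex", "amount": "980.00", "region": "AMER"},
    {"account": "Acme Corp", "amount": "310.25", "region": "EMEA"},
]

# Transform: normalize string fields into typed tuples before loading.
rows = [(r["account"], float(r["amount"]), r["region"]) for r in raw_records]

# Load: write the cleaned rows into a SQL table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (account TEXT, amount REAL, region TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# SQL step: aggregate revenue per account.
totals = dict(conn.execute(
    "SELECT account, SUM(amount) FROM sales GROUP BY account"
).fetchall())
print(totals)  # {'Acme Corp': 1510.75, 'Globex': 980.0}
```

In production, the same extract/transform/load pattern would run inside an AWS Glue job against S3-backed data rather than an in-memory database.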
Requirements:
* Proven experience in data engineering with AWS.
* Strong Python and SQL programming skills.
* Experience with ETL, data modeling, and pipeline optimization.
* Advanced English language proficiency.