Job Overview
We are seeking a Senior Data Engineer to design and maintain scalable data pipelines on AWS, ensuring performance, quality, and security.
This role requires a strong data engineering background and expertise in AWS services such as S3, Glue, and SageMaker. The ideal candidate has experience building and optimizing ETL pipelines, supporting data and AI workflows, and developing solutions in Python and SQL.
Key Responsibilities:
 * ETL Pipeline Development: Build efficient ETL pipelines using AWS Glue to process and transform large datasets.
 * Data Engineering: Work with AWS services such as S3, Glue, and SageMaker to design and implement scalable data architectures.
 * Solution Development: Develop Python and SQL solutions to support data analytics and business intelligence initiatives.
 * Integration: Integrate data from Salesforce and APIs to support business-critical applications.
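As a rough illustration of the extract-transform-load pattern these responsibilities describe, here is a minimal sketch in Python and SQL. It is deliberately generic: the inline CSV, column names, and in-memory SQLite table are hypothetical stand-ins for a real source (e.g. an object in S3) and a real warehouse target, not a depiction of any specific AWS Glue job.

```python
import csv
import io
import sqlite3

# Hypothetical inline CSV standing in for an extracted source file;
# the column names are made up for illustration.
RAW_CSV = """order_id,region,amount
1,EMEA,120.50
2,APAC,75.00
3,EMEA,not_a_number
4,AMER,310.25
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse the raw CSV into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: coerce types and drop rows that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append((int(row["order_id"]), row["region"], float(row["amount"])))
        except ValueError:
            continue  # skip malformed records (row 3 above)
    return clean

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load: write validated rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, region TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
# Downstream analytics query, as a Glue/Athena-style job might run it
totals = dict(conn.execute("SELECT region, SUM(amount) FROM orders GROUP BY region"))
print(totals)
```

In a production pipeline the same three stages would typically be expressed as a Glue PySpark job reading from and writing to S3, but the shape of the work (parse, validate, load, then query) is the same.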
Tech Stack:
 * AWS (S3, Glue, SageMaker)
 * Python, SQL
 * Salesforce, APIs
Requirements:
 * Experience: Proven track record in data engineering with AWS services.
 * Skills: Strong programming skills in Python and SQL.
 * Knowledge: Experience with ETL, data modeling, and pipeline optimization.
Beyond building pipelines, the candidate will collaborate with data scientists and analysts, uphold data governance, documentation, and engineering best practices, and work closely with cross-functional teams to drive business outcomes through data-driven insights.