Mission
- Responsible for building, managing, and optimizing data pipelines;
- Support for defining data flows and access to data sources;
- Ensure compliance with data governance, data architecture, and data security requirements.
Responsibilities
- Create, maintain, and optimize data pipelines for data structures that encompass data modeling, data transformation, schemas, metadata, and workload management;
- Drive automation through effective metadata management;
- Participate in ensuring compliance and governance during data use, including the maintenance of the data catalog;
- Collaborate and support data owners and other stakeholders around data topics;
- Ensure the proper use of the data architecture defined by Data Factory team;
- Respect and follow security principles, ensuring compliance with data delivery.
Competencies & Skills
- Strong experience with various data management architectures, mainly in AWS environments;
- Strong experience with data modeling;
- Ability to design, build, and manage data pipelines;
- Experience working with commercial and open-source message queuing technologies;
- Highly creative and collaborative;
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Specific Technical Knowledge
- Solid knowledge in: Databricks, Spark, Python, Terraform, SQL, Scala, S3, AWS Glue, AWS Athena, AWS Lambda, AWS Step Functions, big data, relational databases;
- Experience in: EMR, Gitflow, Agile methodology, DevOps.
Languages
- English: Advanced or Fluent.
Additional information
- Remote work.