Imagine transforming your career in a collaborative, diverse and innovative environment that encourages teamwork. Here, you'll have the right tools to develop your ideas and contribute to our success.
We're looking for an experienced professional who wants to learn and grow with us. In this role, you'll design, develop and maintain scalable data pipelines using Azure services.
Key Responsibilities:
* Design high-performance data processing solutions with Apache Spark on Azure Databricks
* Collaborate with stakeholders to understand data requirements and deliver clean datasets
* Optimize data workflows for performance and cost efficiency in the cloud
* Implement best practices around data security, governance and compliance
* Develop CI/CD pipelines for data engineering workflows
Requirements:
* Experience in Python, PySpark and Azure Data Services
* Azure Data Factory, Snowflake and Denodo skills are desirable
Benefits:
* Professional development and continuous growth of your skills
* Opportunities to work outside Brazil
* A collaborative and innovative environment
We promote an inclusive culture and work toward equity. Join our team and become part of a company that respects individuality.