Job Description
We are looking for a Data/Software Engineer to lead the evolution of a petabyte‑scale data platform on AWS. In this role, you will design and build highly scalable, resilient, and high‑performance data pipelines and services, applying advanced software and data engineering practices. You will collaborate closely with product, engineering, and operations teams to enable next‑generation data‑driven marketing solutions, accelerate innovation, and ensure long‑term platform scalability and reliability.
Key Responsibilities
Design, implement, and evolve a petabyte‑scale AWS data platform, applying advanced engineering, distributed systems principles, and cloud‑native practices to ensure performance, reliability, and scalability.
Build and optimize highly scalable data pipelines using Scala, Spark, and cloud‑native services, ensuring efficient, resilient, and production‑ready data processing that supports analytics and downstream products.
Act as a principal‑level technical leader, mentoring engineers, driving high‑quality code reviews, and promoting best practices to elevate engineering maturity across the organization.
Contribute to architectural and design decisions, working closely with architects, product owners, and engineering leadership to guide platform evolution aligned with business strategy.
Plan and deliver complex technical initiatives, ensuring high‑quality outcomes, predictable execution, and effective collaboration within agile value‑stream teams.
Ensure operational readiness by following coding standards, testing practices, monitoring strategies, and release procedures, supporting production deployments to maintain platform stability.
Explore and adopt modern technologies, such as GraphQL integrations, vector databases, and AI/LLM‑based techniques, enhancing data access, platform usability, and development efficiency.
Qualifications
What we expect from you
Expert‑level software and data engineering experience in large‑scale data platforms.
Deep experience building petabyte‑scale systems using Scala and Apache Spark.
Strong expertise across the AWS data ecosystem (Glue, S3, Athena, Managed Airflow, Iceberg).
Advanced understanding of distributed systems and highly parallelized workloads.
Strong skills in query optimization, data partitioning, and efficient storage patterns.
Cloud ecosystem mastery (AWS required; Azure/GCP are pluses).
Experience influencing long‑term architecture and platform design.
Excellent code review, mentorship, and technical leadership capabilities.
Hands‑on experience working in agile environments and collaborating across multiple engineering teams.
Strong version control and multi‑repo collaboration skills (Git, GitHub, Bitbucket).
Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent experience.
Advanced English and availability to work hybrid in São Carlos/SP.
Nice to have
Experience with DBT or modern transformation frameworks.
Knowledge of concurrent and parallel programming.
Experience with AWS DMS or similar migration tools.
Familiarity with Angular and TypeScript.
Experience with vector databases (pgvector, Redis vector fields, etc.).
Practical experience applying AI/LLM techniques to data platforms, data access, or software quality.
Additional Information
Experian Careers - Creating a better tomorrow together