Job Description
Join one of the biggest IT companies in the world, where you can transform your career and grow as a professional.
Why join? We believe people make the difference, which is why we foster a culture of continuous learning, full of opportunities for growth and mutual development: the ideal setting to expand your ideas with the right tools, contributing to our success in a collaborative environment.
We are looking for a Data Engineer who wants to learn and transform their career.
Key responsibilities:
* Design and implement an AWS Serverless DataLake architecture to efficiently handle large volumes of data and support various data processing workflows;
* Develop data ingestion pipelines and data integration processes, ensuring the smooth and reliable transfer of data from various sources into the DataLake;
* Implement data transformation and data enrichment processes using AWS Lambda, Glue, or similar serverless technologies to ensure data quality and consistency;
* Collaborate with data scientists and analysts to understand their data requirements and design appropriate data models and schemas in the DataLake;
* Optimize data storage and retrieval mechanisms, leveraging AWS services such as S3, Athena, Redshift, or DynamoDB, to provide high-performance access to the data;
* Monitor and troubleshoot the DataLake infrastructure, identifying and resolving performance bottlenecks, data processing errors, and other issues;
* Continuously evaluate new AWS services and technologies to enhance the DataLake architecture, improve data processing efficiency, and drive innovation;
* Mentor and provide technical guidance to junior data engineers, fostering their growth and ensuring adherence to best practices;
* Collaborate with cross-functional teams to understand business requirements, prioritize tasks, and deliver high-quality solutions within defined timelines.
Required Skills and Qualifications
Extensive experience (5+ years) working as a Data Engineer, with a strong focus on AWS technologies and serverless architectures;
Experience as a Data Engineer with a focus on Azure is a plus;
In-depth knowledge of AWS services such as S3, Lambda, Glue, Athena, Redshift, and DynamoDB, and their capabilities for building scalable data processing systems;
Proven expertise in designing and implementing AWS serverless architectures for large-scale data processing and storage;
Strong programming skills in languages like Python, Java, or Scala, along with experience using SQL for data manipulation and querying;
Hands-on experience with data integration and ETL tools, such as AWS Glue or Apache Spark, for transforming and processing data;
English fluency;
Familiarity with data modeling techniques and data warehousing concepts, including star and snowflake schemas;
Solid understanding of data security, access control, and compliance requirements in a data-driven environment;
Experience with data visualization tools (e.g., Tableau, Power BI) and the ability to collaborate with analysts and data scientists to deliver actionable insights;
Strong problem-solving and analytical skills, with a detail-oriented approach to ensure data accuracy and integrity;
Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Benefits
Professional development and constant evolution of your skills, always in line with your interests;
Opportunities to work outside Brazil;
A collaborative, diverse and innovative environment that encourages teamwork;
Health insurance;
Dental Plan;
Life insurance;
Transportation vouchers;
Meal/Food Voucher;
Childcare assistance;
Gympass;
TCS Cares – a free 0800 hotline providing psychological assistance (24 hrs/day), as well as legal, social and financial assistance to associates;
Partnership with SESC;
Reimbursement of Certifications;
Free TCS Learning Portal – Online courses and live training;
International experience opportunity;
Discount Partnership with Universities and Language Schools;
Bring Your Buddy – refer candidates and become eligible for a bonus for each hire;
TCS Gems – Recognition for performance;
Xcelerate – Free Mentoring Career Platform.
Others
At TATA Consultancy Services we promote an inclusive culture and work continually toward equity across gender, disability, LGBTQIA+ identity, religion, race, and ethnicity. All our opportunities are based on these principles.