About the Role

As a key member of our team, you will play a pivotal role in designing and implementing an AWS Serverless DataLake architecture. Your primary objective will be to efficiently handle large volumes of data and support various data processing workflows. You will develop data ingestion pipelines and integration processes, ensuring the smooth and reliable transfer of data from diverse sources into the DataLake. To ensure data quality and consistency, you will implement transformation and enrichment processes using AWS Lambda, Glue, or similar serverless technologies.

Collaborating closely with data scientists and analysts, you will design appropriate data models and schemas in the DataLake. This involves understanding their data requirements and optimizing storage and retrieval mechanisms to provide high-performance access to the data. You will be responsible for monitoring and troubleshooting the DataLake infrastructure, identifying and resolving performance bottlenecks, data processing errors, and other issues. Additionally, you will continuously evaluate new AWS services and technologies to enhance the DataLake architecture, improve data processing efficiency, and drive innovation.

As a seasoned professional, you will mentor and provide technical guidance to junior data engineers, fostering their growth and ensuring adherence to best practices. You will also collaborate with cross-functional teams to understand business requirements, prioritize tasks, and deliver high-quality solutions within defined timelines.