Job Overview
Data engineers play a pivotal role in unlocking the value of data. In this position, you will design and implement a robust serverless data lake architecture on AWS that handles large volumes of data efficiently.
You will develop data ingestion pipelines and integration processes to ensure seamless, reliable data transfer from a variety of sources into the data lake. You will also implement data transformation and enrichment processes using AWS Lambda, AWS Glue, or similar serverless technologies to maintain data quality and consistency.
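To give a flavour of the transformation work described above, here is a minimal sketch of an event-driven cleanup step in Python, the kind of logic that might run inside an AWS Lambda function in a serverless data lake. The record shape, field names, and `records` key are hypothetical; in a real deployment the handler would be wired to an S3 or Kinesis event source and write its output back to the lake.

```python
import json


def transform_record(record: dict) -> dict:
    """Normalise one raw record: lowercase the keys and drop empty fields.

    A stand-in for the kind of per-record enrichment/cleanup step a
    Lambda-based pipeline might apply before data lands in the lake.
    """
    return {k.lower(): v for k, v in record.items() if v not in (None, "")}


def handler(event, context):
    # AWS Lambda entry point: expects a batch of raw records under a
    # hypothetical "records" key and returns the cleaned batch.
    records = event.get("records", [])
    cleaned = [transform_record(r) for r in records]
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```

Keeping `transform_record` as a pure function, separate from the Lambda plumbing in `handler`, makes the transformation logic easy to unit-test locally without any AWS infrastructure.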
Key responsibilities include collaborating with data scientists and analysts to understand their data requirements, designing suitable data models and schemas for the data lake, optimizing data storage and retrieval, and monitoring and troubleshooting the data lake infrastructure.
A strong background in cloud computing, big data processing, and data engineering is essential, along with hands-on experience with AWS services such as S3, Lambda, Glue, Athena, Redshift, and DynamoDB. Proficiency in a programming language such as Python, Java, or Scala, and solid SQL skills for data manipulation and querying, are critical for this role.
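As an illustration of the SQL side of the role, the sketch below builds a partition-pruned query of the kind you might submit to Athena against data stored in S3. The `datalake.events` table, its columns, and the Hive-style `dt` partition key are all hypothetical examples, not part of this posting.

```python
QUERY_TEMPLATE = """
SELECT event_type, COUNT(*) AS n
FROM datalake.events
{where}
GROUP BY event_type
"""


def partition_filter(year: int, month: int) -> str:
    # Restrict the scan to one Hive-style partition (dt=YYYY-MM) so
    # Athena reads only that month's S3 objects instead of the full table.
    return f"WHERE dt = '{year:04d}-{month:02d}'"


def build_query(year: int, month: int) -> str:
    """Assemble a monthly aggregation query with partition pruning."""
    return QUERY_TEMPLATE.format(where=partition_filter(year, month))
```

Pruning on the partition column is one of the main cost and performance levers when querying a data lake with Athena, since billing scales with the volume of data scanned.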
The ideal candidate will possess a strong problem-solving and analytical approach, excellent communication and collaboration skills, and the ability to work effectively in a cross-functional team environment. If you are passionate about data engineering and want to contribute to building scalable and efficient data processing systems, apply now!