Design and Implement a Scalable Data Architecture
We are seeking an experienced Data Engineer to design and implement a robust AWS serverless DataLake architecture. The role requires efficiently handling large volumes of data and supporting a variety of data processing workflows.
* Develop scalable data ingestion pipelines and integration processes, ensuring smooth and reliable transfer of data from various sources into the DataLake.
* Implement data transformation and enrichment processes using AWS Lambda, Glue, or similar serverless technologies to ensure data quality and consistency (see the sketch after this list).
* Collaborate with data scientists and analysts to understand their data requirements and design appropriate data models and schemas in the DataLake.
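To make the transformation and enrichment responsibility more concrete, here is a minimal sketch of one possible approach: an S3-triggered Lambda function that reads a raw JSON object from a landing zone, applies a small enrichment, and writes the result to a curated prefix. The bucket names, prefixes, and enrichment logic are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch of a serverless transformation step, assuming an S3-triggered
# Lambda. Bucket names, prefixes, and the enrichment logic are illustrative.
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

CURATED_BUCKET = "example-datalake-curated"  # hypothetical target bucket


def handler(event, context):
    """Read a raw JSON object from the landing zone, enrich it,
    and write the result to a curated prefix."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        raw = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())

        # Example enrichment: normalize field names and tag the source object.
        enriched = {k.lower(): v for k, v in raw.items()}
        enriched["source_key"] = key

        s3.put_object(
            Bucket=CURATED_BUCKET,
            Key=f"curated/{key}",
            Body=json.dumps(enriched).encode("utf-8"),
        )
```

In practice the same step could be expressed as an AWS Glue job instead of a Lambda function; the choice depends on data volume and latency requirements.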
Optimize Data Storage and Retrieval
We are looking for someone who can leverage AWS services such as S3, Athena, Redshift, or DynamoDB to optimize data storage and retrieval mechanisms, providing high-performance access to the data.
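As one hedged example of what "optimized retrieval" can look like in this stack, the sketch below runs an Athena query against a partitioned table in the Glue Data Catalog, so that only the relevant partition is scanned. The database, table, partition column, and output location are illustrative assumptions.

```python
# Minimal sketch of querying partitioned data in the lake through Athena.
# Database, table, partition column, and output location are illustrative.
import boto3

athena = boto3.client("athena")

QUERY = """
SELECT event_type, COUNT(*) AS events
FROM analytics.clickstream          -- hypothetical Glue catalog table
WHERE dt = '2024-01-01'             -- partition filter limits data scanned
GROUP BY event_type
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution id:", response["QueryExecutionId"])
```

Partitioning by date and storing data in a columnar format such as Parquet keeps scans small and queries fast, which is the kind of optimization this responsibility involves.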
Monitoring and Troubleshooting
The ideal candidate will be responsible for monitoring and troubleshooting the DataLake infrastructure, identifying and resolving performance bottlenecks, data processing errors, and other issues.
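As a simplified illustration of proactive monitoring, the sketch below creates a CloudWatch alarm on errors from an ingestion Lambda so that failures surface before downstream consumers notice them. The function name, SNS topic ARN, and thresholds are illustrative assumptions.

```python
# Minimal sketch of monitoring: a CloudWatch alarm on errors from an
# ingestion Lambda. Function name, topic ARN, and thresholds are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="datalake-ingest-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "datalake-ingest"}],
    Statistic="Sum",
    Period=300,                      # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    # Hypothetical SNS topic for alert notifications.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:datalake-alerts"],
)
```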