Job Overview:
You will design and implement scalable data processing systems, leveraging AWS serverless technologies to handle large volumes of data.
* Develop efficient data ingestion pipelines and integration processes to transfer data from various sources into a centralized repository.
* Implement data transformation and enrichment processes using AWS Lambda, Glue, or similar technologies to ensure data quality and consistency.
* Collaborate with data scientists and analysts to understand their data requirements and design appropriate data models and schemas.
* Optimize data storage and retrieval mechanisms using AWS services such as S3, Athena, Redshift, or DynamoDB to provide high-performance access to the data.
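To give a concrete flavor of the transformation and enrichment work described above, here is a minimal, illustrative sketch of a Lambda-style handler that validates and enriches incoming records. The event shape and field names (`records`, `user_id`, `country`, `lifetime_orders`) are hypothetical; a real pipeline would match the schema of its upstream source (e.g. Kinesis or S3 event notifications).

```python
import json

def handler(event, context):
    """Illustrative Lambda handler: validate and enrich incoming records.

    All field names are invented for illustration; a production handler
    would follow the schema of the actual data source.
    """
    cleaned = []
    for record in event.get("records", []):
        # Data-quality check: drop records missing a required key.
        if "user_id" not in record:
            continue
        # Enrichment: normalize casing and add a derived field.
        record["country"] = record.get("country", "unknown").upper()
        record["is_new_user"] = record.get("lifetime_orders", 0) == 0
        cleaned.append(record)
    return {"statusCode": 200, "body": json.dumps(cleaned)}
```

In practice such a handler would be one stage in a larger pipeline, triggered by an event source and writing its output onward to S3 or a queue rather than returning it directly.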
Requirements:
* 5+ years of experience as a Data Engineer, with a strong focus on cloud-based technologies and scalable architectures.
* In-depth knowledge of AWS services and their capabilities for building robust data processing systems.
* Proven expertise in designing and implementing scalable data architectures for large-scale data processing and storage.
* Strong programming skills in languages like Python, Java, or Scala, along with experience using SQL for data manipulation and querying.
* Hands-on experience with data integration and ETL tools, such as AWS Glue or Apache Spark, for transforming and processing data.
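As a small illustration of the SQL data-manipulation skills listed above, the sketch below runs an aggregation query with Python's built-in sqlite3 module. The table and column names are invented for illustration; the same GROUP BY pattern applies at warehouse scale in engines like Athena or Redshift.

```python
import sqlite3

# Build a tiny in-memory table to query (schema is hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 120.0), (2, "west", 80.0), (3, "east", 50.0)],
)

# Aggregate revenue per region -- the kind of query a data engineer
# would run against a warehouse at much larger scale.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 170.0), ('west', 80.0)]
```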
What We Offer:
* Professional development and continuous skill enhancement.
* A collaborative, diverse team environment.
* A platform for growth and innovation.