Job Summary:
We are seeking a highly skilled Data Engineer to drive the development and optimization of our data pipelines. This is a critical role that requires strong technical expertise, excellent communication skills, and the ability to collaborate effectively with cross-functional teams.
Key Responsibilities:
* Design, build, and maintain scalable data architectures for large datasets.
* Develop and implement efficient data processing workflows using Databricks, Spark, and Python.
* Collaborate with stakeholders to ensure data is used in line with governance and compliance requirements.
Requirements:
* Strong experience with cloud-based data management systems, particularly within the AWS environment.
* Proven ability to design, develop, and manage complex data pipelines.
* Excellent problem-solving skills and the ability to work effectively in a team environment.
Technical Skills:
* Solid knowledge of Databricks, Spark, Python, Scala, SQL, Terraform, and AWS services including S3, Glue, Athena, Lambda, and Step Functions.