Cloud-Ready Data Engineer
We are seeking a highly skilled Data Engineer with strong expertise in Machine Learning Operations (MLOps) and cloud infrastructure. This role involves designing, developing, and maintaining scalable data pipelines using Python, Airflow, and PySpark to process large volumes of financial transaction data.
Key Responsibilities:
* Design and develop robust data pipelines to support payment processing systems, fraud detection algorithms, and financial analytics solutions.
* Implement and optimize MLOps infrastructure on AWS to automate the machine learning lifecycle from development to production.
* Collaborate with data scientists and business stakeholders to implement machine learning solutions for fraud detection, risk assessment, and financial forecasting.
* Ensure data quality, reliability, and security across all data engineering workloads.
Qualifications & Skills:
* 3-5 years of experience in Data Engineering with a focus on MLOps in production environments.
* Strong proficiency in Python and data processing frameworks such as PySpark.
* Experience with workflow orchestration tools, particularly Airflow.
* Hands-on experience with the AWS stack, especially SageMaker, Lambda, and S3.
* Working knowledge of machine learning model deployment and monitoring in production.
* Excellent problem-solving skills and ability to work in a fast-paced fintech environment.