Financial Data Engineer
In this role, you will build robust data pipelines and ML infrastructure supporting our payment processing systems, fraud detection algorithms, and financial analytics solutions. This includes designing, developing, and maintaining scalable pipelines in Python, Airflow, and PySpark to process large volumes of financial transaction data.
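To give a concrete flavour of this work, here is a minimal, illustrative sketch of a daily Airflow DAG that runs a PySpark aggregation over transaction data. All bucket paths, column names, and schedules are hypothetical placeholders, not details of our actual systems.

```python
# Illustrative sketch only (assumes Airflow 2.x and PySpark); paths and names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def aggregate_transactions(ds, **_):
    """Aggregate one day of transaction data with PySpark (assumed S3 layout)."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_txn_aggregation").getOrCreate()
    # Read the day's partition of raw transactions (hypothetical bucket layout).
    txns = spark.read.parquet(f"s3://example-bucket/transactions/dt={ds}/")
    # Roll up per-merchant totals for downstream analytics and fraud features.
    daily = (
        txns.groupBy("merchant_id")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("txn_count"))
    )
    daily.write.mode("overwrite").parquet(f"s3://example-bucket/aggregates/dt={ds}/")
    spark.stop()


with DAG(
    dag_id="daily_transaction_aggregation",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="aggregate_transactions",
        python_callable=aggregate_transactions,
    )
```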
Key Responsibilities
* Implement and optimize MLOps infrastructure on AWS to automate the full machine learning lifecycle from development to production.
* Build and maintain deployment pipelines for ML models using SageMaker and other AWS services.
* Collaborate with data scientists and business stakeholders to implement machine learning solutions for fraud detection, risk assessment, and financial forecasting.
* Ensure data quality, reliability, and security across all data engineering workloads.
* Optimize data architecture to improve performance, scalability, and cost-efficiency.
* Implement monitoring and alerting systems to ensure production ML models perform as expected (see the sketch after this list).
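As an illustrative sketch of the monitoring responsibility above, the following boto3 snippet creates a CloudWatch alarm on a SageMaker endpoint's invocation errors. The endpoint, alarm, and SNS topic names are hypothetical placeholders, not references to our production resources.

```python
# Illustrative sketch only: alarm on 5XX errors from a SageMaker endpoint.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="fraud-model-endpoint-5xx-errors",  # hypothetical alarm name
    Namespace="AWS/SageMaker",
    MetricName="Invocation5XXErrors",
    Dimensions=[
        {"Name": "EndpointName", "Value": "fraud-detection-endpoint"},  # hypothetical endpoint
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    Statistic="Sum",
    Period=300,                # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ml-alerts"],  # hypothetical SNS topic
)
```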
Qualifications & Skills
* Strong proficiency in Python and data processing frameworks such as PySpark.
* Hands-on experience with the AWS stack, especially SageMaker, Lambda, S3, and other relevant services.
* Working knowledge of machine learning model deployment and monitoring in production.
* Experience with data modeling and database systems (SQL and NoSQL).
* Knowledge of containerization (Docker) and CI/CD pipelines.