Job Title: Data Engineer, Machine Learning Operations (MLOps)
About the Role
We are seeking an experienced Data Engineer with strong MLOps expertise and machine learning modeling experience in the financial domain.
This role combines data engineering with MLOps: you will build the pipelines and infrastructure that process large volumes of financial transaction data and carry machine learning models from development to production.
Key Responsibilities
* Design and Develop Data Pipelines: Build robust data pipelines using Python, Airflow, and PySpark to process large volumes of financial transaction data.
* MLOps Infrastructure: Implement and optimize MLOps infrastructure on AWS to automate the full machine learning lifecycle from development to production.
* ML Model Deployment: Build and maintain deployment pipelines for ML models using SageMaker and other AWS services.
* Collaboration with Stakeholders: Collaborate with data scientists and business stakeholders to implement machine learning solutions for fraud detection, risk assessment, and financial forecasting.
* Data Quality and Security: Ensure data quality, reliability, and security across all data engineering workloads.
Requirements and Qualifications
* Experience: 3-5 years in Data Engineering, with a focus on MLOps in production environments.
* Technical Skills: Strong proficiency in Python and data processing frameworks such as PySpark; experience with workflow orchestration tools such as Airflow; hands-on experience with the AWS stack, especially SageMaker, Lambda, and S3.
* Educational Background: Bachelor's or Master's degree in Computer Science, Mathematics, Statistics, or a related field.
What We Offer
* A dynamic, growth-oriented environment with opportunities to work on advanced financial technology projects.
* A competitive salary and benefits package, with support for professional development.
* The chance to work with state-of-the-art machine learning and cloud infrastructure.