Success in fintech depends on robust data pipelines and ML infrastructure. Our team is looking for a skilled Data Engineer with strong MLOps expertise and machine learning modeling experience in the financial domain. In this role, you will design scalable data architectures using Python, Airflow, and PySpark to process large volumes of financial transaction data.
* The ideal candidate should have hands-on experience with workflow orchestration tools, particularly Airflow.
* Strong proficiency in Python programming and data processing frameworks (PySpark) is required.
As part of our commitment to innovation, we collaborate with prestigious organizations to deliver cutting-edge fintech solutions.
We build scalable, high-performance payment processing systems, fraud detection algorithms, and financial analytics platforms.
About the Role
This remote role offers the opportunity to work with state-of-the-art machine learning and cloud infrastructure in a dynamic and growth-oriented environment.
Responsibilities:
* Design, develop, and maintain scalable data pipelines using Python, Airflow, and PySpark to process large volumes of financial transaction data.
* Implement and optimize MLOps infrastructure on AWS to automate the full machine learning lifecycle from development to production.
* Build and maintain deployment pipelines for ML models using SageMaker and other AWS services.
* Collaborate with data scientists and business stakeholders to implement machine learning solutions for fraud detection, risk assessment, and financial forecasting.
* Ensure data quality, reliability, and security across all data engineering workloads.
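To give a flavor of the data-quality work above, here is a minimal, hypothetical sketch of a validation step a pipeline task might run on transaction records before loading. The record fields, rule names, and thresholds are illustrative assumptions, not details from this posting:

```python
from dataclasses import dataclass

# Hypothetical transaction record; field names are illustrative only.
@dataclass
class Transaction:
    txn_id: str
    amount: float
    currency: str

def validate(txns):
    """Split records into valid and rejected based on simple quality rules."""
    valid, rejected = [], []
    seen_ids = set()
    for t in txns:
        if not t.txn_id or t.txn_id in seen_ids:   # missing or duplicate ID
            rejected.append((t, "bad_id"))
        elif t.amount <= 0:                        # non-positive amount
            rejected.append((t, "bad_amount"))
        elif len(t.currency) != 3:                 # expect a 3-letter ISO-4217 code
            rejected.append((t, "bad_currency"))
        else:
            seen_ids.add(t.txn_id)
            valid.append(t)
    return valid, rejected
```

In a production pipeline, a check like this would typically run as a PySpark job orchestrated by an Airflow task, with rejected records routed to a quarantine table for review rather than dropped.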
Requirements:
* 3-5 years of experience in Data Engineering with a focus on MLOps in production environments.
* Strong knowledge of containerization (Docker) and CI/CD pipelines is highly desirable.
* Familiarity with data modeling and database systems (SQL and NoSQL).
* Knowledge of the financial services or payment processing domain is highly desirable.
* Exceptional problem-solving skills and the ability to work in a fast-paced fintech environment.