Salary
Compensation for this Data Engineer role is competitive and aligned with industry standards.
Job Description
We are seeking a skilled Data Engineer to join our data engineering team. You will design, develop, and maintain scalable data pipelines in Python, Airflow, and PySpark to process large volumes of financial transaction data.
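To give a concrete flavor of the day-to-day work, here is a minimal sketch of an Airflow DAG that submits a PySpark job over a day's transactions. The DAG id, schedule, and script path are hypothetical placeholders, not a description of our actual pipelines.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="transactions_daily",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Submit a PySpark application that aggregates the day's transaction data.
    aggregate = SparkSubmitOperator(
        task_id="aggregate_transactions",
        application="jobs/aggregate_transactions.py",  # hypothetical script path
        conn_id="spark_default",
    )
```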
Key Responsibilities
* Design, develop, and maintain robust data pipelines using Python, Airflow, and PySpark.
* Implement and optimize MLOps infrastructure on AWS to automate the full machine learning lifecycle from development to production.
* Build and maintain deployment pipelines for ML models using SageMaker and other AWS services (see the sketch after this list).
* Collaborate with data scientists and business stakeholders to implement machine learning solutions for fraud detection, risk assessment, and financial forecasting.
* Ensure data quality, reliability, and security across all data engineering workloads.
* Optimize data architecture to improve performance, scalability, and cost-efficiency.
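As a rough illustration of the model-deployment side of the role, below is a minimal sketch using the SageMaker Python SDK. The S3 artifact path, IAM role ARN, and entry-point script are hypothetical placeholders; in practice this step would be driven by a CI/CD pipeline rather than run by hand.

```python
from sagemaker.sklearn.model import SKLearnModel

# Wrap a trained model artifact stored in S3 (paths and role are placeholders).
model = SKLearnModel(
    model_data="s3://example-bucket/models/fraud/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",  # hypothetical inference handler
    framework_version="1.2-1",
)

# Deploy the model to a real-time endpoint for fraud-score inference.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```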
Required Skills and Qualifications
* 3-5 years of experience in data engineering, with a focus on MLOps in production environments.
* Strong proficiency in Python and data processing frameworks, particularly PySpark.
* Experience with workflow orchestration tools, particularly Airflow.
* Hands-on experience with the AWS stack, especially SageMaker, Lambda, S3, and other relevant services.
* Working knowledge of machine learning model deployment and monitoring in production.
* Experience with data modeling and database systems (SQL and NoSQL).
* Familiarity with containerization (Docker) and CI/CD pipelines.
* Excellent problem-solving skills and ability to work in a fast-paced fintech environment.
Benefits
This role offers opportunities for growth and professional development in a dynamic, innovative environment.
Other Information
We value diversity and inclusion in the workplace and are committed to providing equal opportunities for all candidates.