We are seeking a skilled Python Data Engineer with an AI/ML focus to join our data & analytics team. This role is ideal for someone who loves building scalable data pipelines, operationalizing machine learning workflows, and partnering closely with data scientists to bring models into production.
Key Responsibilities
* Build, optimize, and maintain ETL/ELT pipelines using Python, modern data engineering frameworks, and Snowflake as the central data warehouse.
* Architect and manage data workflows, ensuring accuracy, scalability, and reliability.
* Work closely with data scientists to deploy, monitor, and tune machine learning models.
* Develop feature engineering pipelines, preprocessing workflows, and model-serving APIs.
* Integrate data from various sources, including APIs, databases, cloud storage, and streaming platforms.
* Implement MLOps best practices, including model versioning, CI/CD for ML, and automated retraining workflows.
* Optimize data storage, compute usage, and performance within Snowflake and cloud-native tools.
Required Skills & Qualifications
* 3–7+ years of experience as a Data Engineer, Python Engineer, or in a similar backend/data role.
* Strong proficiency in Python, including building production-grade data pipelines.
* Experience with Snowflake: data modeling, Snowpipe, tasks, streams, stored procedures, and performance optimization.
* Experience with AI/ML workflows: feature engineering, inference pipelines, or model deployment.
* Proficiency in SQL and relational databases.
* Hands-on experience with at least one cloud platform.
* Experience with data orchestration tools such as Airflow, Prefect, or Dagster.
* Familiarity with MLOps tools such as MLflow, Kubeflow, SageMaker, or Vertex AI.