We are seeking a skilled Data Engineer with an AI/ML focus to join our organization.
Key Responsibilities:
* Develop, optimize, and maintain scalable data pipelines using Python, modern data engineering frameworks, and Snowflake as a central data warehouse.
* Design and manage complex data workflows, ensuring accuracy, scalability, and reliability.
* Collaborate with data scientists to deploy, monitor, and tune machine learning models.
* Create feature engineering pipelines, preprocessing workflows, and model-serving APIs.
* Integrate data from various sources (APIs, databases, cloud storage, streaming platforms).
* Implement MLOps best practices including versioning, CI/CD for ML, and automated retraining workflows.
* Optimize data storage, compute usage, and performance within Snowflake and cloud-native tools (AWS, GCP, or Azure).
Requirements:
* Extensive experience in ETL/ELT pipeline development using Python and modern data engineering frameworks.
* Strong understanding of data warehousing and cloud-native tools (Snowflake, AWS, GCP, or Azure).
* Familiarity with machine learning workflows, model deployment, and monitoring.
* Ability to work closely with cross-functional teams, including data scientists and engineers.
Benefits:
* Opportunity to work on cutting-edge AI/ML projects.
* Professional growth and development opportunities.
* A dynamic and collaborative work environment.
About the Role:
This is an opportunity to contribute directly to the success of our AI/ML initiatives. The ideal candidate has a strong background in data engineering, excellent problem-solving skills, and the ability to collaborate effectively with cross-functional teams.