The Open Video project focuses on OTT platform development for one of the largest North American TV providers.
We are seeking a highly skilled AI Engineer to join our Software Quality Engineering (SQE) team. The ideal candidate will have a dual focus: contributing to AI/ML solution development while performing the responsibilities of a Software Development Engineer in Test (SDET). This role involves building and deploying AI-powered solutions that enhance software testing, automation frameworks, and processes.
Responsibilities:
1. AI Solution Development:
* Develop AI-powered solutions to improve software quality assurance processes.
* Apply Retrieval-Augmented Generation (RAG) and other state-of-the-art AI/ML techniques to design innovative testing and debugging tools.
* Create models for triaging and debugging test case failures using structured and unstructured data, such as log files.
2. Framework Optimization:
* Enhance testing processes with AI to automate root-cause analysis, bug detection, and recommendation systems.
* Collaborate with cross-functional teams to integrate AI solutions into existing software testing pipelines.
3. MLOps Development:
* Design, implement, and maintain robust MLOps pipelines for continuous integration and deployment of AI/ML models.
* Automate workflows for model training, evaluation, and deployment.
4. Test Case Creation & Maintenance:
* Write, maintain, and optimize manual and automated test cases for functional, regression, and performance testing.
* Collaborate with development teams to define test strategies and acceptance criteria for new features.
5. Automation Framework Development:
* Build and enhance automation frameworks for Android, Web, and backend systems using tools such as Appium and Selenium.
* Integrate automation pre-checks, post-checks, and reporting systems to ensure test case reliability.
6. Test Data Management:
* Work with engineering teams to ensure proper test data management, including synthetic data generation for testing AI-based solutions.
Mandatory Skills:
* Strong knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
* Experience with MLOps tools and managed ML platforms (e.g., MLflow, Kubeflow, Amazon Bedrock, SageMaker).
* Proficiency in RAG frameworks and understanding of knowledge-based systems.
* Familiarity with LangChain and Hugging Face Transformers; experience working with Streamlit.
* Strong command of relevant AWS services, prompt engineering techniques, and DSPy.
* Proficiency in test automation tools (e.g., Appium, Selenium, JUnit, TestNG).
* Strong programming skills in languages such as Python and Java.
* Experience with CI/CD tools like Jenkins, GitLab CI, or equivalent.
General Skills:
* Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
* Knowledge of SQL and database management for test data validation.
* Strong debugging and troubleshooting skills in software systems.