 
        
        Highly analytical and detail-oriented professionals are required to rigorously test and evaluate AI-generated content. This involves identifying vulnerabilities, assessing risks, and ensuring compliance with safety, ethical, and quality standards.
Key Responsibilities:
 * Conduct Red Teaming exercises to identify harmful or unsafe outputs from large language models.
 * Evaluate and stress-test AI prompts across multiple domains.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists and safety researchers to report risks and suggest mitigations.
Requirements: Strong background in Quality Assurance, content review, or test case development for AI/ML systems, understanding of LLM behaviour's and failure modes, excellent critical thinking, pattern recognition, and analytical writing skills.