 
        
        Job Title
We are seeking analytical professionals with hands-on experience in AI/LLM quality assurance and red teaming. The ideal candidate will rigorously test and evaluate AI-generated content to identify vulnerabilities and ensure compliance with safety and quality standards.
Key Responsibilities:
 * Conduct red teaming exercises to identify adversarial outputs from large language models (LLMs).
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
 * Develop and apply test cases to assess accuracy, bias, toxicity and misuse potential in AI-generated responses.
Collaborate with data scientists and prompt engineers to report risks and suggest mitigations. Perform manual QA and content validation ensuring factual consistency and coherence.
Requirements:
 * Proven experience in AI red teaming or LLM safety testing.
 * Familiarity with prompt engineering, NLP tasks and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review or test case development for AI/ML systems.