Job Opportunity
We are seeking analytical professionals with hands-on experience in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance to join our testing and evaluation team.
The ideal candidate will assess risks in AI-generated content and ensure it meets safety, ethical, and quality standards.
Responsibilities:
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
 * Develop test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists and prompt engineers to report risks and suggest mitigations.
 * Perform manual QA and content validation, ensuring factual consistency, coherence, and guideline adherence.
 * Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
 * Document findings, edge cases, and vulnerability reports with high clarity and structure.
Qualifications:
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test-case development for AI/ML systems.
 * Understanding of LLM behaviors, failure modes, and model evaluation metrics.
 * Excellent critical thinking, pattern recognition, and analytical writing skills.
 * Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.
 * Potential for career growth and professional development.
Action Required: XConnect Registration
You will also receive an invitation to our internal job platform, XConnect. Please take a few minutes to register and complete your profile.