We are seeking highly analytical and detail-oriented professionals to rigorously test and evaluate AI-generated content for vulnerabilities, risks, and adherence to quality standards.
Key Responsibilities:
 * Conduct red-teaming exercises to elicit and identify adversarial or harmful outputs from large language models.
 * Evaluate and stress-test AI prompts to uncover potential failure modes.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
 * Perform manual QA and content validation to ensure factual consistency, coherence, and guideline adherence.
Requirements:
 * Proven experience in AI red teaming or LLM safety testing.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
 * Understanding of LLM behaviors and model evaluation metrics.
Preferred Qualifications:
 * Prior work with organizations such as OpenAI or with other LLM safety initiatives.
 * Experience in risk assessment or AI policy & governance.
Next Steps:
To proceed in the evaluation process, complete two assessments: an Assessment Test (evaluating linguistic and analytical skills) and a Language Test.
Action Required: XConnect Registration.