Role Description:
We seek experienced professionals to rigorously evaluate AI-generated content, identify vulnerabilities and assess risks.
Key Responsibilities:
 * Conduct red teaming exercises with adversarial prompts to surface harmful or unintended outputs from large language models.
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations and misuse potential in AI-generated responses.
 * Collaborate with data scientists and safety researchers to report risks and suggest mitigations.
 * Perform manual QA and content validation across model versions, ensuring factual consistency and coherence.
 * Document findings and vulnerability reports with high clarity and structure.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review or test case development for AI/ML systems.
 * Understanding of LLM behaviors and model evaluation metrics.
 * Excellent critical thinking and analytical writing skills.
Preferred Qualifications:
 * Prior work with teams focused on LLM safety initiatives.
 * Experience in risk assessment or AI policy & governance.
 * A background in linguistics, psychology or computational ethics is beneficial.
Next Steps:
To proceed with your application, complete the following: the Versant English Proficiency Test and XConnect Registration.