 
        
        AI Red Teaming Specialist
We are seeking detail-oriented professionals with hands-on experience in testing and evaluating AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.
Key Responsibilities:
 * Conduct thorough examinations of adversarial or unsafe outputs from large language models.
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists to report risks and suggest mitigations.
 * Perform manual quality assurance and content validation across model versions ensuring factual consistency, coherence, and guideline adherence.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or prompt design.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in quality assurance, content review, or test case development for AI systems.
 * Understanding of LLM behaviors, failure modes, and evaluation metrics.