Artificial Intelligence Red Teaming Specialist
We are seeking highly analytical professionals to join our team as Artificial Intelligence Red Teaming Specialists. The ideal candidate will combine strong technical skills with a solid understanding of AI and machine learning concepts, and the ability to think critically and creatively.
The primary responsibility of this role is to rigorously test and evaluate AI-generated content to identify potential vulnerabilities and assess risks. This involves designing and applying test cases that probe AI-generated responses for inaccuracy, bias, toxicity, hallucinations, and misuse potential.
Key Responsibilities:
 * Conduct thorough analysis and evaluation of AI-generated content to identify flaws and areas for improvement.
 * Design and implement test cases that assess the accuracy, reliability, and safety of AI-generated responses.
 * Collaborate with data scientists, safety researchers, and prompt engineers to report findings and suggest mitigations.
 * Perform manual quality assurance and content validation across model versions, checking for factual consistency, coherence, and adherence to guidelines.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
What We Offer:
 * A competitive compensation package.
 * A dynamic and collaborative work environment.
 * Ongoing training and professional development opportunities.