Red Teaming Specialist
We're seeking a seasoned Red Teaming Specialist to join our team. As a key contributor to our AI content quality assurance process, you'll play a vital role in ensuring the integrity and safety of our language models.
Key responsibilities include:
 * Conducting thorough Red Teaming exercises to identify vulnerabilities in large language models
 * Evaluating AI prompts for potential failure modes and biases
 * Developing test cases to assess accuracy, bias, and misuse potential in AI-generated responses
 * Collaborating with data scientists to report risks and suggest mitigations
 * Performing manual QA and content validation across model versions
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems
 * Understanding of LLM behaviors, failure modes, and model evaluation metrics
What We Offer:
We provide a challenging and rewarding work environment that fosters growth and collaboration. Our team is passionate about delivering high-quality AI solutions that make a real impact.