 
        
        Red Teaming Specialist
A Red Teaming Specialist is responsible for conducting adversarial testing of AI models to identify potential security vulnerabilities and biases.
Key Responsibilities:
 * Design and execute Red Teaming exercises to simulate real-world attacks on large language models (LLMs).
 * Evaluate AI prompts for potential failure modes and develop test cases to assess accuracy, bias, and toxicity.
 * Collaborate with data scientists and safety researchers to report risks and suggest mitigations.
 * Develop evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
 * Excellent critical thinking, pattern recognition, and analytical writing skills.
 * Ability to work independently and meet tight deadlines.