Job Title: AI Red Teaming Specialist
Our organization is seeking an experienced and highly analytical professional to fill the role of AI Red Teaming Specialist.
The successful candidate will have hands-on experience in red teaming, prompt evaluation, and AI/LLM quality assurance, and will be responsible for testing and evaluating AI-generated content to identify vulnerabilities and ensure compliance with safety and quality standards.
Key Responsibilities:
 * Conduct rigorous red teaming exercises to identify potential weaknesses in large language models.
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists to report risks and suggest mitigations.
 * Perform manual QA and content validation, ensuring factual consistency and adherence to guidelines.
 * Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
 * Understanding of LLM behaviors, failure modes, and model evaluation metrics.
 * Excellent critical thinking, pattern recognition, and analytical writing skills.