 
        
        Red Teaming Expert Job Description
\
We are seeking a highly analytical and detail-oriented professional to join our team as a Red Teaming Expert.
The primary responsibility of this role will be to conduct rigorous testing and evaluation of AI-generated content to identify potential vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.
\
Key Responsibilities:
\
\
 1. Conduct Red Teaming exercises to identify adversarial, harmful, or unsafe outputs from LLMs.
\
 2. Evaluate and stress-test AI prompts across multiple domains (e.g. finance, healthcare, security) to uncover potential failure modes.
\
 3. Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
\
 4. Collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
\
 5. Perform manual QA and content validation across model versions, ensuring factual consistency, coherence, and guideline adherence.
\
 6. Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
\
 7. Document findings, edge cases, and vulnerability reports with high clarity and structure.
\
\
Requirements:
\
\
 8. Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
\
 9. Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
\
 10. Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
\
 11. Understanding of LLM behavior, failure modes, and model evaluation metrics.
\
 12. Excellent critical thinking, pattern recognition, and analytical writing skills.
\
 13. Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.
\