Red Team Specialist Job Description
We are seeking a skilled Red Team Specialist to join our team. The ideal candidate will have hands-on experience in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance.
Main Responsibilities:
* Conduct rigorous Red Teaming exercises to identify adversarial outputs from large language models (LLMs).
* Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
* Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
The successful candidate will be able to work independently and collaboratively with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
Requirements:
* Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
* Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
* Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
* Excellent critical thinking, pattern recognition, and analytical writing skills.
* Ability to meet tight deadlines and follow detailed evaluation protocols.
PREFERRED QUALIFICATIONS
* Prior work with teams like Open AI, Anthropic, Google DeepMind, or other LLM safety initiatives.
* Experience in risk assessment, red team security testing, or AI policy & governance.
A background in linguistics, psychology, or computational ethics is also beneficial.