Job Title: AI Safety and Security Expert
We are seeking an analytical professional with experience in red teaming, prompt evaluation, and AI/LLM quality assurance. The successful candidate will help us test and evaluate AI-generated content to identify vulnerabilities and ensure safety standards are met.
Responsibilities:
* Conduct red teaming exercises to identify harmful outputs from large language models (LLMs).
* Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes.
* Develop test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
* Collaborate with data scientists to document identified risks and recommend mitigations.
* Perform manual QA and content validation across model versions, ensuring factual consistency and guideline adherence.
Requirements:
* Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
* Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
* Strong background in quality assurance, content review, or test case development for AI/ML systems.
Preferred Qualifications:
* Prior work with teams such as OpenAI, Anthropic, Google DeepMind, or other LLM safety initiatives.