Job Role: AI/LLM Security Specialist
We are seeking detail-oriented professionals with hands-on experience in red teaming, prompt evaluation, and AI/LLM quality assurance. This role requires rigorous testing and evaluation of AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.
Key Responsibilities:
* Conduct comprehensive red teaming exercises to elicit and identify harmful or adversarial outputs from large language models.
* Evaluate AI prompts across multiple domains to uncover failure modes and potential biases.
* Develop and apply test cases to assess accuracy, bias, and misuse potential in AI-generated responses.
Key Requirements:
* Proven experience in AI red teaming or LLM safety testing.
* Familiarity with prompt engineering, NLP tasks, and AI ethics.
Preferred Qualifications:
* Prior work on LLM safety initiatives.
* A background in linguistics or computational ethics.