Job Summary
* We are seeking an expert to conduct red-teaming exercises and evaluate AI prompts for safety and accuracy.
* The ideal candidate will have experience in AI red teaming, LLM safety testing, or adversarial prompt design.
About the Role
This is a challenging opportunity to work both independently and in collaboration with data scientists, safety researchers, and prompt engineers to identify risks and suggest mitigations.
Responsibilities