 
About the Role

We are seeking highly analytical and detail-oriented professionals to join our team in conducting Red Teaming exercises.

* Red Teaming involves identifying adversarial, harmful, or unsafe outputs from large language models.
* Prompt Evaluation requires assessing AI prompts across multiple domains to uncover potential failure modes.
* AI/LLM Quality Assurance involves developing test cases to evaluate accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
* The successful candidate will collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.

Key Responsibilities