 
Job Summary:
We are seeking highly analytical and detail-oriented professionals to rigorously test and evaluate AI-generated content.

Key Responsibilities:
- Conduct rigorous red teaming exercises to identify vulnerabilities in large language models.
- Evaluate and stress-test AI prompts across multiple domains, verifying accuracy and probing for bias, toxicity, hallucinations, and misuse potential.
- Develop comprehensive test cases to assess the effectiveness of AI-generated responses.
- Collaborate with data scientists to report risks and suggest mitigations that improve AI model performance.

Requirements:

Essential Skills:
- Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
- Familiarity with NLP tasks and ethical considerations in generative AI.
- Strong background in quality assurance for AI/ML systems.
- Understanding of LLM behaviors and model evaluation metrics.

Preferred Qualifications:
- Prior work with teams focused on LLM safety initiatives.

Onboarding Process:
Please register on our internal job platform to access project details and communication channels. All documentation and collaboration will be managed through this platform.