Job Title: AI Safety Specialist
About the Role
We are seeking a highly analytical, detail-oriented professional to join our team, with hands-on experience in AI safety testing, risk assessment, and quality assurance.
Key Responsibilities:
 * Conduct thorough evaluations of large language models (LLMs) to identify vulnerabilities and assess risks.
 * Design and execute test cases to evaluate accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists and safety researchers to report identified risks and propose mitigations.
 * Perform manual quality assurance and content validation across model versions to ensure factual consistency and coherence.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
Preferred Qualifications:
 * Prior work with teams focused on LLM safety initiatives.
 * Experience in risk assessment, red team security testing, or AI policy & governance.
What We Offer
A dynamic and challenging work environment where you can apply your skills and expertise to drive meaningful impact in AI safety.