Job Title: Cybersecurity Specialist
We are seeking highly analytical professionals with hands-on experience in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance.
Key Responsibilities:
 * Conduct thorough security assessments to identify vulnerabilities and risks associated with large language models.
 * Evaluate and stress-test AI prompts across multiple domains to uncover potential failure modes and areas for improvement.
 * Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
 * Collaborate with data scientists and prompt engineers to report findings and suggest mitigation strategies.
 * Perform manual QA and content validation across model versions to ensure quality and safety standards are met.
Requirements:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
 * Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
Preferred Qualifications:
 * Prior work with teams focused on LLM safety initiatives.
 * A background in linguistics, psychology, or computational ethics.
The ideal candidate will have strong critical thinking, pattern recognition, and analytical writing skills, with the ability to work independently and meet tight deadlines.