        Content Assurance Specialist
We are seeking skilled professionals to test and evaluate AI-generated content, ensuring it meets safety and quality standards.
Responsibilities:
 * Conduct adversarial testing on large language models to identify harmful outputs.
 * Evaluate and stress-test AI prompts across finance, healthcare, and security domains.
 * Develop test cases to assess accuracy, bias, toxicity, hallucinations, and misuse in AI responses.
 * Collaborate with data scientists, researchers, and prompt engineers to report risks and suggest mitigations.
 * Perform manual QA and content validation, ensuring factual consistency and guideline adherence.
 * Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
 * Document findings, edge cases, and vulnerabilities in clear reports.
Required Skills:
 * Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
 * Familiarity with prompt engineering, NLP tasks, and related ethical considerations.
 * Strong background in quality assurance, content review, or test case development.
 * Understanding of LLM behaviors, failure modes, and model evaluation metrics.
 * Critical thinking, pattern recognition, and analytical writing skills.
 * Ability to work independently and meet deadlines.
Preferred Qualifications:
 * Experience working with teams focused on AI innovation.
 * Experience in risk assessment, red team security testing, or AI policy and governance.
 * A background in linguistics, psychology, or computational ethics is advantageous.
Next Steps:
 * Complete two assessments: the Assessment Test and the Language Test.
 * Register on our platform and complete your profile.
 * Share your resume if interested.