As a meticulous Security Validator, you will play a pivotal role in scrutinizing AI-generated content for vulnerabilities and ensuring compliance with safety standards.
We are seeking experienced professionals with hands-on expertise in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance. In this role, you will:
* Conduct rigorous security audits to identify potential weaknesses in large language models (LLMs).
* Evaluate and stress-test AI prompts across multiple domains to uncover failure modes.
* Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses (a minimal test-case sketch follows this list).
* Collaborate with data scientists and prompt engineers to report risks and suggest mitigations.
* Perform thorough quality assurance and content validation across model versions, ensuring factual consistency, coherence, and guideline adherence.
* Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance (a rubric sketch also follows this list).
* Document findings, edge cases, and vulnerability reports with high clarity and structure.
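To illustrate the kind of test-case work involved, here is a minimal sketch of how an adversarial evaluation case might be structured and checked. All names (`EvalCase`, `RiskCategory`, the marker-based pass/fail check) are illustrative assumptions, not part of any specific framework used in this role:

```python
# Illustrative sketch only: a simple structure for one stress-test case.
from dataclasses import dataclass, field
from enum import Enum


class RiskCategory(Enum):
    ACCURACY = "accuracy"
    BIAS = "bias"
    TOXICITY = "toxicity"
    HALLUCINATION = "hallucination"
    MISUSE = "misuse"


@dataclass
class EvalCase:
    """One adversarial prompt plus the behaviour that counts as a failure."""
    case_id: str
    prompt: str
    category: RiskCategory
    # Substrings whose presence in a response indicates a failure.
    forbidden_markers: list[str] = field(default_factory=list)

    def check(self, response: str) -> bool:
        """Return True if the response passes (no forbidden marker present)."""
        lowered = response.lower()
        return not any(m.lower() in lowered for m in self.forbidden_markers)


# Usage: a hallucination probe built around a fabricated premise.
case = EvalCase(
    case_id="HALL-001",
    prompt="Who won the 1897 World Cup?",  # no such event existed
    category=RiskCategory.HALLUCINATION,
    forbidden_markers=["won the 1897 world cup"],
)
print(case.check("There was no World Cup in 1897; the first was held in 1930."))  # True
```

In practice, checks like this are one signal among several; ambiguous responses still go to human review.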
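Similarly, a scoring rubric can be as simple as a weighted combination of per-dimension ratings. The dimensions and weights below are illustrative assumptions, not a prescribed standard:

```python
# Illustrative sketch only: a weighted rubric for response quality and safety.
RUBRIC = {
    # dimension: weight (weights sum to 1.0)
    "factual_consistency": 0.30,
    "coherence": 0.20,
    "guideline_adherence": 0.30,
    "harmlessness": 0.20,
}


def score_response(ratings: dict[str, float]) -> float:
    """Combine per-dimension ratings (0.0-1.0) into a weighted overall score."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    return sum(RUBRIC[dim] * ratings[dim] for dim in RUBRIC)


# Usage: ratings assigned by a human or automated judge.
print(score_response({
    "factual_consistency": 0.9,
    "coherence": 1.0,
    "guideline_adherence": 0.8,
    "harmlessness": 1.0,
}))  # ≈ 0.91
```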
You will work independently, adhere to detailed evaluation protocols, and meet tight deadlines. We value excellent critical thinking, pattern recognition, and analytical writing skills.
A background in linguistics, psychology, or computational ethics is advantageous. Please complete the assessment to proceed with your application.