Job Title: AI/ML Safety Specialist
We are seeking analytical professionals to join our team and contribute to the safe development and evaluation of AI systems and large language models (LLMs).
Responsibilities:
* Conduct red-teaming exercises to identify security threats and harmful behaviors in LLMs.
* Evaluate and stress-test AI prompts across various domains, including finance, healthcare, and security.
* Develop test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
* Collaborate with data scientists to report risks and suggest mitigations.
Requirements:
* Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
* Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
* Strong background in Quality Assurance, content review, and test case development for AI/ML systems.
* Understanding of LLM behaviors, failure modes, and model evaluation metrics.
Our ideal candidate combines a strong grasp of AI and ML concepts with excellent analytical and problem-solving skills. If you are passionate about ensuring the safety and reliability of AI systems, we encourage you to apply for this exciting opportunity.