
QA/Red Teaming Expert

Campina Grande
Innodata Inc.
Posted October 2
Description

Job Description:
We are seeking highly analytical and detail-oriented professionals with hands-on experience in Red Teaming, Prompt Evaluation, and AI/LLM Quality Assurance. The ideal candidate will help us rigorously test and evaluate AI-generated content to identify vulnerabilities, assess risks, and ensure compliance with safety, ethical, and quality standards.

Job Title: Red Teaming, Prompt Evaluation, or AI/LLM Quality Assurance Expert

Key Responsibilities:

  • Conduct Red Teaming exercises to identify adversarial, harmful, or unsafe outputs from large language models (LLMs).
  • Evaluate and stress-test AI prompts across multiple domains (e.g., finance, healthcare, security) to uncover potential failure modes.
  • Develop and apply test cases to assess accuracy, bias, toxicity, hallucinations, and misuse potential in AI-generated responses.
  • Collaborate with data scientists, safety researchers, and prompt engineers to report risks and suggest mitigations.
  • Perform manual QA and content validation across model versions, ensuring factual consistency, coherence, and guideline adherence.
  • Create evaluation frameworks and scoring rubrics for prompt performance and safety compliance.
  • Document findings, edge cases, and vulnerability reports with high clarity and structure.

Requirements:

  • Proven experience in AI red teaming, LLM safety testing, or adversarial prompt design.
  • Familiarity with prompt engineering, NLP tasks, and ethical considerations in generative AI.
  • Strong background in Quality Assurance, content review, or test case development for AI/ML systems.
  • Understanding of LLM behaviours, failure modes, and model evaluation metrics.
  • Excellent critical thinking, pattern recognition, and analytical writing skills.
  • Ability to work independently, follow detailed evaluation protocols, and meet tight deadlines.

Preferred Qualifications:

  • Prior work with teams such as OpenAI, Anthropic, Google DeepMind, or other LLM safety initiatives.
  • Experience in risk assessment, red team security testing, or AI policy & governance.
  • Background in linguistics, psychology, or computational ethics is a plus.

Kindly attempt the assessment below to proceed with your application:
https://icap.innodata.com/registerfreelancer?enc=oUTZVsr/Pnz/0Xygc2EK32MdtinqnjC9vy8RU3Ha4EMfQdYWF8n1oAiUtOZXMcKhhhsYYecqmKWja8eKXV801gezikielezikiel



© 2025 Jobijoba Brasil - All rights reserved
