Join the AI Security team of a large multinational financial company
Responsibilities:
* Own and advance the security posture of our AI-powered products, platforms, and infrastructure.
* Operate at the intersection of cybersecurity and AI, defending against novel AI-specific threats while enabling engineering teams to ship AI features quickly and safely.
* Own the full AI security lifecycle: from threat modeling LLM integrations and designing guardrails against prompt injection, to securing model supply chains, hardening RAG pipelines, and building automated security tooling that scales with our AI platform.
Requirements:
· 8+ years of experience in cybersecurity or application security
· 1+ years focused on AI/ML security
· Deep understanding of LLM security risks: prompt injection, jailbreaking, data leakage, insecure output handling, and supply chain vulnerabilities
· 1+ years of experience securing AI/ML systems in production — including model serving, RAG pipelines, agentic AI, and API orchestration layers
· 2+ years of experience with cloud-native security across AWS, Azure, or GCP (IAM, network security, encryption)
· 3+ years of experience with security tooling: SAST, DAST, SCA, SIEM
· 2+ years of experience in authentication/authorization systems: OAuth 2.0, OIDC, SAML, RBAC
· 2+ years of software engineering experience in Python
· 3+ years of experience at large/international corporations
ENGLISH – ADVANCED (Professional) or FLUENT
Preferred:
· Good understanding of Secure SDLC and DevSecOps practices
· Familiarity with AI security tools: Garak, Rebuff, NeMo Guardrails, Prompt Guard, LLM Guard
· Experience with vector database security: access control for Pinecone, Weaviate, ChromaDB, and pgvector