We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products. This role leads the design and execution of structured red team engagements across multiple AI systems, translating technical findings into assurance guidance aligned with enterprise compliance frameworks.
Responsibilities
- Design and lead adversarial testing of LLM-based and other AI-driven systems
- Conduct threat modelling across the model, infrastructure, and data layers
- Execute and oversee testing for: prompt injection, jailbreaking, model exploitation, data leakage and extraction, and RAG system manipulation
- Translate findings into structured, audit-ready documentation
- Map vulnerabilities and remediation pathways to: ISO 27001 controls, SOC 2 Trust Services Criteria, ISO 27701 privacy controls, and ISO 27017 cloud security controls
- Partner closely with engineering, security, and compliance functions
- Present findings clearly to executive leadership
Benefits
- Comprehensive private medical coverage
- Support for mental health expenses
- Life insurance options
- Attractive compensation package