We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products. This role sits at the intersection of adversarial machine learning, enterprise security architecture, and governance.
Responsibilities
- Design and lead adversarial testing of LLM and AI-driven systems
- Conduct threat modelling across the model, infrastructure, and data layers
- Translate findings into structured, audit-ready documentation
- Map vulnerabilities and remediation pathways to control frameworks
- Partner closely with engineering, security, and compliance functions
- Present findings clearly to executive leadership
Benefits
- Comprehensive private medical coverage
- Support for mental health expenses
- Life insurance options