Job Description

The Opportunity

We are building an elite AI Red Team to stress-test and harden enterprise-scale AI products deployed to some of the world’s largest organizations.

This is not a theoretical research role.

This role sits at the intersection of adversarial machine learning, enterprise security architecture, and governance. You will lead the design and execution of structured red team engagements across multiple AI systems — and translate technical risk into enterprise-aligned assurance.

If you have ever been frustrated watching AI risk findings remain stuck in a slide deck with no operational impact, this role is designed to change that.

What You’ll Do

  • Design and lead adversarial testing of LLMs and AI-driven systems
  • Conduct threat modelling across model, infrastructure, and data layers
  • Execute and oversee testing for:
      • Prompt injection
      • Jailbreaking
      • Model exploitation
      • Data leakage / extract...

Ready to Apply?

Take the next step in your AI career. Submit your application to C-Serv today.