Job Description

Job Title: Adversarial Tester – LLM

Department: Operations

Location: Remote - Global

Reports To: Associate Director, Gen AI Operations


Summary

We are seeking an adversarial tester who specializes in breaking, or deliberately “failing,” large language model (LLM) systems through creative prompt engineering, red‑teaming, and stress testing. The goal of this role is to systematically expose safety, reliability, and robustness gaps so they can be measured, fixed, and prevented in future releases.


Key Responsibilities

  • Design and execute adversarial prompt campaigns (jailbreaks, prompt injection, data exfiltration, model misalignment, policy evasion) to deliberately cause LLM failures in realistic scenarios.
  • Systematically “fail the model” by discovering new failure modes, reproducing them reliably, and documenting them.

Ready to Apply?

Take the next step in your AI career. Submit your application to Firstsource today.
