Red Teamer / Pentester
As a Red Teamer at Ctrl+G, you will design and execute the offensive evaluations that measure what AI models can actually do in cybersecurity. You will build realistic attack scenarios, chain exploits, and craft challenges that test whether models can find vulnerabilities, compromise infrastructure, and operate like real adversaries.
Your findings become the data. Every exploit chain you design, every attack scenario you build, feeds directly into the training pipelines and benchmarks that teach models to defend. You are the offensive edge of a company whose mission is defense.
Representative projects
- Building offensive benchmarks—CTF-style challenges, network attack simulations, and multi-step exploit chains that measure what models can actually compromise.
- Designing realistic attack scenarios that test model capabilities: vulnerability discovery, exploit generation, lateral movement, privilege escalation.
- Cataloging vulnerability patterns in AI-generated code and turning them into structured training data.
- Collaborating with frontier labs to evaluate their models' offensive capabilities and publishing findings that advance the field.
You may be a good fit if you
- Have deep offensive security experience: penetration testing, exploit development, vulnerability research, or CTF competitions at a high level.
- Thrive in a fast-paced environment and practice radical candor (we don't like BS).
- Think like an attacker by default and can articulate exactly how and why something breaks.
- Understand that offense informs defense—and that the best training data comes from real attacks.
Apply for this role
Send us your details and we'll be in touch if there's a fit.
Not the right role?
Check out our other open positions or reach out directly.