Cyber Researcher
As a cyber researcher at Ctrl+G, you will work on closing the gap between AI offense and defense before it's too late. Models are writing code faster than humans can review it. Attackers have the advantage. Your work will directly shape whether AI becomes a force multiplier for defenders or a catastrophic liability.
You will engage with questions like: How do we measure what models can actually secure, exploit, and defend? How do we generate training data that teaches models to write 100% secure code? How do we build autonomous cyber defenses that scale with the threat?
Representative projects
- Building cyber capabilities benchmarks: CTF-style challenges, network attack simulations, secure coding evaluations, and new types of tests that measure what models can actually do.
- Researching methods that teach models to defend: generating training data, designing realistic environments, shifting the offense-defense equilibrium back toward security.
- Publishing research, delivering findings to frontier labs, advising our products.
Representative research questions
- How do we measure a model's actual security capabilities vs. its theoretical knowledge?
- Can we create training environments that teach models to write unbreakable code?
- How do we build autonomous defenses that scale faster than AI-enabled attacks?
You may be a good fit if you
- Have a strong background in offensive security: found CVEs, analyzed malware, developed pentesting tools, competed in or authored CTF challenges.
- Thrive in a fast-paced environment with radical candor (we don't like BS).
- Understand that security is asymmetric and that we're in a race.
- Believe that losing the productivity gains of AI-generated code to security failures would be a tragedy we can prevent.
Apply for this role
Send us your details and we'll be in touch if there's a fit.
Not the right role?
Check out our other open positions or reach out directly.