We’re looking for a Principal Engineer to lead the technical strategy and architecture for protecting foundation models against misuse—such as jailbreaks, prompt injection, toxic outputs, and custom policy violations. In this role, you’ll apply your expertise in scalable systems design, applied machine learning, and model-level defenses to build core infrastructure that ensures AI systems behave safely and responsibly in production. You’ll set technical direction and drive architectural decisions across a broad surface area of AI safety systems—designing safety interventions, integrating evaluation workflows, and developing models and tooling that detect and prevent harmful or non-compliant behavior. This role is ideal for someone who wants to work at the intersection of model behavior, product safety, and system engineering.
What You’ll Do
- Architect and lead the development of model-level defenses against jailbreaks, prompt injection, and custom policy violations
- Define and drive evaluation strategies, including adversarial testing and stress-testing pipelines, to identify safety weaknesses before deployment
- Set technical direction for scalable mitigation techniques such as safety-focused fine-tuning, prompt shielding, and post-processing methods to reduce harmful or non-compliant outputs
- Collaborate with red teamers and researchers to convert emerging threats into measurable evaluations and system-level safeguards
- Scale and improve human-in-the-loop pipelines for detecting toxic, biased, or non-compliant outputs
- Stay up to date with LLM safety research, jailbreak tactics, and adversarial trends, and apply insights to real-world defenses
What We’re Looking For
- 7+ years of experience in applied machine learning, AI infrastructure, or safety-critical systems, with 3+ years in a senior or staff-level technical leadership role
- Deep understanding of transformer-based architectures and experience building or evaluating safety interventions for LLMs
- Proven expertise in analyzing and addressing adversarial behaviors, edge-case failures, and misuse scenarios
- Demonstrated ability to guide long-term technical strategy, influence organizational direction, and mentor cross-functional teams
- Strong written and verbal communication skills, with experience influencing technical direction at the org or platform level
- Bachelor's, Master's, or PhD in Computer Science, Machine Learning, or a related field
Nice to Have
- Experience applying techniques such as reinforcement learning from human feedback (RLHF), adversarial training, or safety fine-tuning at scale
- Hands-on work designing prompt-level defenses, content filtering systems, or mechanisms to prevent jailbreaks and policy violations
- Contributions to AI safety research, industry standards, or open-source tools related to model robustness, alignment, or evaluation
- Familiarity with model governance frameworks, including safety policies, model cards, red teaming protocols, or risk classification methodologies

A10 Networks is an equal opportunity employer and a VEVRAA federal subcontractor. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. A10 also complies with all applicable state and local laws governing nondiscrimination in employment.

#LI-AN1

Compensation: up to $246K USD