Tenable is looking for a Staff Software Engineer to join our Engineering organization and support our mission of efficiently delivering high-quality solutions to our customers' security challenges. You will need deep expertise in building and leading performance engineering strategies in cloud-based environments. You will drive performance test planning, execution, automation, and analysis to ensure our systems scale, perform, and remain reliable under load. As a Staff Engineer, you’ll act as a key driver of quality across multiple scrum teams and initiatives. You’ll evolve scalable, reusable test frameworks, define performance strategy, track SLOs, develop baseline metrics, and influence engineering-wide quality standards and processes.
Your Opportunity:
Technical Leadership
Lead the design and implementation of performance, load, stress, scalability, and reliability tests for cloud-native applications.
Design and evolve scalable, reusable test automation frameworks (UI, API, integration, and performance).
Define and monitor performance benchmarks, SLAs, and key performance indicators (KPIs) such as response time, throughput, resource usage, and scalability for cloud-native applications.
Collaborate closely with development, DevOps, and product teams to identify bottlenecks and optimize performance.
Create a framework and implement a performance test strategy covering baseline benchmarks, code and architectural improvements, and future-proof design decisions for scalable performance optimization (including, but not limited to, cost optimization and throughput monitoring/alerting).
Mentor junior engineers and contribute to performance engineering best practices.
Define the performance strategy for distributed systems, generate baseline metrics, and track SLOs.
Guide adoption of shift-left testing, CI/CD test integration, and service-level quality monitoring.
Hands-on Engineering
Develop and maintain automated test suites in Java, Python, Shell, and Groovy.
Participate in architectural reviews of new features to identify code optimizations, additional benchmarks, and corner cases that require production monitoring and alerting for bottlenecks.
Build in-house tools to support test data management, environment simulation, and reliability.
Perform code reviews, debug automation pipeline failures, pair program, and mentor other engineers.
Monitor SLOs and MTTR for feature releases, drive performance improvements and threshold alerting, and deliver scalable, production-quality code.
Apply strong analytical and debugging skills to interpret large volumes of test data.
Strategy & Vision
Collaborate with Product, Dev, and SRE to embed quality early in the SDLC.
Champion customer-centric quality, ensuring that product reliability and usability meet real-world needs.
Establish quality metrics (test coverage, defect leakage, flakiness, MTTR) and drive data-driven improvements.
Enablement & Mentoring
Upskill QA and developer teams on test design patterns, mocking, and observability.
Serve as the go-to expert for all things related to quality engineering.
Demonstrate excellent communication, leadership, and collaboration skills.
What You’ll Need:
Ability to work at our Columbia, MD headquarters (3 days per week, subject to change).
8+ years of experience in software development/testing with at least 2 years in a technical leadership or staff-level role.
Strong coding skills in one or more languages, especially Java, Python, Groovy, and Shell.
Ability to debug bottlenecks, identify root causes, and fix code.
Strong expertise in observability and diagnostics tools (e.g., Coralogix, Datadog, or Splunk).
Deep expertise in test automation frameworks (e.g., Playwright, Pytest).
Experience in CI/CD pipelines (e.g., Jenkins, GitHub Actions, CircleCI) and cloud-based deployments (AWS, GCP, Azure).
Good understanding of HTTP/HTTPS, WebSocket APIs, REST APIs, JSON, and XML.
Experience in performance testing tools (e.g., JMeter, Locust).
Experience in chaos engineering and resiliency testing.
Proven experience in building tools and libraries to improve test productivity.
And Ideally:
Experience in SaaS or distributed systems performance testing.
Knowledge of containerization (Docker, Kubernetes).
Experience with shift-left testing or production validation tools.
Exposure to testing data pipelines using data validation frameworks such as Great Expectations or Pandera.
#LI-Hybrid
#LI-LP1