Cloud Performance Engineer, Infrastructure
Uber
**About the Role**
We are seeking a highly skilled and motivated Cloud Performance Engineer to join Uber's Fleet Engineering organization. This role sits at the intersection of hardware and software performance engineering, focused on optimizing cloud and bare-metal infrastructure for Uber-scale workloads.
You'll work across multi-cloud, containerized, and bare-metal systems to analyze, tune, and improve performance end-to-end, from the hardware, Linux kernel, and runtimes up to application-level services and distributed systems. This is an opportunity to collaborate with engineers, SREs, cloud partners, and xPU vendors while helping shape the future of Uber's infrastructure. This role is ideal for someone with a hardware/software co-design background who thrives on solving performance challenges.
**What the Candidate Will Do**
1. Analyze and optimize system/application performance across cloud, bare metal, and container workloads.
2. Collaborate with hyperscalers and silicon providers to evaluate emerging server, silicon, and infrastructure technologies for performance impact.
3. Use profiling tools (perf, eBPF, flamegraphs, tracing frameworks) to debug and resolve high-impact performance issues.
4. Contribute to Uber's internal benchmarking and profiling frameworks to deliver real-world performance insights.
5. Partner with SREs and Infra engineers to automate performance analysis and observability, ensuring scalable alerting, telemetry, and diagnostics across services.
6. Drive architectural and tooling decisions that influence how Uber builds, monitors, and scales its systems.
**Basic Qualifications**
1. 3+ years of experience in hardware or systems-level performance engineering, SRE, or backend infrastructure development.
2. Proficiency in one or more programming languages (Golang, Java, Python, or C/C++).
3. Hands-on experience in performance profiling and tuning with tools like flamegraphs, Linux perf, eBPF, or similar.
4. Solid understanding of OS internals, including CPU scheduling, memory management, and networking.
5. Familiarity with Kubernetes/Docker and distributed systems.
6. Understanding of observability concepts (SLIs/SLOs, alerting, telemetry).
7. Ability to work independently, troubleshoot issues, and automate solutions.
**Preferred Qualifications**
1. Experience tuning databases, stream-processing, batch, or ML platforms (e.g., PyTorch, JAX).
2. Familiarity with microservices debugging and distributed tracing (OpenTelemetry, Jaeger).
3. Exposure to hardware performance (Arm, GPUs, accelerators).
4. Experience with CI/CD pipelines and performance regression detection.
5. Prior experience with large-scale service deployments at hyperscaler scale.
For San Francisco, CA-based roles: The base salary range for this role is USD $153,000 per year to USD $170,000 per year. For Sunnyvale, CA-based roles: The base salary range for this role is USD $153,000 per year to USD $170,000 per year. For all US locations, you will be eligible to participate in Uber's bonus program, and may be offered an equity award and other types of compensation. You will also be eligible for various benefits. More details can be found at the following link: https://www.uber.com/careers/benefits.
Uber is proud to be an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know by completing this form: https://docs.google.com/forms/d/e/1FAIpQLSdb_Y9Bv8-lWDMbpidF2GKXsxzNh11wUUVS7fM1znOfEJsVeA/viewform