Beijing, China
Solution Architect Intern, AI in Industry - 2025

NVIDIA is the leading company in AI computing. At NVIDIA, our employees are passionate about AI, HPC, visual computing, and gaming. Our Solution Architect team focuses on bringing NVIDIA's newest technologies into different industries. We help design the architecture of AI computing platforms and analyze AI and HPC applications to deliver value to customers. This role will be instrumental in leveraging NVIDIA's cutting-edge technologies to optimize open-source and proprietary large models, create AI workflows, and support our customers in implementing advanced AI solutions.

What you’ll be doing:

Drive the implementation and deployment of NVIDIA Inference Microservice (NIM) solutions

Use NVIDIA NIM Factory Pipeline to package optimized models (including LLM, VLM, Retriever, CV, OCR, etc.) into containers providing standardized API access

Refine NIM tools for the community and help community members build their own performant NIMs

Design and implement agentic AI solutions tailored to customer business scenarios using NIMs

Deliver technical projects, demos, and customer support tasks

Provide technical support and guidance to customers, facilitating the adoption and implementation of NVIDIA technologies and products 

Collaborate with cross-functional teams to enhance and expand our AI solutions
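To illustrate the "standardized API access" that packaged NIM containers provide, the sketch below assembles an OpenAI-compatible chat-completion request. This is a minimal illustration, not an official NVIDIA example; the endpoint URL and model name are placeholder assumptions for a hypothetical local deployment.

```python
# Sketch: building a request for a NIM container's OpenAI-compatible
# chat-completions endpoint. URL, port, and model name are placeholders.
import json

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local NIM endpoint

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct", "What is a NIM?")
body = json.dumps(payload)

# Sending the request requires a running NIM container and the `requests` package:
# import requests
# resp = requests.post(NIM_URL, json=payload, timeout=30)
# print(resp.json()["choices"][0]["message"]["content"])
```

Because every NIM exposes the same request/response schema, client code like this works unchanged across different packaged models.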

What we need to see:

Pursuing a Bachelor's or Master's degree in Computer Science, AI, or a related field; or a PhD candidate in ML infrastructure or data systems for ML

Proficiency in at least one inference framework (e.g., TensorRT, ONNX Runtime, PyTorch) 

Strong programming skills in Python or C++ 

Excellent problem-solving skills and ability to troubleshoot complex technical issues 

Demonstrated ability to collaborate effectively across diverse, global teams, adapting communication style while maintaining clear, constructive professional interactions

Ways to stand out from the crowd:

Expertise in model optimization techniques, particularly using TensorRT 

Familiarity with disaggregated LLM Inference

CUDA optimization experience; extensive experience designing and deploying large-scale HPC and enterprise computing systems

Familiarity with mainstream inference engines (e.g., vLLM, SGLang)

Experience with DevOps/MLOps tools and practices such as Docker, Git, and CI/CD
