Senior Product Manager - Enterprise Inference
Santa Clara, CA, United States
To realize value from AI, neural networks must be deployed for inference, powering applications in the cloud, the datacenter, and at the edge. Common services that rely on AI inference include recommender systems, virtual assistants, large language models, and generative AI. NVIDIA is at the forefront of advancing the latest research and optimizations to make cost-efficient inference of customized models a reality for everyone. To keep pace with this multidimensional field, we seek a passionate product manager who understands inference and its ecosystem. We need a self-starter to continue growing this area and to work with customers to define the future of inference. We're looking for a rare blend of technical and product skills, paired with a passion for groundbreaking technology. If this sounds like you, we would love to learn more about you!
What You'll Be Doing:
Develop NVIDIA's enterprise inference strategy in alignment with NVIDIA's portfolio of AI products and services
Distill insights from strategic customer engagements and define, prioritize, and drive execution of the product roadmap
Collaborate across the organization with machine learning engineers and product teams to introduce new techniques and tools that improve performance, latency, and throughput while optimizing for cost
Build an outstanding developer experience with inference APIs that integrate seamlessly with the modern software development stack and relevant ecosystem partners
Ensure operational excellence and reliability of distributed inference-serving systems: build processes around robust analytics and alerting tooling focused on uptime SLAs and overall QoS
Develop an industry- and workload-focused go-to-market (GTM) strategy and playbook with marketing, sales, and NVIDIA's ecosystem of partners to drive enterprise adoption and establish leadership in inference
What We Need to See:
BS or MS degree in Computer Science, Computer Engineering, or a similar field, or equivalent experience
6+ years of product management, or similar, experience at a technology company
3+ years of experience in building inference software
Solid understanding of Kubernetes and DevOps
Strong communication and interpersonal skills
Ways to Stand Out from the Crowd:
Understanding of modern ML architectures and an intuition for how to optimize their TCO, particularly for inference
Advanced knowledge of NVIDIA Triton Inference Server, TensorRT, or other inference acceleration libraries such as Ray and DeepSpeed
Familiarity with the MLOps ecosystem and experience building integrations with popular MLOps tooling such as MLflow and Weights & Biases
The base salary range is $156,000 - $310,500. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.