Canary Wharfian - Online Investment Banking & Finance Community.

Product Manager, Model Serving – AI/ML Solutions Team

Experienced
No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 8 days ago

**Product Manager, Model Serving – AI/ML Solutions Team**

Drive AI/ML innovation in our dynamic New York team. As a Product Manager, you'll shape model serving infrastructure, guiding its end-to-end lifecycle and ensuring top-tier client experiences. Key responsibilities include:
  • Developing product strategy and roadmaps for model serving capabilities
  • Managing discovery efforts and market research for ML deployment needs
  • Overseeing product backlogs for real-time and batch inference
  • Enabling cross-functional teams to deliver high-value ML platforms

This role requires 5+ years of product management experience in AI/ML, with expertise in ML lifecycle stages, delivery management, and continuous improvement. A strong technical background is essential, including experience with AWS, Docker, Kubernetes, and JIRA. Proficiency in Agile methodologies and release management is also key.

Compensation
Not specified

Currency: Not specified

City
New York City
Country
United States

Full Job Description

Location: New York, NY, United States

You enjoy shaping the future of product innovation as a core leader, driving value for customers, guiding successful launches, and exceeding expectations. Join our dynamic AI/ML Solutions team and make a meaningful impact by delivering high-quality model serving infrastructure and capabilities that empower data scientists, engineers, and business stakeholders alike.

As a Product Manager in Model Serving, you are an integral part of the team that innovates new AI/ML product offerings and leads the end-to-end product life cycle. You will play a crucial role in developing and implementing our enterprise model serving platform capabilities spanning model deployment, inference infrastructure, and lifecycle management. Utilizing your deep understanding of how to get a product off the ground, you guide the successful launch of ML platform features, gather crucial feedback, and ensure top-tier client experiences. With a strong commitment to scalability, resiliency, and stability, you collaborate closely with cross-functional teams including ML engineers, data scientists, and platform engineers to deliver high-quality products that exceed customer expectations.

 
Job responsibilities
  • Develops a product strategy and product vision for model serving capabilities that delivers measurable value to internal and external customers across the AI/ML lifecycle
  • Manages discovery efforts and market research to uncover model deployment and inference needs, integrating insights into a prioritized product roadmap
  • Owns, maintains, and develops a product backlog that enables development teams to support the overall strategic roadmap for model serving, including real-time and batch inference
  • Builds the framework and tracks key success metrics such as inference latency, throughput, model availability, and cost efficiency
  • Leads end-to-end product delivery processes including intake, dependency management, release management, product operationalization, delivery feasibility decision-making, and product performance reporting, while escalating opportunities to improve efficiencies and functional coordination
  • Carries out in-depth quantitative and qualitative analysis to support business cases for model serving investments and leadership decision-making
  • Leads the completion of change management activities across functional partners and ensures adherence to the firm's risk, controls, compliance, and regulatory requirements including model risk governance standards
  • Communicates proposed solutions and insights effectively to stakeholders across ML engineering, data science, risk, and business teams
  • Promotes adherence to industry standards and best practices for ML model deployment, serving infrastructure, and MLOps (e.g., model registries, CI/CD for ML, feature stores)
  • Stays informed on industry trends and emerging technologies in model serving, LLM inference optimization, ML observability, and AI platform architecture
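For illustration, the success metrics named in the responsibilities above (inference latency, throughput, model availability) could be rolled up from request logs roughly like this. This is a hedged sketch only; the record fields and function names are hypothetical and not taken from any J.P. Morgan system.

```python
from dataclasses import dataclass

@dataclass
class InferenceRecord:
    latency_ms: float   # end-to-end serving latency for one request
    success: bool       # whether the request returned a prediction

def serving_metrics(records, window_s):
    """Summarise serving health over one reporting window:
    p99 latency (successful requests), throughput, and availability."""
    latencies = sorted(r.latency_ms for r in records if r.success)
    ok = len(latencies)
    p99 = latencies[min(ok - 1, int(0.99 * ok))] if ok else None
    return {
        "p99_latency_ms": p99,
        "throughput_rps": len(records) / window_s,
        "availability": ok / len(records) if records else 0.0,
    }
```

A product manager would typically track these per model version and per endpoint, so regressions can be tied to a specific release.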
Required qualifications, capabilities, and skills
  • 5+ years of experience or equivalent expertise in product management, with exposure to AI/ML platforms, MLOps, or a closely related domain
  • Advanced knowledge of the product development life cycle, design, and data analytics, with specific familiarity with ML model lifecycle stages (training, validation, deployment, monitoring, retraining)
  • Proven ability to lead product life cycle activities including discovery, ideation, strategic development, requirements definition, and value management
  • Demonstrated ability to execute operational management and change readiness activities in a fast-moving AI/ML environment
  • Strong understanding of delivery and a proven track record of implementing continuous improvement processes for ML platform capabilities
  • Strong influencing and partnership/collaboration skills to drive cross-functional teams including data scientists, ML engineers, and platform architects to build better solutions and execute product go-live plans
  • Experience in product or platform-wide release management, deployment processes, and strategies for ML systems; must be able to build solutions from the ground up
  • Strong technical background with experience working on AWS, containerized workloads (e.g., Docker, Kubernetes), and model serving frameworks; experience with JIRA and Agile methodologies
  • Foundational understanding of ML model serving concepts including online vs. batch inference, model versioning, shadow deployments, and canary releases
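For readers unfamiliar with the serving concepts in the last qualification above, here is a minimal sketch of canary routing (a fixed traffic fraction goes to the new version) and shadow deployment (the new model sees real traffic but its answers are never returned). All version labels and function names are illustrative assumptions, not any firm's actual API.

```python
import hashlib

def route_version(request_id: str, canary_fraction: float) -> str:
    """Canary release: deterministically send a fixed fraction of
    traffic to the new model version. Hashing the request id keeps
    routing stable across retries for the same request."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < canary_fraction * 100 else "v1-stable"

def shadow_call(primary, shadow, payload):
    """Shadow deployment: the shadow model receives the same payload,
    but only the primary model's answer reaches the caller."""
    result = primary(payload)
    try:
        shadow(payload)          # comparison traffic only
    except Exception:
        pass                     # shadow failures never affect clients
    return result
```

Hash-based routing is one common choice because it avoids storing per-request state; a real platform would also log both models' outputs for offline comparison.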
Preferred qualifications, capabilities, and skills
  • Demonstrated prior experience working in a highly matrixed, complex organization with multiple ML and data platform stakeholders
  • Practical experience with modern ML serving and orchestration technologies such as Ray Serve, Seldon, or Databricks
  • Experience with ML observability, model monitoring, and drift detection frameworks
  • Knowledge of LLM inference optimization techniques such as quantization, batching strategies, and GPU resource management
  • Familiarity with feature stores, model registries, and end-to-end MLOps pipelines
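The "batching strategies" mentioned among the LLM inference optimization techniques above can be sketched as a simple greedy packer: queued requests are grouped into batches bounded by both a batch-size limit and a total token budget. This is an illustrative toy, not a production scheduler; all names are assumptions.

```python
def greedy_batches(queue, max_batch, max_tokens):
    """Pack queued requests (given as token counts) into batches,
    closing a batch when either the size cap or token budget is hit."""
    batches, current, tokens = [], [], 0
    for req_tokens in queue:
        if current and (len(current) == max_batch
                        or tokens + req_tokens > max_tokens):
            batches.append(current)   # close the full batch
            current, tokens = [], 0
        current.append(req_tokens)
        tokens += req_tokens
    if current:
        batches.append(current)       # flush the final partial batch
    return batches
```

Production LLM servers typically use continuous (in-flight) batching instead, admitting new requests between decode steps, but the same two constraints (batch size and token budget) drive GPU utilization either way.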

 
Be a leader committed to understanding customer needs with your advanced knowledge of product development, design, and data analytics
