LOG IN
SIGN UP
Canary Wharfian - Online Investment Banking & Finance Community.
Sign In
OR continue with e-mail and password
E-mail address
Password
Don't have an account?
Reset password
Join Canary Wharfian
OR continue with e-mail and password
E-mail address
Username
Password
Confirm Password
How did you hear about us?
By signing up, you agree to our Terms & Conditions and Privacy Policy.

Job Details

J.P. Morgan logo
Bulge Bracket Investment Banks

Lead Security Engineer -AI Red Team

at J.P. Morgan

ExperiencedNo visa sponsorship

Posted 15 days ago

No clicks

Senior security engineer role focused on designing and executing AI/LLM red teaming, adversarial testing, and secure architecture for generative AI and ML systems. You will develop red teaming methodologies, threat models, playbooks, and automation while collaborating with product, data science, cyber, legal, and risk stakeholders. The role requires hands-on cloud-native AI experience, vulnerability testing, IaC, Python scripting, and the ability to drive enterprise-level security solutions and continuous testing programs.

Compensation
Not specified

Currency: Not specified

City
Dallas
Country
United States

Full Job Description

Location: Plano, TX, United States

Take on a crucial role where you'll be a key part of a high-performing team delivering secure software solutions. Make a real impact as you help shape the future of software security at one of the world's largest and most influential companies. As a Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML, you are an integral part of team that works to deliver software solutions that satisfy pre-defined functional and user requirements with the added dimension of preventing misuse, circumvention, and malicious behavior. As a core technical contributor, you are responsible for carrying out critical technology solutions with tamper-proof, audit defensible methods across multiple technical areas within various business functions.  

Job Responsibilities
  • Develop and enhance security strategies, red teaming programs, and solution designs, while troubleshooting technical issues and creating scalable solutions.
  • Design secure, high-quality AI and software architectures, reviewing and challenging designs and code to ensure adversarial resilience.
  • Reduce AI and LLM security vulnerabilities by adhering to industry standards and emerging AI safety research, evolving policies, testing protocols, and controls.
  • Collaborate with stakeholders across product, data science, cyber, legal, and risk to understand AI use cases and recommend modifications during periods of heightened vulnerability or regulatory change.
  • Conduct discovery, threat modeling, and adversarial testing on generative AI, RAG pipelines, and ML systems to identify vulnerabilities such as prompt injection, jailbreaking, and data poisoning.
  • Provide guidance on secure design, logging, monitoring, and compensating controls for AI applications and platforms.
  • Define and implement AI red teaming methodologies, playbooks, and success metrics, establishing mechanisms for continuous testing and safe rollout of new AI models and features.
  • Work with platform and cloud security teams to ensure secure infrastructure configuration and alignment with enterprise security architecture.
  • Engage with external researchers, vendors, and standards bodies to track emerging AI threats and bring best practices into the organization.
  • Foster a team culture of diversity, equity, inclusion, and respect.
  • Collaborate within a cross-functional team to develop relationships, influence senior stakeholders, and drive alignment on AI risk tolerance and mitigation priorities.
Required Qualifications, Capabilities, and Skills
  • Formal training or certification in Public Cloud environment concepts and advanced hands-on experience with cloud-native AI services (e.g., Bedrock).
  • Experience with threat modeling, discovery, vulnerability, and penetration testing (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) and foundational cybersecurity concepts such as IAM, Authentication, OIDC, SAML.
  • Practical experience with Infrastructure as Code (IaC) solutions like Terraform and CloudFormation.
  • Proficiency in Python scripting.
  • Strong understanding of AI/ML concepts and trends, with knowledge of AI red teaming foundational concepts to design and implement exercises for complex AI architectures.
  • Ability to conceptualize, design, validate, and communicate creative technical solutions to enterprise-level security problems, including building internal tools, dashboards, and automation for red teaming activities.
Preferred Qualifications, Capabilities, and Skills
  • Expertise in planning, designing, and implementing AI red teaming exercises and enterprise-level security solutions for generative AI, LLMs, and ML systems.
  • Experience with specialized AI security/red teaming tools and frameworks (e.g., PyRIT, Garak, custom LLM evaluation harnesses) and contributions to AI security or open-source security projects.
Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML

Job Details

J.P. Morgan logo
Bulge Bracket Investment Banks

15 days ago

clicks

Lead Security Engineer -AI Red Team

at J.P. Morgan

ExperiencedNo visa sponsorship

Not specified

Currency not set

City: Dallas

Country: United States

Senior security engineer role focused on designing and executing AI/LLM red teaming, adversarial testing, and secure architecture for generative AI and ML systems. You will develop red teaming methodologies, threat models, playbooks, and automation while collaborating with product, data science, cyber, legal, and risk stakeholders. The role requires hands-on cloud-native AI experience, vulnerability testing, IaC, Python scripting, and the ability to drive enterprise-level security solutions and continuous testing programs.

Full Job Description

Location: Plano, TX, United States

Take on a crucial role where you'll be a key part of a high-performing team delivering secure software solutions. Make a real impact as you help shape the future of software security at one of the world's largest and most influential companies. As a Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML, you are an integral part of team that works to deliver software solutions that satisfy pre-defined functional and user requirements with the added dimension of preventing misuse, circumvention, and malicious behavior. As a core technical contributor, you are responsible for carrying out critical technology solutions with tamper-proof, audit defensible methods across multiple technical areas within various business functions.  

Job Responsibilities
  • Develop and enhance security strategies, red teaming programs, and solution designs, while troubleshooting technical issues and creating scalable solutions.
  • Design secure, high-quality AI and software architectures, reviewing and challenging designs and code to ensure adversarial resilience.
  • Reduce AI and LLM security vulnerabilities by adhering to industry standards and emerging AI safety research, evolving policies, testing protocols, and controls.
  • Collaborate with stakeholders across product, data science, cyber, legal, and risk to understand AI use cases and recommend modifications during periods of heightened vulnerability or regulatory change.
  • Conduct discovery, threat modeling, and adversarial testing on generative AI, RAG pipelines, and ML systems to identify vulnerabilities such as prompt injection, jailbreaking, and data poisoning.
  • Provide guidance on secure design, logging, monitoring, and compensating controls for AI applications and platforms.
  • Define and implement AI red teaming methodologies, playbooks, and success metrics, establishing mechanisms for continuous testing and safe rollout of new AI models and features.
  • Work with platform and cloud security teams to ensure secure infrastructure configuration and alignment with enterprise security architecture.
  • Engage with external researchers, vendors, and standards bodies to track emerging AI threats and bring best practices into the organization.
  • Foster a team culture of diversity, equity, inclusion, and respect.
  • Collaborate within a cross-functional team to develop relationships, influence senior stakeholders, and drive alignment on AI risk tolerance and mitigation priorities.
Required Qualifications, Capabilities, and Skills
  • Formal training or certification in Public Cloud environment concepts and advanced hands-on experience with cloud-native AI services (e.g., Bedrock).
  • Experience with threat modeling, discovery, vulnerability, and penetration testing (e.g., MITRE ATLAS, OWASP Top 10 for LLMs) and foundational cybersecurity concepts such as IAM, Authentication, OIDC, SAML.
  • Practical experience with Infrastructure as Code (IaC) solutions like Terraform and CloudFormation.
  • Proficiency in Python scripting.
  • Strong understanding of AI/ML concepts and trends, with knowledge of AI red teaming foundational concepts to design and implement exercises for complex AI architectures.
  • Ability to conceptualize, design, validate, and communicate creative technical solutions to enterprise-level security problems, including building internal tools, dashboards, and automation for red teaming activities.
Preferred Qualifications, Capabilities, and Skills
  • Expertise in planning, designing, and implementing AI red teaming exercises and enterprise-level security solutions for generative AI, LLMs, and ML systems.
  • Experience with specialized AI security/red teaming tools and frameworks (e.g., PyRIT, Garak, custom LLM evaluation harnesses) and contributions to AI security or open-source security projects.
Lead Security Engineer at JPMorgan Chase within the Cybersecurity & Technology Controls for AI/ML