Canary Wharfian - Online Investment Banking & Finance Community.

Software Engineer III - PySpark/AWS

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 11 days ago

**Software Engineer III - PySpark/AWS** Lead data engineering efforts for AI-enabled systems at JPMorganChase. Build, optimize, and manage data pipelines, retrieval systems, and tool-use infrastructure that empower autonomous AI agents. Key responsibilities include ensuring data quality and security, deploying code via CI/CD pipelines, and mentoring junior engineers. Required qualifications include 3+ years of applied software engineering experience, expert-level Python/PySpark skills, extensive AWS and Databricks experience, and expertise in big data and data warehousing concepts. Foster a culture of engineering excellence within an agile team.

Compensation
Not specified (USD)

City
Wilmington
Country
United States

Full Job Description

Location: Wilmington, DE, United States

We have an exciting and rewarding opportunity for you to take your data engineering career to the next level.

 

As a Software Engineer III - PySpark/AWS at JPMorganChase within the Corporate Sector-Global Finance team, you will be a key member of an agile team, responsible for building and delivering AI-enabled data products that are secure, stable, and scalable. In this role, you will develop data infrastructure, tool integrations, and retrieval systems that enable AI agents to access, interpret, and act on enterprise data in support of the firm's business goals. You will work alongside senior engineers, grow your expertise in agentic AI data engineering, and contribute to a culture of engineering excellence.

Job Responsibilities

  • Building and optimizing data pipelines and workflows that serve as the backbone for agentic AI systems, ensuring agents have reliable, real-time access to high-quality, structured and unstructured data
  • Developing data retrieval and indexing layers that enable AI agents to autonomously search, query, and synthesize information across multiple data sources
  • Building and maintaining tool-use infrastructure: APIs, data services, and function endpoints that AI agents invoke to execute tasks, retrieve data, and interact with enterprise systems
  • Implementing and enforcing best practices for data management, ensuring data quality, security, and compliance, including governance of data consumed and generated by autonomous AI agents
  • Hands-on development of secure, high-quality production code following AWS best practices, and deploying efficiently using CI/CD pipelines
  • Building orchestration and state management layers that support multi-step agent workflows, including memory, context persistence, and task chaining
  • Writing and reviewing code daily, conducting thorough code reviews, and raising the technical bar across the team
  • Mentoring and guiding junior and mid-level engineers through pairing, code reviews, and technical coaching
  • Collaborating with product owners, data scientists, and business stakeholders to translate business requirements into working, production-ready agentic AI solutions
  • Evaluating and adopting emerging agentic AI frameworks, tools, and data engineering practices to continuously improve the team's development capabilities
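As a rough illustration of the orchestration, tool-use, and task-chaining responsibilities above, the sketch below wires a tool registry to a shared context so each step's output feeds the next and every invocation is logged. All names and the toy tools are hypothetical, not JPMorganChase code or any specific agent framework:

```python
# Minimal sketch of a multi-step agent workflow: a registry of callable
# tools, a context that persists memory across steps, and task chaining.
# Tool names and data are illustrative stand-ins.
from typing import Any, Callable, Dict, List, Optional, Tuple

class AgentContext:
    """Memory and invocation history shared across workflow steps."""
    def __init__(self) -> None:
        self.memory: Dict[str, Any] = {}
        self.history: List[str] = []

class ToolRegistry:
    """Maps tool names to the callables an agent is allowed to invoke."""
    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, ctx: AgentContext, **kwargs: Any) -> Any:
        result = self._tools[name](**kwargs)
        ctx.history.append(name)  # log each tool call for observability
        return result

def run_workflow(steps: List[Tuple[str, Optional[str], str]],
                 registry: ToolRegistry, ctx: AgentContext) -> AgentContext:
    """Run (tool, input_key, output_key) steps in order, chaining results
    through the context's memory."""
    for tool, input_key, output_key in steps:
        kwargs = {"data": ctx.memory[input_key]} if input_key else {}
        ctx.memory[output_key] = registry.invoke(tool, ctx, **kwargs)
    return ctx

# Two chained steps: fetch rows, then summarize them via shared memory.
registry = ToolRegistry()
registry.register("fetch_rows", lambda: [10, 20, 30])      # stand-in data source
registry.register("summarize", lambda data: sum(data))

ctx = run_workflow(
    [("fetch_rows", None, "rows"), ("summarize", "rows", "total")],
    registry, AgentContext())
print(ctx.memory["total"])  # 60
```

A production version would add error handling, schema validation on tool inputs/outputs, and durable context storage, but the step-chaining shape stays the same.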

 

Required Qualifications, Capabilities, and Skills

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • Expert-level programming skills in Python/PySpark with a strong portfolio of production-grade code
  • Extensive hands-on experience with Databricks and the AWS cloud ecosystem, including AWS Glue, S3, SQS/SNS, Lambda
  • Deep expertise with Spark and SQL
  • Strong hands-on experience with Lakehouse/Delta Lake architecture, application development, testing, and ensuring operational stability; experience with Snowflake, Terraform, and LLMs; familiarity with data observability, data quality, query optimization, and cost optimization
  • In-depth knowledge of Big Data and data warehousing concepts at enterprise scale
  • Extensive experience with CI/CD processes and automated testing frameworks
  • Solid understanding of agile methodologies, including DevOps practices, application resiliency, and security measures
  • Understanding of agentic AI concepts: how autonomous AI agents plan, reason, use tools, and execute multi-step workflows, and the data infrastructure required to support them
  • Experience building APIs, data services, and retrieval systems that serve as the connective tissue between AI agents and enterprise data
  • Demonstrated ability to lead by example through code, mentor engineers, and drive delivery across the team

 

Preferred Qualifications, Capabilities, and Skills

  • Experience with agentic AI frameworks (e.g., LangGraph, AutoGen, CrewAI, OpenAI Assistants API) and understanding of how data engineering underpins agent orchestration
  • Familiarity with tool-use and function-calling patterns for LLM-based agents, including building and exposing APIs and data endpoints that agents can invoke autonomously
  • Experience with vector databases (e.g., Pinecone, FAISS, Chroma) and embedding workflows for powering agent memory, semantic search, and retrieval-augmented generation (RAG)
  • Exposure to agent memory and state management patterns: short-term context windows, long-term persistent memory stores, and conversation/task history management
  • Familiarity with guardrails and safety frameworks for autonomous AI systems, including input/output validation, action approval workflows, and human-in-the-loop controls
  • Understanding of observability and monitoring for agentic systems: tracing agent decision paths, logging tool invocations, and debugging multi-step autonomous workflows
  • Understanding of responsible AI principles, particularly around autonomous decision-making, data provenance, and auditability of agent actions
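To give a flavor of the retrieval side of the RAG workflows mentioned above, the snippet below ranks documents by cosine similarity against a query embedding. The document names and three-dimensional vectors are toy stand-ins for real model embeddings, and no vector database (Pinecone, FAISS, Chroma) is involved; it only shows the ranking step those systems perform at scale:

```python
# Toy retrieval step for a RAG pipeline: rank documents by cosine
# similarity to a query embedding. Vectors are illustrative stand-ins
# for real embedding-model output.
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec: List[float],
             index: Dict[str, List[float]], k: int = 2) -> List[str]:
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                    reverse=True)
    return ranked[:k]

# Hypothetical document embeddings (3-d for readability).
index = {
    "trade_policy.md": [0.9, 0.1, 0.0],
    "onboarding.md":   [0.1, 0.8, 0.2],
    "risk_limits.md":  [0.7, 0.2, 0.1],
}
top = retrieve([1.0, 0.0, 0.0], index, k=2)
print(top)  # ['trade_policy.md', 'risk_limits.md']
```

In practice the index would hold high-dimensional embeddings in a vector store, and the retrieved documents would be passed to the LLM as grounding context.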
Excellent opportunity to make an impact on a mission-critical team using Databricks, PySpark, and AI!
