Canary Wharfian - Online Investment Banking & Finance Community.

PySpark Data Engineer III - Python/Java/SQL

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 13 days ago

PySpark Data Engineer III: Design and deliver secure, scalable data solutions at JPMorgan Chase. Develop, test, and maintain data pipelines and architectures across multiple technical areas. Apply Python, Java, SQL, and Spark skills. Collaborate in agile teams and foster an inclusive culture. Requires 3+ years' applied software engineering experience.

Compensation: Not specified (USD)

City: Boston
Country: United States

Full Job Description

Location: Boston, MA, United States

Be part of a dynamic team where your distinctive skills will contribute to a winning culture.

As a Data Engineer III at JPMorganChase within the Commercial & Investment Bank, you serve as a seasoned member of an agile team, designing and delivering trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.
 

 

Job responsibilities

  • Execute, design, develop, and troubleshoot software solutions, applying innovative thinking to solve complex technical challenges
  • Write secure, high-quality production code and maintain robust algorithms that integrate seamlessly with enterprise systems, using Python, Java, agentic AI, and coding assistants
  • Produce architecture and design artifacts for complex applications, ensuring all design constraints are met throughout software development
  • Gather, analyze, and synthesize large, diverse data sets to develop visualizations and reporting to enable data-driven decision-making
  • Design and implement robust data ingestion & curation pipelines to bring diverse datasets into the cloud / Databricks
  • Contribute to software engineering communities of practice and participate in events exploring new and emerging technologies
  • Foster a team culture of diversity, equity, inclusion, and respect
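
The ingestion-and-curation responsibility above can be sketched in plain Python. In practice this work would use PySpark on Databricks; the stdlib-only sketch below just illustrates the extract → curate → load shape, and every name in it (the feed, the field names, the functions) is hypothetical:

```python
import csv
import io

# Hypothetical raw feed: in practice this would be files landing in cloud storage.
RAW_FEED = """trade_id,notional,currency
T1,1000000,USD
T2,,USD
T3,250000,usd
"""

def extract(raw: str) -> list[dict]:
    """Parse the raw CSV feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def curate(records: list[dict]) -> list[dict]:
    """Drop incomplete rows and normalise fields."""
    curated = []
    for rec in records:
        if not rec["notional"]:  # reject rows missing a notional
            continue
        curated.append({
            "trade_id": rec["trade_id"],
            "notional": int(rec["notional"]),
            "currency": rec["currency"].upper(),  # normalise currency codes
        })
    return curated

def load(records: list[dict]) -> int:
    """Stand-in for a write to a cloud table; returns rows written."""
    return len(records)

rows_written = load(curate(extract(RAW_FEED)))
print(rows_written)  # 2 of the 3 raw rows survive curation
```

The same three stages map onto a typical Spark pipeline: `spark.read` for extract, DataFrame transformations for curation, and a table write for load.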

 

Required qualifications, capabilities, and skills

  • Formal training or certification on Software Engineering concepts and 3+ years applied experience
  • Practical experience in system design, application development, testing, and ensuring operational stability
  • Strong proficiency in one or more programming languages, including Python, Java, and SQL, as well as Spark
  • Experience developing, debugging, and maintaining code in a large corporate environment, using modern programming and database querying languages
  • Comprehensive understanding of the Software Development Life Cycle (SDLC)
  • Solid grasp of agile methodologies, including CI/CD, application resiliency, and security best practices
  • Demonstrated expertise in software applications and technical processes within disciplines such as data platforms, cloud, Agentic AI frameworks and AI/ML

 

Preferred qualifications, capabilities, and skills

  • Experience in data engineering, with a strong understanding of data modeling and ETL processes
  • Experience with Databricks, CockroachDB, Apache Iceberg, Trino, Spring Boot, Kafka, RESTful APIs, and AWS
  • Familiarity with agentic AI frameworks such as ADK, LangChain/LangGraph, AutoGen, or CrewAI

 

Develop, test, and maintain critical data pipelines and architectures across multiple technical areas
