Canary Wharfian - Online Investment Banking & Finance Community.

Job Details

Bulge Bracket Investment Banks

Platform Software Engineer III - ETL / AWS / Python / PySpark

at J.P. Morgan

Experienced · No visa sponsorship

Posted 17 days ago


Senior software engineer on JPMorgan Chase's Asset & Wealth Management Technology team responsible for designing and delivering scalable ETL and data engineering solutions using Python and PySpark on AWS. Build and implement data pipelines, leverage infrastructure-as-code to orchestrate and monitor pipelines, and create programmatic cloud compute resources for large-scale data ingestion and distribution. Produce architecture and design artifacts, write secure production-grade code, and contribute to CI/CD and DevOps practices for data platforms. Role requires strong experience with cloud services (AWS), data modeling, ETL/warehousing, and tools such as Terraform, Snowflake/Databricks, and big data services.

Compensation
Not specified

Currency: Not specified

City
Jersey City
Country
United States

Full Job Description

Location: Jersey City, NJ, United States

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Platform Software Engineer III - ETL / AWS / Python / PySpark at JPMorganChase within the Asset and Wealth Management Technology Team, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

 

  • Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Designs, develops, and implements scalable data pipelines and ETL batches using Python/PySpark on AWS
  • Uses infrastructure as code tools to build applications to orchestrate and monitor data pipelines, create and manage on-demand compute resources on cloud programmatically, and create frameworks to ingest and distribute data at scale
  • Stays up-to-date with emerging technologies and industry trends to drive innovation and continuous improvement
  • Leverages AI coding assist tools to accelerate software development, testing, and delivery
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Adds to team culture of diversity, opportunity, inclusion, and respect
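The pipeline responsibilities above follow the standard extract-transform-load pattern. As a rough illustration only (the posting itself contains no code, and a production version would use PySpark DataFrames reading from and writing to AWS services), here is a minimal stdlib-only Python sketch with hypothetical data and field names:

```python
import csv
import io
import json

# Hypothetical raw input; in the role described, this would come from S3.
RAW = """account,balance
A1,100
A2,-5
A3,250
"""

def extract(text):
    # Extract: parse raw CSV rows into dicts.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: cast types and drop invalid records (negative balances here).
    out = []
    for r in rows:
        bal = float(r["balance"])
        if bal >= 0:
            out.append({"account": r["account"], "balance": bal})
    return out

def load(rows):
    # Load: serialize to JSON lines; a real pipeline would write to a warehouse.
    return "\n".join(json.dumps(r) for r in rows)

result = load(transform(extract(RAW)))
```

In a PySpark pipeline the same three stages map onto `spark.read`, DataFrame transformations, and `df.write`, with orchestration and monitoring handled by the infrastructure-as-code tooling the bullets mention.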

 

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification on software engineering concepts and 3+ years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability
  • Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark
  • Proven experience with cloud platforms such as AWS, Azure, or Google Cloud
  • Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts
  • Experience or good knowledge of cloud native ETL platforms like Snowflake and/or Databricks
  • Experience with big data technologies and services such as AWS EMR, Redshift, Lambda, and S3
  • Proven experience with efficient Cloud DevOps practices and CI/CD tools such as Jenkins or GitLab for data engineering platforms
  • Good knowledge of SQL and NoSQL databases, including performance tuning and optimization
  • Experience with declarative infrastructure provisioning tools such as Terraform, Ansible, or CloudFormation
  • Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
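The SQL performance tuning expectation above can be illustrated with a small stdlib `sqlite3` sketch (the table and column names are hypothetical, not from the posting): `EXPLAIN QUERY PLAN` shows an equality filter turning from a full table scan into an index search once an index exists.

```python
import sqlite3

# Hypothetical trades table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [(f"A{i % 100}", i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM trades WHERE account = 'A7'"
before = plan(query)  # full table scan: no usable index yet
conn.execute("CREATE INDEX idx_trades_account ON trades (account)")
after = plan(query)   # index search via idx_trades_account
```

The same scan-versus-seek reasoning carries over to warehouse engines such as Snowflake or Redshift, though the tuning levers there are clustering and distribution keys rather than B-tree indexes.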

 

 

Preferred qualifications, capabilities, and skills

 

  • Knowledge of machine learning model lifecycle, AI coding assist tools, and cloud-native MLOps pipelines and frameworks 
  • Familiarity with data visualization tools and data integration patterns

 

Seeking a platform engineer with strong experience in Python, PySpark, ETL, and AWS.
