Canary Wharfian - Online Investment Banking & Finance Community.

Data Engineer II

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 11 days ago


As a Data Engineer II, you will design, build, and maintain secure, scalable data pipelines using AWS and Python/PySpark in support of JPMorgan Chase's business objectives, collaborating within an agile, diverse team to deliver trusted data across multiple business functions.

Compensation
Not specified (USD)

City: Jersey City
Country: United States

Full Job Description

Location: Jersey City, NJ, United States

You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.


 

As a Data Engineer II at JPMorgan Chase within the Consumer and Community Bank - Connected Commerce Technology team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities

 

  • Supports review of controls to ensure sufficient protection of enterprise data
  • Advises on and makes custom configuration changes in one or two tools to generate a product at the business's or customer's request
  • Updates logical or physical data models based on new use cases
  • Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
  • Adds to team culture of diversity, opportunity, inclusion, and respect

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification on data engineering concepts and 2+ years applied experience
  • Experience across the data lifecycle
  • Experience with ETL processes and advanced ETL concepts
  • Advanced at SQL (e.g., joins and aggregations)
  • Experience designing, implementing, and maintaining data pipelines on AWS using Python and PySpark (or, secondarily, Java)
  • Working understanding of NoSQL databases
  • Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
  • Experience customizing a tool's configuration to generate a product
  • Proficiency in Unix scripting and data structures
  • Proficiency in data serialization formats (JSON, Avro, Protobuf, or similar) and big-data storage formats (Parquet, Iceberg, or similar)
  • Proficiency in data processing methodologies (batch, micro-batch, or streaming) and one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.)
  • Proficiency with Agile methodology, TDD or BDD, and CI/CD tools

 

Preferred qualifications, capabilities, and skills

  • Advanced Python development skills, including Kafka and S3 integration and performance optimization
  • Experience in carrying out data analysis to support business insights
  • Strong skills in PySpark, AWS, and Snowflake

Be part of an agile team that works to enhance, design, and deliver data collection, storage, access, and analytics solutions
