
Data Engineer III at J.P. Morgan
Posted 11 days ago
As a Data Engineer III, design, build, and maintain secure, scalable data pipelines using AWS and Python/PySpark, supporting JPMorgan Chase's business objectives. Collaborate within an agile, diverse team to deliver trusted data across multiple business functions.
- Compensation: Not specified (USD)
- City: Jersey City
- Country: United States
Full Job Description
Location: Jersey City, NJ, United States
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer III at JPMorgan Chase within the Consumer and Community Bank - Connected Commerce Technology team, you serve as a seasoned member of an agile team to design and deliver trusted data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. You are responsible for developing, testing, and maintaining critical data pipelines and architectures across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
- Supports review of controls to ensure sufficient protection of enterprise data
- Advises on and makes custom configuration changes in one or two tools to generate a product at the business's or customer's request
- Updates logical or physical data models based on new use cases
- Frequently uses SQL and understands NoSQL databases and their niche in the marketplace
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on data engineering concepts and 2+ years applied experience
- Experience across the data lifecycle
- Experience with ETL processes and advanced ETL concepts
- Advanced at SQL (e.g., joins and aggregations)
- Experience designing, implementing, and maintaining data pipelines on AWS using Python and PySpark (Java as a secondary alternative); a brief sketch follows this list
- Working understanding of NoSQL databases
- Significant experience with statistical data analysis and ability to determine appropriate tools and data patterns to perform analysis
- Experience making custom configuration changes in a tool to generate a product
- Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, or Protobuf; big-data storage formats such as Parquet or Iceberg; data processing methodologies such as batch, micro-batching, or streaming; one or more data modeling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; TDD or BDD; and CI/CD tools
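
To give a sense of the pipeline work this role describes, here is a minimal PySpark sketch: it reads hypothetical order data from S3, performs the kind of join and aggregation called out above, and writes Parquet back to S3. All bucket names, paths, and columns are invented for illustration, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders-rollup").getOrCreate()

# Hypothetical inputs: raw order events (JSON) and a customer
# dimension table (Parquet), both stored on S3.
orders = spark.read.json("s3a://example-raw-bucket/orders/2024-06-01/")
customers = spark.read.parquet("s3a://example-curated-bucket/dim_customer/")

# Join and aggregate: order counts and totals per customer segment,
# the SQL-style join/aggregation pattern the qualifications mention.
daily_totals = (
    orders.join(customers, on="customer_id", how="inner")
          .groupBy("segment")
          .agg(
              F.count("order_id").alias("order_count"),
              F.sum("order_amount").alias("total_amount"),
          )
)

# Write the result back to S3 as Parquet for downstream consumers.
(daily_totals.write
    .mode("overwrite")
    .parquet("s3a://example-curated-bucket/daily_order_totals/2024-06-01/"))
```

In practice a job like this would be scheduled (e.g., via an orchestrator) and parameterized by run date rather than hard-coding paths.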
Preferred qualifications, capabilities, and skills
- Advanced Python development skills, including Kafka and S3 integration and performance optimization; a streaming sketch follows this list
- Experience in carrying out data analysis to support business insights
- Strong skills in PySpark, AWS, and Snowflake
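
For the preferred Kafka and S3 integration skills, one common pattern is PySpark Structured Streaming reading from a Kafka topic and landing Parquet on S3. The sketch below assumes a hypothetical broker, topic, event schema, and bucket; it is an illustration of the technique, not the team's actual setup.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("kafka-to-s3").getOrCreate()

# Hypothetical schema for the JSON payload carried in each Kafka message.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

# Subscribe to a Kafka topic (broker address and topic are placeholders).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker.example.com:9092")
       .option("subscribe", "payments.events")
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers bytes; cast to string and parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Micro-batch the stream into Parquet files on S3; the checkpoint
# location lets the job recover exactly where it left off.
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-curated-bucket/payment_events/")
         .option("checkpointLocation",
                 "s3a://example-curated-bucket/checkpoints/payment_events/")
         .trigger(processingTime="1 minute")
         .start())

query.awaitTermination()
```

Tuning the trigger interval and output file sizes is where the performance-optimization experience the posting asks for typically comes into play.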
