
JPMorgan Chase is hiring a Data Engineer II to design, build, and maintain secure, scalable data collection, storage, access, and analytics solutions as part of an agile team. The role focuses on developing and troubleshooting data pipelines using Python and PySpark on AWS, implementing ETL processes, and producing high-quality production code and design artifacts. The engineer will analyze large, diverse datasets to create visualizations and drive performance improvements, ensuring jobs run optimally and meet resiliency and security standards. Preferred skills include advanced Python, Kafka, S3 integration, PySpark, Snowflake, and experience with CI/CD and data modelling techniques.
- Compensation: Not specified
- City: Jersey City
- Country: United States
- Currency: Not specified
Full Job Description
Location: Jersey City, NJ, United States
You thrive on diversity and creativity, and we welcome individuals who share our vision of making a lasting impact. Your unique combination of design thinking and experience will help us achieve new heights.
As a Data Engineer II at JPMorgan Chase within Consumer & Community Banking, you are part of an agile team that works to enhance, design, and deliver the data collection, storage, access, and analytics solutions in a secure, stable, and scalable way. As an emerging member of a data engineering team, you execute data solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role.
Job responsibilities
- Executes software solutions through design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data, and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, opportunity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on Software Engineering concepts and 2+ years applied experience
- Experience with ETL processes and advanced ETL concepts
- Hands-on practical experience in system design, application development, testing, and operational stability
- Experience with AWS, including the design, implementation, and maintenance of data pipelines using Python and PySpark (see the sketch after this list)
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Proven experience in performance tuning to ensure jobs run at optimal levels without performance bottlenecks
- Solid understanding of agile methodologies and practices such as CI/CD, application resiliency, and security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
- Proficiency in Unix scripting; data structures; data serialization formats such as JSON, Avro, Protobuf, or similar; big-data storage formats such as Parquet, Iceberg, or similar; data processing methodologies such as batch, micro-batching, or streaming; one or more data modelling techniques such as Dimensional, Data Vault, Kimball, or Inmon; Agile methodology; and TDD or BDD and CI/CD tools
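The posting itself contains no code, but as a rough illustration of the kind of pipeline the qualifications above describe, here is a minimal PySpark batch ETL sketch: read raw JSON from S3, deduplicate and cleanse, and write partitioned Parquet back to S3. All bucket names, paths, and column names are hypothetical placeholders, not details from the role.

```python
# Illustrative sketch only -- bucket names, paths, and columns are
# hypothetical, not taken from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")
    .getOrCreate()
)

# Extract: read raw JSON events from S3 (placeholder path).
raw = spark.read.json("s3a://example-bucket/raw/events/")

# Transform: basic cleansing plus a derived partition column.
cleaned = (
    raw
    .dropDuplicates(["event_id"])
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write columnar Parquet back to S3, partitioned by date so
# downstream jobs can prune partitions instead of scanning everything.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-bucket/curated/events/")
)

spark.stop()
```

Partitioning the output by date is one common answer to the performance-tuning expectation above, since downstream readers can prune partitions rather than scan the full dataset.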
Preferred qualifications, capabilities, and skills
- Advanced Python development skills, with experience in Kafka and S3 integration and performance optimization (see the streaming sketch below)
- Experience in carrying out data analysis to support business insights
- Strong skills in PySpark, AWS, and Snowflake
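Since the preferred skills single out Kafka and S3 integration, here is a second hedged sketch: a PySpark Structured Streaming job that consumes a Kafka topic and lands micro-batched Parquet on S3. Broker addresses, topic names, paths, and the schema are all assumptions for illustration, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Hypothetical sketch of Kafka-to-S3 integration with PySpark
# Structured Streaming; brokers, topic, paths, and schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("example-kafka-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
])

# Source: subscribe to a Kafka topic (needs the spark-sql-kafka package).
stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as bytes; decode the value and parse the JSON.
parsed = (
    stream
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Sink: micro-batch Parquet files to S3, with checkpointing so the job
# can recover its offsets after a restart (a resiliency expectation above).
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/landing/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```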
