Job Details

Bulge Bracket Investment Banks

Data Engineer II

at J.P. Morgan

Experienced · No visa sponsorship

Posted 15 days ago


A Data Engineer II role on JPMorgan Chase's Connected Commerce Travel Technology team, responsible for designing, building, and maintaining large-scale, cloud-based data integration and analytics solutions. You will develop and optimize data models and scalable processing pipelines while ensuring data integrity, quality, and performance. The role involves close collaboration with cross-functional teams to translate business requirements into production-grade data engineering solutions, with a focus on innovation and continuous improvement. Required skills include proficiency in Python, distributed processing frameworks (e.g., Spark), cloud data lake/lakehouse technologies, orchestration tools such as Airflow, and experience with CI/CD and Agile practices.

Compensation: Not specified
Currency: Not specified
City: Pune
Country: India

Full Job Description

Location: Pune, Maharashtra, India

We have an exciting and rewarding opportunity for you to take your Data Engineer career to the next level.
As a Data Engineer II at JPMorganChase, within Consumer & Community Banking on the Connected Commerce Travel Technology team, you will be part of an agile team that designs, builds, and maintains cutting-edge, large-scale data integration and analytical solutions on the cloud in a secure, stable, and scalable way. In this role, you'll leverage your technical expertise and business acumen to transform complex, high-volume data into powerful, actionable insights, driving strategic value for our stakeholders. This is an exciting opportunity to shape the future of how Chase Travel data is managed for analytical needs.
 

Job responsibilities

  • Design, develop, and maintain scalable, large-scale data processing pipelines and infrastructure on the cloud, following engineering standards, governance standards, and technology best practices.
  • Develop and optimize data models for large-scale datasets, ensuring efficient storage, retrieval, and analytics while maintaining data integrity and quality.
  • Collaborate with cross-functional teams to translate business requirements into scalable and effective data engineering solutions.
  • Demonstrate a passion for innovation and continuous improvement in data engineering, proactively identifying opportunities to enhance data infrastructure, data processing and analytics capabilities.

 

Required qualifications, capabilities, and skills

  • Strong analytical, problem-solving, and critical thinking skills
  • Proficiency in at least one programming language (preferably Python; alternatively Java or Scala)
  • Proficiency in at least one distributed data processing framework (Spark or similar; see the illustrative sketch after this list)
  • Proficiency in at least one cloud data lakehouse platform (AWS data lake services or Databricks; alternatively Hadoop)
  • Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
  • Proficiency with relational and NoSQL databases
  • Proficiency in data structures, data serialization formats (JSON, Avro, Protobuf, or similar), and big-data storage formats (Parquet, Iceberg, or similar)
  • Experience working in teams following Agile methodology
  • Experience with test-driven development (TDD) or behavior-driven development (BDD) practices, as well as with continuous integration and continuous deployment (CI/CD) tools
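
The posting itself contains no code, but as a rough illustration of the stack named above, here is a minimal PySpark batch-pipeline sketch: read raw JSON events from a data-lake landing zone, apply a simple transformation, and write partitioned Parquet. All paths, bucket names, and column names are hypothetical placeholders, not taken from the posting.

```python
# Minimal PySpark batch-pipeline sketch (illustrative only).
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("travel-events-batch").getOrCreate()

# Read raw semi-structured events from a hypothetical landing zone.
raw = spark.read.json("s3://example-bucket/landing/travel_events/")

# Basic cleansing and modelling: drop malformed rows, derive a date column.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write partitioned Parquet for efficient downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/travel_events/"))
```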

 

Preferred qualifications, capabilities, and skills
  • Proficiency in Python and PySpark
  • Proficiency in infrastructure as code (preferably Terraform; alternatively AWS CloudFormation)
  • Experience with AWS Glue, AWS S3, AWS Lakehouse, AWS Athena, Airflow, Kinesis, and Apache Iceberg (an illustrative Airflow sketch follows this list)
  • Experience working with Jenkins
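
Similarly, an illustrative Airflow DAG showing how such a batch job might be scheduled. The DAG id, schedule, and callable are hypothetical examples; the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Illustrative Airflow DAG (hypothetical ids and schedule, not from the posting).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_batch_pipeline():
    # Placeholder: in practice this might submit the PySpark job above,
    # e.g. via spark-submit, EMR, or an AWS Glue job run.
    print("Submitting travel-events batch pipeline")

with DAG(
    dag_id="travel_events_daily",  # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    PythonOperator(
        task_id="run_batch_pipeline",
        python_callable=run_batch_pipeline,
    )
```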
     
Join a world-class data engineering team that builds and delivers cutting-edge, large-scale data integration and analytical solutions.
