Canary Wharfian - Online Investment Banking & Finance Community.

Data Engineer III - Spark, Python, Databricks

Experienced | No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 15 days ago


**Data Engineer III - Big Data**: Design, build, and maintain scalable cloud-native data pipelines using Python, Spark, and Databricks. Collaborate cross-functionally to drive digital reporting capabilities. Requires 3+ years of software engineering experience and proficiency in Java, SQL, and big data technologies. Troubleshoot data pipeline performance and ensure data integrity. Experience with data mining and infrastructure using Terraform is a plus. Join our agile team to deliver secure, scalable technology products.

Compensation: Not specified

Currency: Not specified

City: Bengaluru

Country: India

Full Job Description

Location: Bengaluru, Karnataka, India

Push the limits of what's possible with us as an experienced member of our Software Engineering team.

 

As a Data Engineer III - Big Data, Java, Python at JPMorgan Chase, you will join the Financial Planning and Analysis team to design and implement the next-generation buildout of a cloud-native, driver-based FP&A platform for JPMC. The organization aims to provide comprehensive solutions for managing the firm's planning, forecasting, and budgeting. The program will include the strategic buildout of systematic sourcing (a data lake), driver-based forecasting models, and an AI-first approach to bring digital-first reporting capabilities.

  • Design, develop, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
  • Implement data mining techniques to extract valuable insights from complex data sets.
  • Build and optimize data architectures using big data tools and frameworks (e.g., Databricks, Spark, Python).
  • Ensure data quality, integrity, and security throughout the data lifecycle.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
  • Monitor and troubleshoot data pipeline performance and resolve issues as they arise.
  • Document data processes, workflows, and best practices.

Required Qualifications, Capabilities, and Skills

  • Formal training or certification in software engineering concepts and 3+ years of applied experience.
  • Strong hands-on development experience and in-depth knowledge of Java, Python, Spark, Databricks, and related big data technologies.
  • Proven experience in building and maintaining data pipelines and ETL processes.
  • Strong understanding of infrastructure using Terraform.
  • Proficiency in SQL and experience with relational and NoSQL databases.
  • Experience with data mining, data wrangling, and data transformation techniques.
  • Knowledge of data modeling, data warehousing, and data governance best practices.
  • Strong problem-solving skills and attention to detail.
  • Strong skills in OLAP (cube-like) systems, e.g., Atoti.
  • Excellent communication and collaboration skills.

Preferred Qualifications, Capabilities, and Skills:

  • Experience working on big data solutions, with demonstrated ability to analyze data to drive solutions.
  • Familiarity with cloud platforms (e.g., AWS) and their big data services.

Serve as an emerging member of an agile team to design and deliver market-leading technology products in a secure and scalable way.
