Canary Wharfian - Online Investment Banking & Finance Community.

Job Details

Bulge Bracket Investment Banks

Lead Software Engineer - Market Risk

at J.P. Morgan

Experienced · No visa sponsorship

Posted 14 days ago


Lead Software Engineer on the Market Risk MXL DataLake Team building cutting-edge data platforms for market risk and analytics. You will design and implement large-scale historical data stores and robust, scalable PySpark/Spark data pipelines for batch and incremental processing while applying strong data-modeling principles to support long-term historical and regulatory needs. The role emphasizes production-grade engineering—performance optimization, testability, maintainability—and close collaboration with architects, risk technologists, and product owners to evolve platform standards and best practices.

Compensation: Not specified
Currency: Not specified
City: Jersey City
Country: United States

Full Job Description

Location: Jersey City, NJ, United States

We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.

As a Lead Software Engineer at JPMorgan Chase within the Market Risk MXL DataLake Team, you will join a strategic initiative building cutting-edge data platforms for market risk and analytics. In this role, you'll design and implement high-volume data pipelines and historical data stores, collaborating closely with architects, risk technologists, and product owners.

Job Responsibilities

  • Design, build, and maintain large-scale historical data stores on modern big-data platforms
  • Develop robust, scalable data pipelines using PySpark / Spark for batch and incremental processing
  • Apply strong data-modelling principles (e.g., dimensional, Data Vault–style, or similar approaches) to support long-term historical analysis and regulatory requirements
  • Engineer high-quality, production-grade code with a focus on correctness, performance, testability, and maintainability
  • Optimize Spark workloads for performance and cost efficiency (partitioning, clustering, file layout, etc.)
  • Collaborate with architects and senior engineers to evolve platform standards, patterns, and best practices
  • Contribute to code reviews, technical design discussions, and continuous improvement of engineering practices
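The partitioning idea behind the Spark optimization bullet above can be illustrated without Spark. The following is a toy Python sketch (field names like `as_of` are illustrative assumptions, not taken from the posting): grouping records by business date mirrors the date-partitioned file layouts that let engines prune reads in large historical stores.

```python
from collections import defaultdict
from datetime import date

def partition_by_date(records: list[dict]) -> dict[date, list[dict]]:
    """Group records by business date, mirroring the date-partitioned
    file layouts used to prune reads in large historical data stores."""
    partitions: dict[date, list[dict]] = defaultdict(list)
    for record in records:
        partitions[record["as_of"]].append(record)
    return dict(partitions)

records = [
    {"as_of": date(2024, 1, 2), "risk": 1.5},
    {"as_of": date(2024, 1, 3), "risk": -0.7},
    {"as_of": date(2024, 1, 2), "risk": 0.2},
]
parts = partition_by_date(records)

# Queries for a single date touch only that date's partition.
assert len(parts[date(2024, 1, 2)]) == 2
```

In Spark the same idea is expressed via partitioned writes (e.g. `DataFrameWriter.partitionBy`), so date-filtered queries skip irrelevant files entirely.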

Required qualifications, capabilities and skills

  • Degree-level education in Computer Science, Software Engineering, or a related discipline (or equivalent practical experience)
  • Strong software engineering fundamentals, including data structures, algorithms, and system design
  • Proven experience building large-scale data engineering solutions on big-data platforms
  • Hands-on experience developing PySpark / Spark pipelines in production environments
  • Solid understanding of data modelling for analytical and historical data use cases
  • Experience working with large volumes of structured data over long time horizons
  • Familiarity with distributed systems concepts such as fault tolerance, parallelism, and idempotent processing
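Of the distributed-systems concepts listed above, idempotent processing is the one most directly visible in pipeline code. A hypothetical minimal sketch in plain Python (the `trade_id` key is an assumption for illustration): because each record is upserted by key, replaying the same batch after a retry leaves the store unchanged instead of creating duplicates.

```python
# Idempotent incremental ingestion: records are keyed, so a replayed
# batch (e.g. after a failed-and-retried job) overwrites rather than
# appends, and the final state is identical.

def upsert_batch(store: dict, batch: list[dict]) -> dict:
    """Merge a batch of keyed records into the store idempotently."""
    for record in batch:
        # Last write wins per key; replaying the batch is a no-op.
        store[record["trade_id"]] = record
    return store

store: dict = {}
batch = [
    {"trade_id": "T1", "notional": 1_000_000},
    {"trade_id": "T2", "notional": 250_000},
]

upsert_batch(store, batch)
upsert_batch(store, batch)  # replayed batch: state is unchanged

assert len(store) == 2
```

On platforms such as Delta Lake the same guarantee is typically achieved with a keyed `MERGE` rather than a blind append.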

Preferred Qualifications

  • Experience with Databricks, Delta Lake, or similar cloud-based big-data platforms
  • Hands-on experience designing and implementing Data Vault 2.0 models
  • Exposure to historical / regulatory data platforms, risk data, or financial services
  • Knowledge of append-only data patterns, slowly changing dimensions, or event-driven data models
  • Experience with CI/CD, automated testing, and production monitoring for data pipelines
  • Experience building highly reliable, production-grade risk systems with robust controls and integration with modern SRE tooling
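As context for the append-only and slowly-changing-dimension patterns mentioned above, here is a hypothetical Type 2 SCD sketch in plain Python (illustrative only; the `valid_from`/`valid_to` column names are assumptions, not taken from the posting): updates never overwrite history, they close the current row and append a new one.

```python
from datetime import date

def scd2_update(history: list[dict], key: str, value: str, as_of: date) -> None:
    """Type 2 slowly changing dimension: close the currently open row
    for `key` and append a new current row, preserving full history."""
    for row in history:
        if row["key"] == key and row["valid_to"] is None:
            row["valid_to"] = as_of  # close the previously current row
    history.append({"key": key, "value": value,
                    "valid_from": as_of, "valid_to": None})

history: list[dict] = []
scd2_update(history, "desk", "Rates", date(2024, 1, 1))
scd2_update(history, "desk", "Credit", date(2024, 6, 1))

# Two rows survive: the superseded one with a closed validity interval,
# and the current one with an open interval.
assert len(history) == 2
assert history[1]["valid_to"] is None
```

This append-only structure is what makes point-in-time ("as of") queries over long horizons possible, which is the core requirement of historical and regulatory stores.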