
Senior Lead Software Engineer, SRE - Data Platforms
at J.P. Morgan
Posted 17 days ago
As a Senior Lead Software Engineer - SRE on the AIML Data Platforms and Chief Data and Analytics team, you will design, implement, and operate a managed AWS Databricks platform and related data infrastructure. The role focuses on SRE best practices (SLIs/SLOs, incident management, observability), automation, capacity planning, and collaboration with data engineering and ML teams. You will develop production-quality Python code, build and maintain Terraform-based infrastructure, and support big data frameworks such as Spark. Responsibilities also include vendor evaluations, troubleshooting, incident response, and continuous improvement of platform reliability and operational processes.
- Compensation: Not specified
- City: Jersey City
- Country: United States
- Currency: Not specified
Full Job Description
Location: Jersey City, NJ, United States
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible.
The Chief Data & Analytics Office (CDAO) at JPMorgan Chase is responsible for accelerating the firm’s data and analytics journey. This includes ensuring the quality, integrity, and security of the company's data, as well as leveraging this data to generate insights and drive decision-making. The CDAO is also responsible for developing and implementing solutions that support the firm’s commercial goals by harnessing artificial intelligence and machine learning technologies to develop new products, improve productivity, and enhance risk management effectively and responsibly.
As a Senior Lead Software Engineer - SRE at JPMorgan Chase within the AIML Data Platforms and Chief Data and Analytics Team, you will develop and deliver advanced technology products focused on data and analytics, tackling complex cloud data platform challenges, especially around data lake tools. In this role, you will work in an agile environment, collaborating with cross-functional teams.
Job responsibilities
- Designs, implements, and maintains a managed AWS Databricks platform, and provides engineering and operational support for the platform to SRE and app teams.
- Performs platform design, setup, and configuration, workspace administration, and resource monitoring, and provides engineering support to data engineering, data science/ML, and application/integration teams.
- Leads evaluation sessions with external vendors, startups, and internal teams to drive outcomes-oriented probing of architectural designs, technical credentials, and applicability for use within existing systems and information architecture.
- Drives continuous improvement in system observability, alerting, and capacity planning.
- Collaborates with engineering and data teams to optimize infrastructure and deployment processes, focusing on automation and operational excellence.
- Executes creative software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems.
- Develops secure high-quality production code, and reviews and debugs code written by others.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Adds to team culture of diversity, opportunity, and respect.
- Implements Site Reliability Engineering (SRE) best practices to ensure reliability, scalability, and performance of data platforms (see the sketch after this list).
- Develops and maintains incident response procedures, including root cause analysis and postmortem documentation.
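The SRE practices above center on measuring reliability against explicit objectives. As a minimal sketch of that idea (the request counts and the 99.9% target below are hypothetical, not taken from this posting), an availability SLI and the corresponding error-budget burn can be computed like this:

```python
# Illustrative sketch only: all numbers and the SLO target are hypothetical.
TOTAL_REQUESTS = 1_000_000   # requests served in the SLO window
FAILED_REQUESTS = 650        # requests that violated the availability SLI
SLO_TARGET = 0.999           # hypothetical 99.9% availability objective

sli = 1 - FAILED_REQUESTS / TOTAL_REQUESTS           # measured availability
error_budget = 1 - SLO_TARGET                        # allowed failure fraction
budget_spent = (FAILED_REQUESTS / TOTAL_REQUESTS) / error_budget

print(f"SLI: {sli:.4%}")                             # 99.9350%
print(f"Error budget consumed: {budget_spent:.1%}")  # 65.0%
```

In practice these values would come from a monitoring system rather than constants, and the burn rate would feed alerting and capacity-planning decisions.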
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Extensive experience with AWS Databricks platform administration and engineering support is a must.
- Strong understanding of SRE principles, including SLIs, SLOs, error budgets, and incident management.
- Experience with monitoring tools, automation frameworks, and CI/CD pipelines.
- Proficient in Python application development, including automated unit testing (see the sketch after this list).
- Experience with Terraform development and an understanding of Terraform Enterprise.
- Experience in delivering system design, application development, testing, and operational stability.
- Knowledge of big data distributed compute frameworks such as Spark, Glue, and MapReduce.
- Excellent troubleshooting, analytical, and communication skills.
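The Python bullet above pairs application development with automated unit testing. Below is a minimal sketch of that practice using pytest; the `with_retries` helper and its behavior are hypothetical examples, not part of the job description:

```python
# Illustrative sketch only: a hypothetical retry helper and its unit tests.
import pytest

def with_retries(fn, attempts=3):
    """Call fn, retrying up to `attempts` times; re-raise the last error."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:  # broad catch kept simple for the sketch
            last_exc = exc
    raise last_exc

def test_succeeds_after_transient_failures():
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient")
        return "ok"
    assert with_retries(flaky, attempts=3) == "ok"

def test_raises_when_attempts_exhausted():
    def always_fails():
        raise RuntimeError("permanent")
    with pytest.raises(RuntimeError):
        with_retries(always_fails, attempts=2)
```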
Preferred qualifications, capabilities, and skills
- Experience building data pipelines using Spark (see the sketch after this list).
- Exposure to AWS and Databricks platform administration.
- Knowledge of containerization (Docker, Kubernetes) and orchestration.
- Familiarity with distributed systems and large-scale data processing.
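The Spark bullet above refers to building batch data pipelines. Below is a minimal PySpark sketch of one such step; the S3 paths, column names, and aggregation are hypothetical, not drawn from the posting:

```python
# Illustrative sketch only: paths and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw events (hypothetical location and schema).
events = spark.read.parquet("s3://example-bucket/events/")

# Aggregate events per day and type.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))  # hypothetical column
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write partitioned output (hypothetical location).
daily_counts.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-bucket/daily_counts/")

spark.stop()
```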
#CDAOdp
Drive significant business impact and tackle a diverse array of challenges that span multiple technologies and applications.