
Software Engineer III - Python, PySpark, Databricks, AWS

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 8 days ago


**Software Engineer III (Python, PySpark, Databricks, AWS)** Master data engineering by architecting ETL pipelines in Python and PySpark. Utilize Databricks for robust workflows and AWS services (S3, Lambda) for scalable solutions. Write secure, optimized code and design efficient SQL data models. Operate production pipelines and monitor performance. 7+ yrs Python experience required; 3+ yrs PySpark and AWS knowledge essential. Proficient SQL, CI/CD exposure, agile methodologies. Preferred: Terraform, data governance, streaming tech, mentoring.

Compensation: Not specified
Currency: Not specified
City: Bengaluru
Country: India

Full Job Description

Location: Bengaluru, Karnataka, India

We have an exciting opportunity for you to advance your data engineering career and make a meaningful impact at JPMorgan Chase.

As a Software Engineer III at JPMorgan Chase within Corporate Technology, you design and deliver high-performance data solutions that power the firm's technology products.

Job responsibilities

  • Architect, develop, and maintain high-performance ETL pipelines and data workflows using Python, PySpark, and Databricks (a minimal illustrative sketch follows this list)
  • Design and implement scalable, fault-tolerant data solutions on AWS, leveraging services such as S3 and Lambda
  • Write secure, optimized code in Python and PySpark with a focus on performance and reliability
  • Develop and optimize SQL-based data models, queries, and transformations to support analytical and operational needs
  • Own and operate production data pipelines end-to-end, including monitoring, alerting, and performance optimization
  • Apply knowledge of the Software Development Life Cycle toolchain, including Git and CI/CD, to maximize automation and delivery velocity
  • Gather, analyze, and synthesize large, diverse data sets to drive data-driven decision-making
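
For illustration only, since it is not part of the posting itself: a minimal PySpark sketch of the kind of ETL pipeline these responsibilities describe, reading raw events from S3 and writing a partitioned Delta table. The bucket, prefix, columns, and table name are hypothetical.

```python
# Minimal PySpark ETL sketch -- illustrative only; the bucket, prefix,
# columns, and table name are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw JSON events landed in S3.
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: prune columns early, filter bad records, derive a partition key.
orders = (
    raw.select("order_id", "customer_id", "amount", "event_ts")
       .filter(F.col("amount") > 0)
       .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregate per customer per day; repartitioning by the grouping key
# keeps the shuffle for the wide aggregation under control.
daily = (
    orders.repartition("event_date")
          .groupBy("event_date", "customer_id")
          .agg(F.sum("amount").alias("daily_spend"),
               F.count("order_id").alias("order_count"))
)

# Load: write a partitioned Delta table (Delta Lake ships with Databricks;
# plain Spark would need the delta-spark package configured).
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("analytics.daily_customer_spend"))
```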

 

Required qualifications, capabilities, and skills

  • Formal training or certification in software engineering, data engineering, or a related technical discipline, and 3+ years of applied experience
  • Seven years of hands-on experience developing production-grade applications and data solutions in Python
  • Three years of experience building and optimizing large-scale data pipelines using PySpark
  • Proven experience designing, deploying, and managing data engineering workflows on Databricks, including Delta Lake and Unity Catalog
  • Strong hands-on experience with AWS cloud services, including S3 and Lambda
  • Proficiency in SQL for complex data querying, transformation, and performance tuning
  • Experience across the Software Development Life Cycle, with exposure to agile methodologies and practices such as CI/CD, Application Resiliency, and Security

 

Preferred qualifications, capabilities, and skills

  • Experience with infrastructure-as-code tools such as Terraform
  • Familiarity with data governance, data quality frameworks, and data cataloging practices
  • Exposure to real-time streaming technologies such as Kafka or Kinesis (a minimal streaming sketch follows this section)
  • Experience mentoring junior engineers and contributing to engineering best practices

Design scalable, high-performance data pipelines and solutions using Python and cloud platforms within a collaborative, cross-functional data team.
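
Again purely illustrative, touching the streaming item in the preferred qualifications: a minimal Spark Structured Streaming sketch that consumes a Kafka topic and appends to a Delta sink with checkpointing. The broker address, topic, schema, and S3 paths are hypothetical, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

```python
# Minimal Structured Streaming sketch -- illustrative only; broker, topic,
# schema, and paths are hypothetical, not taken from the posting.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

schema = (StructType()
          .add("order_id", StringType())
          .add("amount", DoubleType())
          .add("event_ts", TimestampType()))

# Source: subscribe to a Kafka topic as an unbounded DataFrame.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load())

# Kafka delivers the payload as bytes in the `value` column; parse it.
parsed = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("o"))
          .select("o.*"))

# Sink: append to a Delta path; the checkpoint location is what gives the
# pipeline restartability and exactly-once output semantics.
query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/orders")
         .outputMode("append")
         .start("s3://example-bucket/delta/orders"))
```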
