Canary Wharfian - Online Investment Banking & Finance Community

Software Engineer III - AWS Data Engineer

Experienced
No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 7 days ago


**Software Engineer III - AWS Data Engineer**: Design and deliver scalable data products using Databricks/Spark on AWS. Responsibilities include building pipelines on Delta Lake/Lakehouse patterns, writing secure code in Python/PySpark, implementing CI/CD with Terraform/CloudFormation, and driving continuous improvement through data analysis. Requires eight years of software engineering experience, proficiency in Python and PySpark, and an understanding of agile methodologies and security. Preferred: experience with LLM-assisted workflows and AWS/Databricks certifications.

Compensation
Not specified

Currency: Not specified

City
Hyderabad
Country
India

Full Job Description

Location: Hyderabad, Telangana, India

We have an exciting and rewarding opportunity for you to advance your software engineering career. Join us to build innovative data solutions and accelerate engineering productivity.

As a Senior Data Engineer at JPMorgan Chase within the Corporate Technology team, you design and deliver robust, scalable data products using Databricks/Spark on AWS. You help modernize data processing platforms and pioneer LLM-assisted development, contributing to secure and reliable solutions that drive business impact.

Job responsibilities

  • Design and deliver Databricks/Spark pipelines on AWS using Delta Lake/Lakehouse patterns
  • Build and maintain secure, high-quality production code in Python/PySpark aligned to enterprise security best practices
  • Implement CI/CD-first delivery using infrastructure-as-code (Terraform/CloudFormation) and automated testing for repeatable deployments 
  • Produce architecture and design artifacts for complex applications, ensuring performance, resiliency, and security constraints are met 
  • Gather, analyze, and synthesize data to drive continuous improvement of software applications and systems 
  • Proactively identify hidden problems and patterns in data to improve coding hygiene and system architecture
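The last two responsibilities (synthesizing data and surfacing hidden problems) amount to automated data-quality checks. A minimal sketch in plain Python, with no Spark dependency; the `Trade` record, the `notional` column, and the 5% threshold are illustrative assumptions, not from the posting:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class Trade:
    """Hypothetical record; a real pipeline would read rows from a Delta table."""
    trade_id: str
    notional: Optional[float]

def null_rate(trades: Iterable[Trade]) -> float:
    """Fraction of records missing a notional value."""
    rows = list(trades)
    if not rows:
        return 0.0
    return sum(1 for t in rows if t.notional is None) / len(rows)

def validate_and_clean(trades: List[Trade], max_null_rate: float = 0.05) -> List[Trade]:
    """Fail fast when data quality degrades, then drop incomplete rows."""
    if null_rate(trades) > max_null_rate:
        raise ValueError("notional null rate above threshold")
    return [t for t in trades if t.notional is not None]
```

In a Databricks setting the same fail-fast gate would typically run as a pipeline step before writes, so bad upstream data halts the job instead of propagating.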

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and eight years of applied experience
  • Hands-on practical experience in system design, application development, testing, and operational stability 
  • Proficient in coding in Python and PySpark 
  • Experience in developing, debugging, and maintaining code in a large corporate environment with Databricks/Spark and database querying languages 
  • Overall knowledge of the Software Development Life Cycle
  • Solid understanding of agile methodologies, CI/CD, application resiliency, and security 
  • Demonstrated ability to use Copilot/LLM-assisted workflows effectively while maintaining quality and security through review and automation 
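The final requirement, using Copilot/LLM-assisted workflows while maintaining quality through review and automation, in practice means assistant-drafted code lands behind tests. A minimal sketch; the function and its test are hypothetical examples, not part of the posting:

```python
from typing import Dict, List

def dedupe_latest(rows: List[Dict]) -> List[Dict]:
    """Keep the most recent row per id: the kind of small transform an
    assistant might draft and a unit test must pin down before merge."""
    latest: Dict[str, Dict] = {}
    for row in rows:
        if row["id"] not in latest or row["ts"] > latest[row["id"]]["ts"]:
            latest[row["id"]] = row
    return list(latest.values())

def test_dedupe_latest():
    # The test, not the reviewer's memory, guards against a regression
    # if the generated transform is later regenerated or refactored.
    rows = [
        {"id": "a", "ts": 1, "v": 10},
        {"id": "a", "ts": 2, "v": 20},
        {"id": "b", "ts": 1, "v": 30},
    ]
    assert {r["id"]: r["v"] for r in dedupe_latest(rows)} == {"a": 20, "b": 30}
```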

Preferred qualifications, capabilities, and skills

  • Experience with agentic automation or LLM tooling patterns (e.g., task-oriented agents for incident triage, deployment validation, data quality checks, or developer enablement)
  • Familiarity with Data Mesh, Airflow, and/or ThoughtSpot
  • AWS and/or Databricks certifications (e.g., AWS SAA / Developer Associate / Data Analytics Specialty, Databricks certification)
