Canary Wharfian - Online Investment Banking & Finance Community.

Software Engineer III - Big Data & AWS

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 13 days ago

**Software Engineer III - Big Data & AWS**

Drive tech innovation in our Consumer & Investment Banking sector. As an experienced Software Engineer III, you'll design, develop, and maintain big data solutions using AWS services and tools like Apache Spark, Redshift, and Terraform. Apply your strong data engineering skills to process and analyze large datasets, optimize application performance, and contribute to a diverse team culture. Bring your 3+ years of software engineering experience and applicable certifications to deliver trusted, market-leading technology products.

Compensation: Not specified (USD)

City: Not specified

Country: United States

Full Job Description

Location: Plano, TX, United States

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorganChase within Consumer and Investment Banking, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities

  • Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
  • Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
  • Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
  • Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
  • Contributes to software engineering communities of practice and events that explore new and emerging technologies
  • Adds to team culture of diversity, opportunity, inclusion, and respect

Required qualifications, capabilities, and skills

  • Formal training or certification on software engineering concepts and 3+ years applied experience
  • Processing and analyzing large datasets using big data technologies, including Apache Spark, across data platforms comprising Redshift, SQL databases, RDS, and Databricks
  • Designing, developing, and implementing scalable data pipelines and data-matching solutions on AWS, using Python for data engineering tasks
  • Setting up and configuring AWS services such as S3, Databricks, Step Functions, EMR, Glue, Lambda, RDS, EventBridge, and API Gateway for scripting, automation, and orchestration of data workflows
  • Developing a semantic layer for data consumption using Iceberg tables and APIs
  • Implementing continuous integration and deployment processes using Infrastructure as Code tools, including Terraform
  • Applying application, data, and infrastructure architecture disciplines to optimize data-intensive applications for performance and scalability
  • Troubleshooting and resolving issues related to application code and data workflows

Preferred qualifications, capabilities, and skills


  • Familiarity with modern front-end technologies
  • Exposure to cloud technologies
