Canary Wharfian - Online Investment Banking & Finance Community.

Software Engineer III - Python, Spark, AWS

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 15 days ago


**Software Engineer III - Python, Spark, AWS** As a Software Engineer III at JPMorgan Chase, you'll design and deliver secure, scalable products using Python, Spark, and AWS. Your role involves executing innovative software solutions, optimizing complex data pipelines, and supporting cross-functional teams. With a degree in software engineering and 3+ years of hands-on experience, ideally as a Data Engineer, you'll thrive in a high-performing agile environment. Your expertise in AWS services (EMR, Terraform, Cloudwatch, Redshift) and proficiency in SQL and NoSQL databases will be crucial in this role. Demonstrable experience in data workflows, process improvements, and collaboration skills are essential for success.

Compensation: Not specified

Currency: Not specified

City: Bengaluru

Country: India

Full Job Description

Location: Bengaluru, Karnataka, India

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job responsibilities

 

  • Executes software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
  • Develops and optimizes our data pipeline architecture
  • Optimizes data flow and collection across cross-functional teams
  • Builds data pipelines and enjoys optimizing data systems and building them from scratch
  • Supports the data needs of multiple teams, systems, and products
  • Optimizes and re-designs the data architecture to support our next generation of products and data initiatives

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification on software engineering concepts and 3+ years of applied experience
  • Experience as a Data Engineer
  • Experience with Python, Spark, and AWS, with strong hands-on experience of AWS cloud services: EMR, Terraform, CloudWatch, Redshift
  • Experience with relational SQL and NoSQL databases; familiarity with Hadoop or a suitable equivalent
  • Create and maintain optimal data workflow architecture; assemble large, complex data sets that meet functional and non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Python, Spark, and AWS
  • Build data pipelines for batch as well as real-time client data; keep data separated and secure across AWS regions
  • Superb interpersonal, communication, and collaboration skills; exceptional analytical and problem-solving aptitude
  • Outstanding organizational and time management skills
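To make the extract-transform-load pattern named in the qualifications above concrete, here is a minimal, hypothetical sketch in plain Python. All record fields and cleaning rules are invented for illustration; in the role described, this kind of step would typically be written with PySpark DataFrames and run on AWS EMR rather than in pure Python.

```python
# Hypothetical batch ETL step: extract raw records, transform (validate and
# normalise) them, and load the clean rows. Field names and rules are
# illustrative only, not taken from the posting.
import json
from io import StringIO

def extract(source):
    """Parse newline-delimited JSON records from a file-like source."""
    return [json.loads(line) for line in source if line.strip()]

def transform(records):
    """Keep attributable records and normalise fields (example rules only)."""
    out = []
    for r in records:
        if r.get("client_id") is None:
            continue  # drop records that cannot be attributed to a client
        out.append({
            "client_id": r["client_id"],
            "amount": round(float(r.get("amount", 0)), 2),
            "region": str(r.get("region", "unknown")).lower(),
        })
    return out

def load(rows, sink):
    """Write transformed rows back out as newline-delimited JSON."""
    for row in rows:
        sink.write(json.dumps(row) + "\n")

# In-memory stand-ins for a source and sink (e.g. S3 objects in production).
raw = StringIO('{"client_id": 1, "amount": "10.456", "region": "EU"}\n'
               '{"amount": "3.0"}\n')
clean = transform(extract(raw))
out = StringIO()
load(clean, out)
```

The same three-stage shape carries over directly to Spark: `extract` becomes a `spark.read` call, `transform` a chain of DataFrame operations, and `load` a partitioned write to S3 or Redshift.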

 

 

Preferred qualifications, capabilities, and skills

 

  • Good to have: knowledge of data visualization tools such as Microsoft BI or Qlik Sense
  • Superb interpersonal, communication, and collaboration skills.
  • Exceptional analytical and problem-solving aptitude.
  • Outstanding organizational and time management skills.

Design and deliver market-leading technology products in a secure and scalable way as a seasoned member of an agile team.
