
Job Details

Bulge Bracket Investment Banks

Software Engineer III - ETL/ELT Pipelines / Python / PySpark / AWS

at J.P. Morgan

Experienced · No visa sponsorship

Posted 17 days ago


Senior software engineer within Asset & Wealth Management Technology, responsible for designing and delivering scalable ETL/ELT data solutions using Python, PySpark, SQL and AWS. Build, optimize and monitor large-scale data pipelines, support cloud migrations and modernize legacy systems while ensuring data quality and reliability. Collaborate with cross-functional stakeholders to translate business requirements into technical specifications and document data flows and transformation logic.

Compensation: Not specified

Currency: Not specified

City: New York City

Country: United States

Full Job Description

Location: New York, NY, United States

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. 

As a Software Engineer III - ETL/ELT Pipelines / Python / PySpark / AWS at JPMorganChase within the Asset and Wealth Management Technology Team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.

Job responsibilities

 

  • Design and implement scalable data solutions that align with business objectives and technology strategies; apply technical troubleshooting that goes beyond routine or conventional approaches to build and support solutions and break down technical problems
  • Design, develop, and optimize robust ETL/ELT pipelines using SQL, Python, and PySpark for large-scale, complex data environments
  • Develop and support secure high-quality production code, and review and debug code written by others
  • Support data migration and modernization initiatives, transitioning legacy systems to cloud-based data warehouses
  • Identify opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems
  • Collaborate with cross-functional teams to understand data requirements and translate them into technical specifications
  • Monitor and tune ETL processes for efficiency, resilience and scalability, including alerting for data quality issues
  • Work closely with stakeholders to identify opportunities for data-driven improvements and efficiencies
  • Document data flows, logic, and transformation rules to maintain transparency and facilitate knowledge sharing across teams
  • Stay current on emerging ETL and data engineering technologies and industry trends to drive innovation
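
As a rough illustration of the pipeline work these responsibilities describe, the sketch below shows a minimal PySpark ETL job: read raw data from S3, cleanse and type-cast it, and write partitioned Parquet back to the data lake. It is illustrative only; the bucket names, paths, and column names are hypothetical, not part of the posting.

```python
# Illustrative only: a minimal PySpark ETL job of the kind described above.
# All paths, bucket names, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw trade data from a hypothetical S3 location.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/trades/")

# Transform: type casting, basic cleansing, and a derived partition column.
clean = (
    raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
       .withColumn("notional", F.col("notional").cast("double"))
       .dropna(subset=["trade_id", "trade_date"])
       .dropDuplicates(["trade_id"])
       .withColumn("year", F.year("trade_date"))
)

# Load: write partitioned Parquet back to S3 for downstream consumers
# (for example, Athena or Redshift Spectrum reading the same data lake).
clean.write.mode("overwrite").partitionBy("year").parquet(
    "s3://example-bucket/curated/trades/"
)

spark.stop()
```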

 

 

Required qualifications, capabilities, and skills

 

  • Formal training or certification in software engineering with 3+ years of applied experience
  • Proficient in coding in one or more languages including Python
  • Strong hands-on coding proficiency in Python, PySpark, Apache Spark, and SQL, along with AWS cloud services such as EMR, S3, Athena, and Redshift
  • Hands-on experience with AWS cloud and data lake platforms such as Snowflake and Databricks
  • Proven experience in ETL/ELT pipeline development and large-scale data processing with SQL
  • Practical experience implementing data validation, cleansing, transformation, and reconciliation processes to ensure high-quality, trustworthy datasets
  • Experience with cloud-based data warehouse migration and modernization
  • Proficiency in automation and continuous delivery methods and understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
  • Excellent problem-solving skills, with the ability to optimize performance and troubleshoot complex data pipelines
  • Strong communication and documentation abilities
  • Ability to collaborate effectively with business and technical stakeholders
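
Several of the requirements above center on data validation, reconciliation, and alerting for data quality issues. As a rough, hypothetical illustration (table paths, thresholds, and the failure behaviour are assumptions, not part of the posting), a simple PySpark data-quality check might look like this:

```python
# Illustrative only: simple validation and reconciliation checks, not a
# production framework. Paths, thresholds, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-dq-checks").getOrCreate()

source = spark.read.parquet("s3://example-bucket/raw/positions/")
target = spark.read.parquet("s3://example-bucket/curated/positions/")

# Reconciliation: curated row count should not drift more than 1% from source.
src_count, tgt_count = source.count(), target.count()
if src_count and abs(src_count - tgt_count) / src_count > 0.01:
    raise ValueError(f"Row-count drift: source={src_count}, target={tgt_count}")

# Validation: key columns must not contain nulls in the curated data set.
for col_name in ["position_id", "as_of_date"]:
    nulls = target.filter(F.col(col_name).isNull()).count()
    if nulls > 0:
        # In a real pipeline this would feed an alerting channel instead.
        raise ValueError(f"{nulls} null values found in {col_name}")

spark.stop()
```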

 

 

Preferred qualifications, capabilities, and skills

 

  • Knowledge of Apache Iceberg
  • Knowledge of the financial services industry and IT systems

We are seeking an engineer with strong Python and PySpark development skills, as well as experience with AWS and ETL/ELT pipelines.
