
Software Engineer III - Python, AWS
at J.P. Morgan
Posted 17 days ago
Senior software engineer role within JPMorgan Chase Asset & Wealth Management to design and deliver scalable, secure data solutions. Responsible for building and optimizing ETL/ELT pipelines using Python, PySpark, SQL and AWS services, migrating legacy systems to cloud data platforms, and ensuring data quality and operational reliability. Works in an agile team, collaborates with cross-functional stakeholders, and participates in code review, automation, and production support. Preferred experience includes data lake/warehouse technologies (Snowflake, Databricks) and knowledge of Apache Iceberg and financial services IT systems.
- Compensation: Not specified
- City: Mumbai
- Country: India
- Currency: Not specified
Full Job Description
Location: Mumbai, Maharashtra, India
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level.
As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm’s business objectives.
Job responsibilities
- Designs and implements scalable data solutions that align with business objectives and technology strategies, applying technical troubleshooting and the ability to think beyond routine or conventional approaches to build and support solutions and break down technical problems.
- Designs, develops, and optimizes robust ETL/ELT pipelines using SQL, Python, and PySpark for large-scale, complex data environments (see the pipeline sketch after this list).
- Develops and supports secure, high-quality production code, and reviews and debugs code written by others.
- Supports data migration and modernization initiatives, transitioning legacy systems to cloud-based data warehouses.
- Identifies opportunities to eliminate or automate remediation of recurring issues to improve overall operational stability of software applications and systems.
- Collaborates with cross-functional teams to understand data requirements and translate them into technical specifications.
- Monitors and tunes ETL processes for efficiency, resilience, and scalability, including alerting for data quality issues.
- Works closely with stakeholders to identify opportunities for data-driven improvements and efficiencies.
- Documents data flows, logic, and transformation rules to maintain transparency and facilitate knowledge sharing across teams.
- Stays current on emerging ETL and data engineering technologies and industry trends to drive innovation.
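
As an illustration of the kind of pipeline work described above, here is a minimal PySpark ETL sketch; the S3 bucket names, paths, and column names are hypothetical placeholders, not references to any actual system.

```python
# Illustrative sketch only: a minimal extract-transform-load job in PySpark.
# All bucket names, paths, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw records from a hypothetical S3 landing zone.
raw = spark.read.option("header", "true").csv("s3://example-landing/trades/")

# Transform: type the columns and drop rows failing a basic quality check.
trades = (
    raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
       .withColumn("notional", F.col("notional").cast("double"))
       .filter(F.col("notional").isNotNull() & (F.col("notional") > 0))
)

# Load: write partitioned Parquet to a hypothetical curated zone.
trades.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3://example-curated/trades/"
)

spark.stop()
```

In practice, a job like this would typically run on a service such as AWS EMR and feed downstream stores such as Redshift or Athena-queryable S3 data, as listed in the qualifications below.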
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Proven experience in data management, ETL/ELT pipeline development, and large-scale data processing, with strong hands-on coding proficiency in Python, PySpark, Apache Spark, SQL, and AWS cloud services such as EMR, S3, Athena, and Redshift.
- Hands-on experience with AWS cloud and data lake/warehouse platforms such as Snowflake and Databricks.
- Practical experience implementing data validation, cleansing, transformation, and reconciliation processes to ensure high-quality, trustworthy datasets (see the reconciliation sketch after this list).
- Experience with cloud-based data warehouse migration and modernization.
- Proficiency in automation and continuous delivery methods and understanding of agile methodologies such as CI/CD, Application Resiliency, and Security.
- Excellent problem-solving and troubleshooting skills, with the ability to optimize performance and debug complex data pipelines.
- Strong communication and documentation abilities.
- Ability to collaborate effectively with business and technical stakeholders.
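
For instance, the validation and reconciliation work mentioned above might, in its simplest form, look like the following sketch; the datasets, paths, and the `quantity` column are hypothetical.

```python
# Illustrative sketch only: reconciling a legacy extract against its
# migrated, cloud-based counterpart. Paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-reconciliation").getOrCreate()

legacy = spark.read.parquet("s3://example-legacy/positions/")
migrated = spark.read.parquet("s3://example-curated/positions/")

# Reconcile on row count plus a control total over a numeric column.
legacy_stats = legacy.agg(
    F.count("*").alias("rows"), F.sum("quantity").alias("qty")
).first()
migrated_stats = migrated.agg(
    F.count("*").alias("rows"), F.sum("quantity").alias("qty")
).first()

rows_match = legacy_stats["rows"] == migrated_stats["rows"]
totals_match = abs((legacy_stats["qty"] or 0.0) - (migrated_stats["qty"] or 0.0)) < 1e-6

# Fail loudly so the discrepancy surfaces in monitoring and alerting.
if not (rows_match and totals_match):
    raise ValueError(
        f"Reconciliation failed: legacy={legacy_stats}, migrated={migrated_stats}"
    )

spark.stop()
```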
Preferred qualifications, capabilities, and skills
- Knowledge of Apache Iceberg (see the sketch after this list)
- In-depth knowledge of the financial services industry and IT systems
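
To make the Apache Iceberg item concrete, below is a minimal sketch of creating an Iceberg table through Spark SQL; the catalog name, warehouse location, and schema are hypothetical, and an Iceberg Spark runtime is assumed to be on the classpath.

```python
# Illustrative sketch only: an Iceberg table defined via Spark SQL DDL.
# Catalog name, warehouse path, and schema are hypothetical assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("example-iceberg")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-warehouse/")
    .getOrCreate()
)

# Iceberg tables are created and queried with standard Spark SQL.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.finance.trades (
        trade_id   BIGINT,
        trade_date DATE,
        notional   DOUBLE
    ) USING iceberg
    PARTITIONED BY (trade_date)
""")

spark.sql("SELECT COUNT(*) FROM demo.finance.trades").show()

spark.stop()
```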