Canary Wharfian - Online Investment Banking & Finance Community.

Software Engineer III - Python, GenAI, AWS

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 4 days ago


**Software Engineer III - Python, GenAI, AWS** in Hyderabad: Senior role designs & delivers scalable, secure tech solutions across multiple business functions. Key responsibilities include collaborating with teams to develop data-driven solutions using GenAI frameworks, conducting research on prompt engineering techniques, designing reliable data processing pipelines, and building/maintaining Data Lakes using Databricks. Required skills: Advanced degree, 3+ years in data science/machine learning, strong Python (PySpark, Spark SQL), proficiency in GenAI models, LLM orchestration, Databricks expertise, and AWS services (S3, Lambda, Redshift).

Compensation: Not specified
Currency: Not specified

City: Hyderabad
Country: India

Full Job Description

Location: Hyderabad, Telangana, India



Job Overview


We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Consumer and Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities

  • Collaborate with cross-functional teams to identify business requirements and develop data-driven solutions using Agentic/GenAI frameworks in a fast-paced environment.
  • Conduct research on prompt and context engineering techniques to enhance the performance of LLM-based solutions.
  • Design and implement scalable and reliable data processing pipelines, performing analysis and deriving insights to optimize business outcomes.
  • Build and maintain Data Lakes and data processing workflows using Databricks to support machine learning operations.
  • Communicate technical concepts and results effectively to both technical and non-technical stakeholders.
  • Utilize AWS services including S3, Lambda, Redshift, Athena, Step Functions, MSK, EKS, and Data Lake architectures.
  • Collaborate with data scientists, engineers, and business stakeholders to deliver high-quality data solutions.
  • Act as a self-starter, independently taking initiative in driving assignments to completion and solving problems without the need for escalation.
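The pipeline responsibilities above follow a familiar parse-transform-aggregate pattern. As a minimal, hedged sketch (standard-library Python only, no Spark; the field names and sample rows are hypothetical), a reliable pipeline of the kind described might look like:

```python
from collections import defaultdict

def parse(lines):
    """Yield (region, amount) records, skipping malformed rows."""
    for line in lines:
        parts = line.strip().split(",")
        if len(parts) != 2:
            continue  # reliability: drop bad rows rather than crash the job
        region, amount = parts
        try:
            yield region, float(amount)
        except ValueError:
            continue  # non-numeric amount: also dropped

def aggregate(records):
    """Sum amounts per region -- the reduce step a Spark job would distribute."""
    totals = defaultdict(float)
    for region, amount in records:
        totals[region] += amount
    return dict(totals)

raw = ["APAC,100.0", "EMEA,250.5", "bad-row", "APAC,50.0"]
print(aggregate(parse(raw)))  # {'APAC': 150.0, 'EMEA': 250.5}
```

In a Databricks/PySpark setting the same shape appears as DataFrame transformations and a `groupBy().agg()`, with Spark handling the distribution.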

 

Required Qualifications, Capabilities, and Skills

  • Advanced degree in Computer Science, Data Science, Mathematics, or related field.
  • 3+ years of applied experience in data science, machine learning, or related areas.
  • Strong Python skills with PySpark, Spark SQL, and DataFrames for large-scale data processing.
  • Proficiency with GenAI models (e.g., OpenAI) to solve business problems, including RAG/fine-tuning when appropriate.
  • Experience with LLM orchestration, building AI agents, agentic frameworks, and MCP servers.
  • Databricks expertise building and managing data lakes and end-to-end data processing workflows.
  • Strong problem-solving, troubleshooting, clear stakeholder communication, rapid POC-to-production delivery, and mentoring abilities.
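The RAG requirement above boils down to a retrieve-then-prompt loop. A toy sketch of that pattern, using only the standard library (the corpus and scoring are hypothetical; a production system would use vector embeddings and an LLM API rather than keyword overlap):

```python
def retrieve(query, corpus, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble the augmented prompt that would be sent to the LLM."""
    context = "\n".join(retrieve(query, corpus))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

corpus = [
    "Databricks jobs can be scheduled with workflows.",
    "S3 buckets store raw data for the lake.",
    "Redshift is used for warehouse queries.",
]
print(build_prompt("where is raw data stored", corpus))
```

Fine-tuning, by contrast, bakes the knowledge into model weights; the qualification asks for judgment about when each approach fits.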

 

Preferred Qualifications, Capabilities, and Skills

  • Proficiency with other AWS components; AWS certification preferred.
  • Experience integrating AI/ML models into data pipelines is a plus.
  • Experience with version control (Git) and CI/CD pipelines.
     