Software Engineer III - AWS Data Engineer (AI, Data Lake, Snowflake, Python, Spark, Copilot & Claude)

Experienced · No visa sponsorship

at J.P. Morgan

Bulge Bracket Investment Banks

Posted 4 days ago

**Software Engineer III - AWS Data Engineer** Lead the design, development, and maintenance of scalable data pipelines and architectures on AWS, utilizing Data Lake, Snowflake, and distributed processing technologies. Proficiency in Python, Spark, and AI-driven productivity tools such as Copilot and Claude is required. Collaborate with data science teams to deliver AI-driven data solutions. Ensure data quality, governance, and security while optimizing workflows for performance and cost efficiency. Senior-level role requiring 8+ years of experience in data engineering and advanced AWS skills.

Compensation: Not specified
Currency: Not specified
City: Hyderabad
Country: India

Full Job Description

Location: Hyderabad, Telangana, India

 

Job Summary

We are seeking a highly skilled AWS Data Engineer with expertise in AI, Python, Spark, and modern AI productivity tools such as Copilot and Claude. The ideal candidate will design, build, and optimize scalable data pipelines and architectures on AWS, leveraging Data Lake, Snowflake, and distributed processing technologies. You will collaborate with data scientists, analysts, and business stakeholders to deliver robust, AI-driven data solutions, using AI assistants to enhance productivity and code quality.
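
For illustration only, here is a minimal sketch of the kind of PySpark pipeline this role describes: ingesting raw files from an S3 data lake, cleansing them, and writing curated Parquet for downstream analytics or loading into Snowflake. The bucket names, paths, and column names below are hypothetical placeholders, not details from the posting.

```python
# Illustrative sketch only -- buckets, paths, and schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-trades-etl").getOrCreate()

# Ingest raw CSV files landed in a (hypothetical) data-lake bucket.
raw = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/trades/2024-01-01/")
)

# Basic cleansing and typing: drop incomplete rows, cast the amount column,
# and stamp each row with its load date.
cleaned = (
    raw.dropna(subset=["trade_id", "notional"])
    .withColumn("notional", F.col("notional").cast("double"))
    .withColumn("load_date", F.current_date())
)

# Write curated, partitioned Parquet back to the lake for downstream
# consumers (analytics jobs, or bulk loading into Snowflake).
(
    cleaned.write.mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-curated-bucket/trades/")
)

spark.stop()
```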

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes on AWS (S3, Glue, Lambda, Redshift, etc.) using Python and Spark.
  • Architect and implement Data Lake solutions, ensuring efficient data ingestion, storage, and retrieval.
  • Integrate and manage Snowflake environments for data warehousing and analytics.
  • Develop and optimize distributed data processing workflows using Apache Spark (PySpark).
  • Collaborate with AI/ML teams to enable data-driven models and solutions, supporting feature engineering and model deployment.
  • Leverage AI coding assistants such as Copilot and Claude to accelerate development, improve code quality, and automate repetitive tasks.
  • Optimize data workflows for performance, reliability, and cost efficiency.
  • Ensure data quality, governance, and security across all platforms.
  • Automate data processing tasks using Python, Spark, and AWS-native tools.
  • Monitor, troubleshoot, and resolve issues in data pipelines and infrastructure.
  • Document technical solutions and provide knowledge transfer to team members.

Required Skills & Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in data engineering, with hands-on AWS experience.
  • Strong proficiency in AWS services: S3, Glue, Lambda, Redshift, IAM, etc.
  • Experience with Data Lake architecture and implementation.
  • Expertise in Snowflake data warehousing, including schema design, performance tuning, and security.
  • Advanced programming skills in Python and SQL.
  • Hands-on experience with Apache Spark (preferably PySpark) for large-scale data processing.
  • Familiarity with AI/ML concepts and workflows; experience supporting data science teams.
  • Experience using AI coding assistants such as GitHub Copilot and Claude to enhance productivity and code quality.
  • Knowledge of data governance, security, and compliance best practices.
  • Excellent problem-solving and communication skills.

Preferred Skills

  • Experience with workflow/orchestration tools such as Apache Airflow.
  • Exposure to DevOps practices and CI/CD pipelines.
  • AWS certification (e.g., AWS Certified Data Analytics, Solutions Architect).
  • Experience with real-time data processing and streaming (e.g., Kinesis, Kafka).
  • Familiarity with BI tools (e.g., Tableau, Power BI).

We are looking for an experienced AWS Data Engineer skilled in Data Lake, Snowflake, Python, and Spark. In this role, you will design, build, and optimize scalable data pipelines and architectures on AWS, enabling advanced analytics and AI/ML solutions. You will collaborate with data science teams, use AI tools such as Copilot and Claude to accelerate development and improve code quality, and ensure robust data governance and security. Strong experience in distributed data processing, cloud data warehousing, and automation is essential.
