Canary Wharfian - Online Investment Banking & Finance Community.

Lead Software Engineer - Python, Elastic Search, Druid

Experienced
No visa sponsorship

at Societe Generale

Investment Banking

Posted 4 hours ago

**Lead Software Engineer - Python, Elastic Search, Druid** - **Lead Software Engineer** responsible for designing, developing, and maintaining scalable data processing systems using **Big Data technologies**, **data pipeline orchestration**, and **observability tooling**. - **Proficient in Python 3** and **Shell scripting**, and experienced in **provisioning and optimizing data processing systems**, as well as integrating **ElasticSearch, Kibana, Grafana, Logstash, Fluentd, and Telegraf** for monitoring and visualization. - **Experience with** technologies including **Apache Kafka, Apache NiFi, Apache Spark, Sqoop, Hadoop, HDFS, and Druid**, plus **CI/CD tooling** such as **GitHub Actions** and **basic DevOps practices**. - **Responsible for** developing and implementing robust data pipelines, managing distributed data storage, and collaborating with cross-functional teams to integrate observability solutions. - **Permanent contract** in **Bangalore, India**, with a **hybrid work model**.

Compensation: Not specified (INR)
City: Bengaluru
Country: India

Full Job Description


Lead Software Engineer - Python, Elastic Search, Druid

IT (Information Technology)
Permanent contract
Bangalore, India
Hybrid
Reference 260006JV
Start date 2026/05/18
Publication date 2026/04/08

Responsibilities

Job Summary:

We are seeking a highly skilled and motivated Specialist Software Engineer with deep expertise in Big Data technologies, data pipeline orchestration, and observability tooling. The ideal candidate will be responsible for designing, developing, and maintaining scalable data processing systems and integrating observability solutions to ensure system reliability and performance.

Key Responsibilities:

Big Data Engineering:
  • Design and implement robust data pipelines using Apache Kafka, Apache NiFi, Apache Spark, and Sqoop.
  • Manage and optimize distributed data storage systems including Hadoop, HDFS, Druid, and ElasticSearch.
  • Integrate and maintain data visualization and monitoring tools like Kibana, Grafana, and Logstash.
  • Ensure efficient data ingestion, transformation, and delivery across various platforms.
Programming & Scripting:
  • Develop automation scripts and data processing utilities using Python 3 and Shell scripting.
  • Build reusable components and libraries for data manipulation and system integration.
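As an illustrative sketch of the kind of reusable Python 3 data-manipulation component described above (hypothetical names, standard library only, not part of the posting itself), a generator-based transformation utility might look like:

```python
import json

def parse_records(lines):
    """Parse newline-delimited JSON into dicts, skipping malformed rows."""
    for line in lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate bad rows rather than failing the pipeline

def normalize(records, key="user_id"):
    """Keep only records carrying the given key; lower-case string fields."""
    for rec in records:
        if key in rec:
            yield {k: v.lower() if isinstance(v, str) else v
                   for k, v in rec.items()}

raw = ['{"user_id": 1, "name": "Alice"}', 'not json', '{"name": "Bob"}']
cleaned = list(normalize(parse_records(raw)))
# cleaned == [{"user_id": 1, "name": "alice"}]
```

Chaining generators this way keeps each stage reusable and memory-efficient, which matters when the same utilities feed large batch and streaming jobs.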
Observability & Monitoring:
  • Implement and configure observability agents such as Fluentd, Telegraf, and Logstash.
  • Collaborate with platform teams to integrate OpenTelemetry for distributed tracing and metrics collection (good to have).
  • Maintain dashboards and alerts for system health and performance monitoring.
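To make the agent integration above concrete: Telegraf's socket-based inputs accept metrics in InfluxDB line protocol. A minimal, hedged sketch of formatting one metric in that protocol (function and metric names are hypothetical):

```python
import time

def to_line_protocol(measurement, tags, fields, ts_ns=None):
    """Render one metric in InfluxDB line protocol:
    measurement,tag1=v1 field1=v1 timestamp_ns"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    ts = ts_ns if ts_ns is not None else time.time_ns()
    return f"{measurement},{tag_str} {field_str} {ts}"

line = to_line_protocol("pipeline_lag", {"topic": "orders"},
                        {"seconds": 4.2}, ts_ns=1700000000000000000)
# line == "pipeline_lag,topic=orders seconds=4.2 1700000000000000000"
```

A string like this could then be written to the socket a Telegraf listener input is configured on, and flow from there to Grafana dashboards and alerts.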
DevOps & CI/CD:
  • Contribute to CI/CD pipeline development using GitHub Actions.
  • Collaborate with DevOps teams to ensure seamless deployment and integration of data services.
Collaboration & Documentation:
  • Work closely with cross-functional teams including data scientists, platform engineers, and product managers.
  • Document system architecture, data flows, and operational procedures.
  • Participate in code reviews, knowledge sharing sessions, and technical mentoring.
Required Skills & Qualifications:
  • Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
  • 5+ years of experience in Big Data engineering and scripting.
  • Strong hands-on experience with:
    • Kafka, NiFi, Hadoop, HDFS, Spark, Sqoop
    • ElasticSearch, Druid, Kibana, Grafana
    • Python 3, Shell scripting
    • Logstash, Fluentd, Telegraf
  • Familiarity with GitHub Actions and basic DevOps practices.
  • Exposure to OpenTelemetry is a plus.
  • Excellent problem-solving, analytical, and communication skills.
Preferred Qualifications:
  • Experience in building real-time data streaming applications.
  • Knowledge of data governance, security, and compliance in Big Data environments.
  • Certifications in Big Data technologies or cloud platforms (AWS/GCP/Azure) are a plus.
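As a rough illustration of the real-time streaming aggregation mentioned in the preferred qualifications (a standard-library sketch with hypothetical event data, not a production Kafka/Spark job), a tumbling-window count looks like:

```python
from collections import Counter

def tumbling_window_counts(events, window_s=60):
    """Group (timestamp, key) events into fixed tumbling windows and
    count occurrences per key within each window."""
    windows = {}
    for ts, key in events:
        bucket = ts - (ts % window_s)  # window start the event falls in
        windows.setdefault(bucket, Counter())[key] += 1
    return windows

events = [(0, "buy"), (30, "sell"), (61, "buy"), (65, "buy")]
counts = tumbling_window_counts(events)
# counts[0] == Counter({"buy": 1, "sell": 1}); counts[60] == Counter({"buy": 2})
```

The same windowing logic underlies what Spark Structured Streaming or a Kafka Streams job performs at scale.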


Why join us

We are committed to supporting and accelerating our Group's ESG strategy by implementing ESG principles in all our activities and policies. These principles are reflected in our business activity (ESG assessment, reporting, project management, and IT activities), in our work environment, and in our responsible practices for environmental protection.

Business insight

At Societe Generale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious.

Whether you're joining us for a period of months, years, or your entire career, together we can have a positive impact on the future. Creating, daring, innovating, and taking action are part of our DNA.

If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis and develop or strengthen your expertise, you will feel right at home with us!

Still hesitating?

You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved.

Diversity and Inclusion

We are an equal opportunities employer and we are proud to make diversity a strength for our company. Societe Generale is committed to recognizing and promoting all talents, regardless of their beliefs, age, disability, parental status, ethnic origin, nationality, gender identity, sexual orientation, membership of a political, religious, trade union or minority organisation, or any other characteristic that could be subject to discrimination.