5,241 Lead Data Engineer jobs in India

Lead Data Engineer

Hyderabad, Telangana Thermo Fisher Scientific

Posted 1 day ago


Job Description

**Work Schedule**
Standard (Mon-Fri)
**Environmental Conditions**
Office
**Job Description**
**Position Overview**
As a Lead Data Engineer, your pivotal responsibility involves spearheading data initiatives and integrating top-tier data solutions. This role presents a thrilling chance to contribute to innovative projects and engage with a team of committed experts. Your proficiency is instrumental in elevating our competitiveness and ensuring seamless implementation of our data strategies!
**Key Responsibilities**
+ Lead the design, development, and optimization of scalable, secure, and high-performance data pipelines and architectures.
+ Lead and guide a team of data engineers and analysts, encouraging teamwork and technical excellence.
+ Implement standard methodologies for DataOps across the platform to uphold 99.99%+ availability, reliability, and recovery standards.
+ Build and maintain robust data models and frameworks, ensuring they align with enterprise architecture and governance standards.
+ Collaborate with cross-functional teams, including analytics, product, and business units to deliver high-impact, data-driven solutions.
+ Ensure data quality and validation through automated testing, monitoring, and anomaly detection techniques.
+ Own planning, prioritization, and execution of data engineering projects - delivering on time and within scope.
+ Contribute to platform scalability, performance tuning, and migration initiatives, including support for real-time data flows.
+ Deliver technical documentation and code reviews to uphold engineering standards and maintain knowledge continuity.
+ Support data privacy, security, and compliance requirements in line with global regulatory standards.
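The data-quality and anomaly-detection responsibility above can be illustrated with a minimal, dependency-free sketch. Function names, fields, and the z-score threshold are all illustrative, not part of the role:

```python
import statistics

def validate_rows(rows, required_fields):
    """Split rows into valid and rejected based on required, non-null fields."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(f) is not None for f in required_fields):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

def zscore_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [v for v in values if abs(v - mean) / stdev > threshold]
```

In a real pipeline, checks like these would run as automated tests on each batch, with rejected rows and flagged outliers routed to monitoring rather than silently dropped.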
**Qualifications**
+ Bachelor's degree or equivalent experience in Computer Science, Data Engineering, Information Systems, or a related field.
+ Proven experience (5+ years) in data engineering with a strong understanding of modern data architecture, pipelines, and operations.
+ Expertise in ETL/ELT tools, data warehousing, and cloud-native platforms such as Databricks, Snowflake, or BigQuery.
+ Strong hands-on experience with SQL, Apache Spark, and Python (experience with Pandas preferred).
+ Proficient in working with large datasets and performance optimization for distributed computing environments.
+ Prior experience with tools like Informatica, Cognos, SQL Server, and Oracle is a strong plus.
+ Deep understanding of data modeling, metadata management, and data governance frameworks.
+ Demonstrated experience in leading engineering teams and managing project lifecycles.
+ Familiarity with DevOps/DataOps practices, version control systems (e.g., GitHub), and CI/CD pipelines.
+ Excellent communication, leadership, and stakeholder management skills.
+ Experience working in cloud environments such as AWS, Azure, or GCP.
Why Join Us?
This is your chance to work on exceptionally impactful projects and grow your career in an encouraging and innovative environment!
Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.

Lead Data Engineer

Bangalore, Karnataka Target

Posted 1 day ago


Job Description

**About us:**
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers.
Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
**Overview about TII:**
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
**Team Overview:**
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities.
Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests-and we do so with a focus on diversity and inclusion, experimentation and continuous learning.
**Position Overview:**
As a **Lead Data Engineer**, you will serve as the **technical anchor** for the engineering team, responsible for designing and developing **scalable, high-performance data solutions**. You will **own and drive data architecture** that supports both functional and non-functional business needs, ensuring **reliability, efficiency, and scalability**.
Your expertise in **big data technologies, distributed systems, and cloud platforms** will help shape the engineering roadmap and best practices for **data processing, analytics, and real-time data serving**. You will play a key role in **architecting and optimizing data pipelines** using **Hadoop, Spark, Scala/Java, and cloud technologies** to support enterprise-wide data initiatives.
Additionally, **experience with API development for serving low-latency data** and **Customer Data Platforms (CDP)** will be a strong plus.
**Key Responsibilities:**
+ Architect and build **scalable, high-performance data pipelines** and **distributed data processing solutions** using **Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure)**.
+ Design and implement **real-time and batch data processing solutions**, ensuring data is efficiently processed and made available for analytical and operational use.
+ **Develop APIs and data services** to expose **low-latency, high-throughput** data for downstream applications, enabling real-time decision-making.
+ Optimize and enhance **data models, workflows, and processing frameworks** to improve performance, scalability, and cost-efficiency.
+ Drive **data governance, security, and compliance** best practices.
+ Collaborate with **data scientists, product teams, and business stakeholders** to understand requirements and deliver **data-driven solutions**.
+ Lead the **design, implementation, and lifecycle management** of data services and solutions.
+ Stay up to date with **emerging technologies** and drive adoption of best practices in **big data engineering, cloud computing, and API development**.
+ Provide **technical leadership and mentorship** to engineering teams, promoting best practices in **data engineering and API design**.
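As a rough illustration of the low-latency serving responsibility above: a batch job precomputes per-key aggregates, and the serve path answers reads from memory in O(1). All names here are hypothetical, and a real implementation would sit behind a REST or gRPC layer rather than direct method calls:

```python
from collections import defaultdict

class ProfileService:
    """Toy serving layer: batch load precomputes per-customer aggregates;
    reads are O(1) dictionary lookups (the low-latency path)."""

    def __init__(self):
        self._store = {}

    def batch_load(self, events):
        """Batch path: aggregate raw order events into per-customer profiles."""
        agg = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
        for e in events:
            agg[e["customer_id"]]["orders"] += 1
            agg[e["customer_id"]]["revenue"] += e["amount"]
        self._store = dict(agg)  # swap in the new serving snapshot atomically

    def get_profile(self, customer_id):
        """Serve path: constant-time lookup, no recomputation per request."""
        return self._store.get(customer_id)
```

The design choice worth noting is the split: expensive aggregation happens once in batch, while the request path only does a dictionary lookup.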
**About You:**
+ **7+ years of experience in data engineering, software development, or distributed systems.**
+ **Expertise in big data technologies** such as **Hadoop, Spark, and distributed processing frameworks.**
+ **Strong programming skills in Scala and/or Java** (Python is a plus).
+ **Experience with cloud platforms (AWS, GCP, or Azure)** and their **data ecosystem** (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.).
+ **Proficiency in API development** using **REST, GraphQL, or gRPC** to serve real-time and batch data.
+ **Experience with real-time and streaming data architectures** (Kafka, Flink, Kinesis, etc.).
+ Strong knowledge of **data modeling, ETL pipeline design, and performance optimization**.
+ Understanding of **data governance, security, and compliance** in large-scale data environments.
+ **Experience with Customer Data Platforms (CDP) or customer-centric data processing** is a strong plus.
+ Strong problem-solving skills and ability to work in **complex, unstructured environments**.
+ Excellent communication and collaboration skills, with experience working in **cross-functional teams**.
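The streaming-architecture experience listed above (Kafka, Flink, Kinesis) centres on operations like windowed aggregation. Here is a minimal sketch of a tumbling-window count, the basic building block behind such pipelines; the event shape and window size are illustrative:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Aggregate (timestamp, key) events into fixed-size tumbling windows,
    the core operation behind streaming counts and rates."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # bucket into its window
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}
```

A production stream processor does the same bucketing incrementally and adds watermarking for late events, but the windowing logic is this.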
**Why Join Us?**
+ Work with cutting-edge **big data, API, and cloud technologies** in a fast-paced, collaborative environment.
+ Influence and shape the **future of data architecture and real-time data services** at Target.
+ Solve **high-impact business problems** using **scalable, low-latency data solutions**.
+ Be part of a culture that values **innovation, learning, and growth**.

Lead Data Engineer

Hyderabad, Telangana S&P Global

Posted 1 day ago


Job Description

**About the Role:**
**Grade Level (for internal use):**
11
**The Team:** The Market Intelligence Industry Data Solutions business line provides data technology and services supporting acquisition, ingestion, content management, mastering, and distribution to power our Financial Institution Group business and customer needs. We focus on platform scalability to support business operations by following a common data lifecycle that accelerates business value. Our team provides essential intelligence for the Financial Services, Real Estate, and Insurance industries.
**The Impact:** The Data Engineering team will be responsible for implementing and maintaining services and tools to support existing feed systems. This enables users to consume FIG datasets and makes FIG data available for broader consumption and processing within the company.
**What's in it for you:** Opportunity to work with global stakeholders and engage with the latest tools and technologies.
**Responsibilities:**
+ Build new data acquisition and transformation pipelines using advanced data processing and cloud technologies.
+ Collaborate with the broader technology team, including information architecture and data integration teams, to align pipelines with strategic initiatives.
**What We're Looking For:**
+ Bachelor's degree in computer science or a related field, with at least 8+ years of professional software development experience.
+ Must have: **programming languages commonly used for data processing**, **data orchestration and workflow management systems**, **distributed data processing frameworks**, **relational database management systems**, and **big data processing frameworks**; experience with large-scale data processing platforms.
+ Deep understanding of RESTful services, good API design, and object-oriented programming principles.
+ Proficiency in object-oriented or functional scripting languages.
+ Good working knowledge of relational and NoSQL databases.
+ Experience in maintaining and developing software in production environments utilizing cloud-based tools.
+ Strong collaboration and teamwork skills, along with excellent written and verbal communication abilities.
+ Self-starter and motivated individual with the ability to thrive in a fast-paced software development environment.
+ Agile experience is highly desirable.
+ Experience with data warehousing and analytics platforms will be a significant advantage.
**Technical Expertise**
+ Data Engineering Expertise: Strong experience in distributed data processing and optimization using modern frameworks.
+ Cloud Platforms: Proficient in leveraging cloud services for scalable data solutions, including ETL orchestration, containerized deployments, and data storage.
+ Workflow Orchestration: Skilled in designing and managing complex data pipelines and workflows.
+ Programming & Scripting: Proficient in writing clean, modular, and testable code for data processing tasks.
+ Database Management: Solid understanding of both relational and non-relational databases, including data querying and modeling.
+ ETL & Data Architecture: Proven ability to design and implement robust ETL pipelines and optimize data models for performance and scalability.
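The ETL and relational-database expertise above follows the standard extract-transform-load shape. A toy sketch using SQLite in place of a production warehouse; the table, fields, and sample firms are all made up for illustration:

```python
import sqlite3

def run_etl(raw_rows):
    """Extract -> transform (normalize keys, dedupe) -> load into a relational table."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE firms (firm_id TEXT PRIMARY KEY, name TEXT, assets_musd REAL)"
    )
    seen = set()
    for row in raw_rows:
        fid = row["firm_id"].strip().upper()  # normalize the business key
        if fid in seen:                       # dedupe on the normalized key
            continue
        seen.add(fid)
        conn.execute(
            "INSERT INTO firms VALUES (?, ?, ?)",
            (fid, row["name"].strip(), float(row["assets"]) / 1e6),
        )
    conn.commit()
    return conn
```

The transform step is where mastering happens in practice: normalizing identifiers before loading is what makes the dedupe (and any later joins) reliable.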
**Soft Skills**
+ **Excellent communication skills** - able to articulate technical concepts to non-technical stakeholders.
+ **Strong interpersonal skills** - collaborative, empathetic, and team-oriented.
+ Demonstrated ability to work on **challenging projects** and go the **extra mile** to deliver results.
**Preferred Qualifications**
+ Experience with CI/CD pipelines, GitHub, and DevOps practices is a must.
+ Familiarity with data lake and data warehouse architectures.
+ Exposure to real-time data processing frameworks and observability tools like Grafana is an added advantage.
**What's In It For You?**
**Our Purpose:**
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology-the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
**Our People:**
We're more than 35,000 strong worldwide-so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
**Our Values:**
**Integrity, Discovery, Partnership**
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of **integrity** in all we do, bring a spirit of **discovery** to our work, and collaborate in close **partnership** with each other and our customers to achieve shared goals.
**Benefits:**
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you-and your career-need to thrive at S&P Global.
Our benefits include:
+ Health & Wellness: Health care coverage designed for the mind and body.
+ Flexible Downtime: Generous time off helps keep you energized for your time on.
+ Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
+ Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
+ Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families.
+ Beyond the Basics: From retail discounts to referral incentive awards-small perks can make a big difference.
For more information on benefits by country visit:
**Hiring and Opportunity at S&P Global:**
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
**Recruitment Fraud Alert:**
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity.
---
**Equal Opportunity Employer**
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to:   and your request will be forwarded to the appropriate person. 
**US Candidates Only:** The EEO is the Law Poster   describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
**Job ID:**
**Posted On:**
**Location:** Ahmedabad, Gujarat, India


Lead Data Engineer

Bengaluru, Karnataka Burns & McDonnell

Posted 1 day ago


Job Description

**Description**
+ Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools.
+ Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
+ Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
+ Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
+ Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
+ Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
+ Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.
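The orchestration tools named above (Airflow, dbt) all reduce to running tasks in dependency order. A toy sketch using Python's standard-library `graphlib`; the task names are illustrative:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute callables in dependency order, like a minimal orchestrator DAG.
    tasks: name -> callable(state); deps: name -> set of upstream task names."""
    order = list(TopologicalSorter(deps).static_order())
    state = {}
    for name in order:
        state[name] = tasks[name](state)  # each task sees all upstream results
    return order, state
```

Real orchestrators layer scheduling, retries, and backfills on top, but this dependency-ordered core is the shape of every pipeline they run.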
**Qualifications**
+ Bachelor's degree in Computer Science, Information Systems (MIS), Data Science, or a related field.
+ 15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
+ Extensive production experience with Databricks, Apache Spark, and other related technologies.
+ Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
+ Expert SQL knowledge.
+ Proficiency in programming languages such as Python, Scala, or Java.
+ Strong understanding of data warehousing concepts.
+ Experience with cloud platforms such as Azure, AWS, Google Cloud.
+ Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
+ Strong communication and leadership skills, with the ability to effectively mentor and guide team members.
+ Experience with machine learning and data science workflows.
+ Knowledge of data governance and security best practices.
+ Certification in Databricks, Azure, Google Cloud, or related technologies.
This job posting will remain open for a minimum of 72 hours and on an ongoing basis until filled.
**Job** Engineering
**Primary Location** India-Karnataka-Bengaluru
**Schedule:** Full-time
**Travel:** No
**Req ID:**
**Job Hire Type** Experienced

Lead Data Engineer

Mumbai, Maharashtra Burns & McDonnell

Posted 1 day ago

Job Description

**Description**
+ Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools.
+ Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
+ Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
+ Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
+ Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
+ Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
+ Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.
**Qualifications**
+ Bachelor's degree in Computer Science, Information Systems (MIS), Data Science, or a related field.
+ 15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
+ Extensive production experience with Databricks, Apache Spark, and other related technologies.
+ Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
+ Expert SQL knowledge.
+ Proficiency in programming languages such as Python, Scala, or Java.
+ Strong understanding of data warehousing concepts.
+ Experience with cloud platforms such as Azure, AWS, Google Cloud.
+ Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
+ Strong communication and leadership skills, with the ability to effectively mentor and guide team members.
+ Experience with machine learning and data science workflows.
+ Knowledge of data governance and security best practices.
+ Certification in Databricks, Azure, Google Cloud, or related technologies.
This job posting will remain open for a minimum of 72 hours and on an ongoing basis until filled.
**Job** Information Technology
**Primary Location** India-Maharashtra-Mumbai
**Schedule:** Full-time
**Travel:** No
**Req ID:**
**Job Hire Type** Experienced

Lead Data Engineer

Pune, Maharashtra Mastercard

Posted 1 day ago

Job Description

**Our Purpose**
_Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential._
**Title and Summary**
Lead Data Engineer
Who is Mastercard?
Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential.
Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all.
Overview
Ethoca, a Mastercard Company is seeking a Lead Data Engineer to join our team to drive on premise solutions together with Azure cloud enablement while exploring big data solutions within our technology landscape. The role is visible and critical as part of a high performing team - it will appeal to you if you have an effective combination of domain knowledge, relevant experience, and the ability to execute on the details.
You will bring cutting edge software and full stack development skills with advanced knowledge of cloud and data lake experience while working with massive data volume. You will own this - our teams are small, agile and focused on the needs of the high growth fintech marketplace. You will be working across functional teams to deliver on the cloud strategy.
We are committed to making our systems resilient and responsive yet easily maintainable, on premise and in the cloud.
Role
Own the development of ETL/ELT and data-movement solutions for streaming and non-streaming data, with a solid background in developing reports/dashboards, applications, services, and user interfaces, while maintaining and scaling existing solutions.
Existing solutions are built on data that resides in both the SAP HANA data warehouse and the Snowflake warehouse. We expect the successful candidate to pay close attention to detail: configuration, maintenance, security, and reliability of data and Data Services across environments as we build out a state-of-the-art analytics foundation (on premise and on cloud).
- Experienced in the fields of Computer Science/Engineering or Software Engineering
- Bachelor's degree in Computer Science, or a related technical field including programming
- Experience with cloud infrastructure management and automation (preferably Azure)
- Experience with software development and configuration automation is a must have
- Expertise in designing, analyzing, and troubleshooting large-scale systems
- Capability to debug, optimize code, and automate routine tasks
- Extensive experience in Machine Learning and Artificial Intelligence
- Hands-on experience with building data lake solutions, streaming analytics solutions and code development across environments (i.e. DevOps)
All About You
- Extensive data warehousing/data lake development experience with strong data modeling and data integration experience
- Strong SQL and higher-level programming languages with solid knowledge of data mining, machine learning algorithms and tools
- Good understanding of data warehouse/data lake design patterns and best practices
- Solid understanding of data ingestion (i.e. streaming platforms like Kafka)
- Strong experience with data integration tools - ETL/ELT tools (i.e. Apache NiFi, Azure Data Factory, Pentaho, Talend)
Experience working:
- In a Data Warehousing and BI environment with understanding of warehousing concepts
- Cloud platforms particularly MS Azure
- Snowflake Computing
- Knowledge of a source control system, preferably Git
- Application frameworks (i.e. Spring Boot)
- Strong understanding of the database change management (DCM) process
- Systematic problem-solving approach, with effective communication skills and a sense of drive
- Strong understanding and working knowledge of Continuous Integration and Continuous Deployment concepts
- Excellent written and verbal communication skills with top notch problem solving and analytical skills
- Plan and own deployments, migrations and upgrades to minimize service impacts with mitigation plans
- Understand and tune performance across all physical and logical dimensions
- Support Ethoca's architects and analysts as they design and build effective, agile applications
- Use your experience to help shape and scale the future of our development and production infrastructure
Nice to have:
- Scripting experience with one or more of the following: Java, JavaScript, Python, R
- Experience working with analytics and data processing engines like Apache Spark/Storm
- Experience with application development
- Experience working with SAP HANA or across cloud platforms
- A single page application framework (SPA) Example: VueJS or ReactJS (Javascript)
- Understanding of Cloud Foundry application platform
- Working knowledge of data mining, machine learning algorithms and tools
- ETL tools experience
- Reporting/Dashboarding tool/s experience
Ideally you have experience in banking, e-commerce, credit cards or payment processing and exposure to both SaaS and premises-based architectures. In addition, you have a post-secondary degree in computer science, mathematics, or quantitative science.
**Corporate Security Responsibility**
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
+ Abide by Mastercard's security policies and practices;
+ Ensure the confidentiality and integrity of the information being accessed;
+ Report any suspected information security violation or breach, and
+ Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Lead Data Engineer

Pune, Maharashtra Mastercard

Posted 1 day ago

Job Description

**Our Purpose**
_Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential._
**Title and Summary**
Lead Data Engineer
Overview
We are the global technology company behind the world's fastest payments processing network. We are a vehicle for commerce, a connection to financial systems for the previously excluded, a technology innovation lab, and the home of Priceless®. We ensure every employee has the opportunity to be a part of something bigger and to change lives. We believe as our company grows, so should you. We believe in connecting everyone to endless, priceless possibilities.
Our team within Mastercard - Services:
The Services org is a key differentiator for Mastercard, providing the cutting-edge services that are used by some of the world's largest organizations to make multi-million-dollar decisions and grow their businesses. Focused on thinking big and scaling fast around the globe, this agile team is responsible for end-to-end solutions for a diverse global customer base. Centered on data-driven technologies and innovation, these services include payments-focused consulting, loyalty and marketing programs, business Test & Learn experimentation, and data-driven information and risk management services.
Advanced Analytics Program:
Within the Services Technology Team, the Advanced Analytics program is a relatively new program that is comprised of a rich set of products that provide accurate perspectives on Credit Risk, Portfolio Optimization, and Ad Insights. Currently, we are enhancing our customer experience with new user interfaces, moving to API and web application-based data publishing to allow for seamless integration in other Mastercard products and externally, utilizing new data sets and algorithms to further analytic capabilities, and generating scalable big data processes.
We are looking for an innovative lead data engineer who will lead the technical design and development of an Analytic Foundation. The Analytic Foundation is a suite of individually commercialized analytical capabilities that also includes a comprehensive data platform. These services will be offered through a series of APIs that deliver data and insights from various points along a central data store. This individual will partner closely with other areas of the business to build and enhance solutions that drive value for our customers.
Engineers work in small, flexible teams. Every team member contributes to designing, building, and testing features. The range of work you will encounter varies from building intuitive, responsive UIs to designing backend data models, architecting data flows, and beyond. There are no rigid organizational structures, and each team uses processes that work best for its members and projects.
Here are a few examples of products in our space:
- Portfolio Optimizer (PO) is a solution that leverages Mastercard's data assets and analytics to allow issuers to identify and increase revenue opportunities within their credit and debit portfolios.
- Audiences uses anonymized and aggregated transaction insights to offer targeting segments that have high likelihood to make purchases within a category to allow for more effective campaign planning and activation.
- Credit Risk products are a new suite of APIs and tooling to provide lenders real-time access to KPIs and insights serving thousands of clients to make smarter risk decisions using Mastercard data.
Help found a new, fast-growing engineering team!
Position Responsibilities:
As a Lead Data Engineer within Advanced Analytics team, you will:
- Lead collaboration with data scientists to understand the existing modeling pipeline and identify optimization opportunities.
- Oversee the integration and management of data from various sources and storage systems, establishing processes and pipelines to produce cohesive datasets for analysis and modeling.
- Partner with Product Managers and Customer Experience Designers to develop a deep understanding of users and use cases and apply that knowledge to scoping and building new modules and features
- Design and develop data pipelines to automate repetitive tasks within data science and data engineering.
- Lead cross-functional teams, working across different groups to solve complex problems.
- Identify patterns and innovative solutions in existing spaces, consistently seeking opportunities to simplify, automate tasks, and build reusable components for multiple use cases and teams.
- Create data products that are well-modeled, thoroughly documented, and easy to understand and maintain.
- Lead projects in environments with undefined or ambiguous requirements.
- Mentor junior data engineers.
Ideal Candidate Qualifications:
- High proficiency in using Python or Scala, Spark, Hadoop platforms & tools (Hive, Impala, Oozie, Airflow, NiFi, Sqoop), and SQL to build Big Data products & platforms
- Extensive experience with Spark Processing engine.
- Proficiency in at least one modern programming language such as Python, Java, or Scala
- Strong Computer Science fundamentals in object-oriented design, data structures, algorithm design, problem solving, and complexity analysis.
- Ability to easily move between business, data management, and technical teams; ability to quickly intuit the business use case and identify technical solutions to enable it
- Working knowledge of software development paradigms alongside data engineering
- Experience with relational databases as well as NoSQL
- Experience in cloud technologies like Databricks/AWS/Azure
- Basic Shell scripting and knowledge of Linux/Unix systems
- Experience in designing & developing software at scale
- Strong written and verbal English communication skills.
**Corporate Security Responsibility**
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization; therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
+ Abide by Mastercard's security policies and practices;
+ Ensure the confidentiality and integrity of the information being accessed;
+ Report any suspected information security violation or breach, and
+ Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Lead Data Engineer

Bangalore, Karnataka Labcorp

Posted 1 day ago

Job Description

**Job Description:**
We are looking for a Lead Data Engineer with strong Databricks expertise who can also design and build accelerators for automation. The role requires hands-on development in Databricks (PySpark, Spark SQL, Delta Lake, Autoloader, DLT, etc.) along with experience in creating frameworks for test data generation, report validation, and backend validation. You will lead technical work, mentor team members, and build reusable solutions that speed up project delivery.
**Key Responsibilities:**
· Design, develop, and optimize ETL/ELT pipelines on Databricks using PySpark, Spark SQL, Delta Lake, and Delta Live Tables (DLT).
· Build automation accelerators such as:
Synthetic test data generators for large-scale data sets.
Automated report validation tools to compare dashboards with backend data.
Backend validation frameworks for reconciliation and data quality checks.
· Implement data validation frameworks (Great Expectations, Deequ, or custom solutions).
· Ensure data quality, lineage, and governance using Unity Catalog.
· Collaborate with QA, BI, and business teams to integrate accelerators into workflows.
· Drive best practices in coding, performance tuning, and cost optimization on Databricks.
· Lead and mentor data engineers, review code, and set technical standards.
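The "synthetic test data generator" accelerator mentioned above can be sketched in outline. The following is a minimal, hedged illustration in plain Python (the schema, field names, and value ranges are hypothetical, not from the posting); a production version would typically target Spark DataFrames instead:

```python
import random
import string
from datetime import date, timedelta

def make_rows(n, seed=42):
    """Generate n synthetic transaction rows.

    A fixed seed makes the data set reproducible, which is what lets
    downstream validation tests assert against stable expected values.
    """
    rng = random.Random(seed)
    start = date(2024, 1, 1)
    rows = []
    for i in range(n):
        rows.append({
            "txn_id": f"T{i:08d}",  # unique surrogate key
            "account": "".join(rng.choices(string.ascii_uppercase, k=6)),
            "amount": round(rng.uniform(1.0, 5000.0), 2),
            "txn_date": (start + timedelta(days=rng.randrange(365))).isoformat(),
        })
    return rows

rows = make_rows(1000)
```

Seeding the generator is the key design choice here: two runs with the same seed produce identical rows, so pipeline regression tests can be deterministic even over large generated data sets.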
**Must-Have Skills**
- 5-7 years of experience in data engineering, with at least 3-5 years of hands-on experience in Databricks.
· Databricks: PySpark, Spark SQL, Delta Lake, Delta Live Tables (DLT).
· Automation & Accelerators: Experience creating test data generators, validation frameworks, and report reconciliation tools.
· Programming languages: Strong in Python, PySpark, and SQL.
· Validation Tools: Open-source tools like Great Expectations, Deequ, or equivalent custom frameworks.
· Data Pipelines: Batch and streaming pipeline design.
· Data Governance: Unity Catalog for access, lineage, and compliance.
· DevOps/CI-CD: Git, Azure DevOps, Jenkins, or GitHub Actions for deployment automation.
· Cloud & Storage: Azure (preferred) / AWS / GCP with hands-on experience in ADLS/S3/GCS.
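As an illustration of the "backend validation framework for reconciliation" idea above, a minimal sketch (hypothetical column names, plain Python rather than Spark or Great Expectations) could recompute per-key aggregates from backend rows and compare them against report totals:

```python
from collections import defaultdict

def reconcile(report_totals, backend_rows, key="region", value="amount", tolerance=0.01):
    """Compare per-key totals in a report against totals recomputed from backend rows.

    Returns a list of (key, report_total, backend_total) tuples for every key
    whose discrepancy exceeds the tolerance.
    """
    backend_totals = defaultdict(float)
    for row in backend_rows:
        backend_totals[row[key]] += row[value]
    mismatches = []
    for k, reported in report_totals.items():
        actual = round(backend_totals.get(k, 0.0), 2)
        if abs(actual - reported) > tolerance:
            mismatches.append((k, reported, actual))
    return mismatches

backend = [
    {"region": "APAC", "amount": 100.0},
    {"region": "APAC", "amount": 50.5},
    {"region": "EMEA", "amount": 75.0},
]
report = {"APAC": 150.5, "EMEA": 80.0}  # EMEA total is deliberately off by 5.0
issues = reconcile(report, backend)  # -> [("EMEA", 80.0, 75.0)]
```

The tolerance parameter matters in practice: dashboard tools and warehouses round floating-point aggregates differently, so an exact-equality check would raise spurious mismatches.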
**Good-to-Have Skills**
·Streaming: Kafka, EventHub, or Kinesis.
·Infra-as-Code: Terraform for Databricks and cloud provisioning.
·Reporting Tools: Power BI, Tableau, or Looker (for validation accelerators).
·Testing: Exposure to pytest or unittest for automation.
**Qualifications:**
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
**Labcorp is proud to be an Equal Opportunity Employer:**
Labcorp strives for inclusion and belonging in the workforce and does not tolerate harassment or discrimination of any kind. We make employment decisions based on the needs of our business and the qualifications and merit of the individual. Qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), family or parental status, marital, civil union or domestic partnership status, sexual orientation, gender identity, gender expression, personal appearance, age, veteran status, disability, genetic information, or any other legally protected characteristic. Additionally, all qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law.
**We encourage all to apply**
If you are an individual with a disability who needs assistance using our online tools to search and apply for jobs, or needs an accommodation, please visit our accessibility site or contact us at Labcorp Accessibility. For more information about how we collect and store your personal data, please see our Privacy Statement.
 
