21 ETL Processes Jobs in Delhi

Associate Architect - Data Engineering

New Delhi, Delhi · Response Informatics (also listed for Narela and Delhi)

Posted 11 days ago


Job Description

About the Role:

We are seeking an experienced Data Architect to lead the transformation of enterprise data solutions, with a strong focus on migrating Alteryx workflows to Azure Databricks. The ideal candidate will have deep expertise in the Microsoft Azure ecosystem, including Azure Data Factory, Databricks, Synapse Analytics, and Microsoft Fabric, along with a strong background in data architecture, governance, and distributed computing. This role requires both strategic thinking and hands-on architectural leadership to ensure scalable, secure, and high-performance data solutions.


Key Responsibilities:

  • Define the overall migration strategy for transforming Alteryx workflows into scalable, cloud-native data solutions on Azure Databricks.
  • Architect end-to-end data frameworks leveraging Databricks, Delta Lake, Azure Data Lake, and Synapse.
  • Establish best practices, standards, and governance frameworks for pipeline design, orchestration, and data lifecycle management.
  • Guide engineering teams in re-engineering Alteryx workflows into distributed Spark-based architectures (a minimal sketch follows this list).
  • Collaborate with business stakeholders to ensure solutions align with analytics, reporting, and advanced AI/ML initiatives.
  • Oversee data quality, lineage, and security compliance across the data ecosystem.
  • Drive CI/CD adoption, automation, and DevOps practices for Azure Databricks and related services.
  • Provide architectural leadership, design reviews, and mentorship to engineering and analytics teams.
  • Optimize solutions for performance, scalability, and cost-efficiency within Azure.
  • Participate in enterprise architecture forums and influence data strategy across the organization.
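To make the re-engineering concrete, here is a minimal PySpark sketch of the kind of Delta Lake pipeline a typical Alteryx filter/join/summarize workflow becomes on Databricks. This is an illustrative sketch only; the paths, table names, and columns are hypothetical, not part of the posting.

```python
# Minimal sketch: an Alteryx-style filter + join + summarize workflow
# re-engineered as a PySpark job writing to Delta Lake on Databricks.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily").getOrCreate()

orders = spark.read.format("delta").load("/mnt/datalake/bronze/orders")
customers = spark.read.format("delta").load("/mnt/datalake/bronze/customers")

# Equivalent of Alteryx Filter -> Join -> Summarize tools
daily_revenue = (
    orders.filter(F.col("status") == "COMPLETE")
    .join(customers, "customer_id", "inner")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

# Delta output gives downstream consumers ACID guarantees and time travel
(daily_revenue.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/datalake/silver/daily_revenue"))
```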


Required Skills and Qualifications:

  • 10+ years of experience in data architecture, engineering, or solution design.
  • Proven expertise in Alteryx workflows and their modernization into Azure Databricks (Spark, PySpark, SQL, Delta Lake).
  • Deep knowledge of the Microsoft Azure data ecosystem:
    ◦ Azure Data Factory (ADF)
    ◦ Azure Synapse Analytics
    ◦ Microsoft Fabric
    ◦ Azure Databricks
  • Strong background in data governance, lineage, security, and compliance frameworks.
  • Demonstrated experience in architecting data lakes, data warehouses, and analytics platforms.
  • Proficiency in Python, SQL, and Apache Spark for prototyping and design validation.
  • Excellent leadership, communication, and stakeholder management skills.


Preferred Qualifications:

  • Microsoft Azure certifications (e.g., Azure Solutions Architect Expert, Azure Data Engineer Associate).
  • Experience leading large-scale migration programs or modernization initiatives.
  • Familiarity with enterprise architecture frameworks (TOGAF, Zachman).
  • Exposure to machine learning enablement on Azure Databricks.
  • Strong understanding of Agile delivery and working in multi-disciplinary teams.


Analytics & Insights Manager (Data Engineering)

New Delhi, Delhi · Roche

Posted 8 days ago


Job Description

At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.
**The Position**
A healthier future. It's what drives us to innovate. To continuously advance science and ensure everyone has access to the healthcare they need today and for generations to come. Creating a world where we all have more time with the people we love. That's what makes us Roche.
Healthcare is evolving, and Global Procurement (GP) is responding by continuously striving for the highest possible performance, taking innovative and strategic approaches to business and supplier partnerships. Global Procurement proactively manages the entire supplier ecosystem, making a vital contribution to improving health outcomes, reducing costs for patients and global healthcare systems, and ensuring that Roche continues doing now what patients need next.
**The Opportunity:**
This role sits within the Enablement Chapter where we drive operational and financial effectiveness in Global Procurement by advancing talent growth and development, delivering actionable insights, fostering high engagement, and ensuring robust performance management. Our team is dedicated to enabling better outcomes and providing comprehensive support to GP leadership and chapters.
As an Analytics & Insights Manager (Data Engineering) in A&I Data Solutions, you will bring structured thinking, facilitation, execution, and focus to procurement enabling and functional capabilities such as analytics, operations, governance, and strategic projects. Using your specialized knowledge or expertise in data engineering and general procurement, the role proactively identifies and drives strategies and approaches that positively impact capability and functional goals. This role supports data engineering and analytics efforts in Global Procurement by maintaining data pipelines and helping to improve data accessibility and accuracy.
You will collaborate with internal procurement, finance, and other relevant colleagues to understand needs and gather feedback to develop, enhance, or deploy functional enabling services and solutions that increase procurement's effectiveness and efficiency.
You will work closely with other team members to align on requirements, develop, validate, and deploy services, solutions, and frameworks to the broader procurement function.
As an Analytics & Insights Manager (Data Engineering), you will play a variety of roles according to your experience, knowledge, and general business requirements, including but not limited to:
**Responsibilities include:**
+ Managing the transition of procurement data sources between Snowflake databases while ensuring data integrity (a minimal sketch of such a check follows this list).
+ Facilitating the integration of diverse procurement data systems and managing data pipelines in Snowflake to streamline data availability and accessibility.
+ Developing and optimizing sophisticated SQL queries for data manipulation, transformation, and reporting tasks.
+ Managing and maintaining complex data mappings to ensure accuracy and reliability.
+ Collaborating seamlessly with key stakeholders across the procurement function to gather data requirements.
+ Addressing data-related issues with advanced troubleshooting techniques.
+ Leveraging GitLab and other orchestration tools for version control and collaboration with key stakeholders, ensuring best practices in code management and CI/CD automation.
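As context for the Snowflake work described above, the following is a minimal, hypothetical sketch of moving a table between Snowflake databases and reconciling row counts with snowflake-connector-python. The account, credentials, and object names are placeholders, not real systems.

```python
# Hypothetical sketch: copy a procurement table between Snowflake
# databases, then reconcile row counts to confirm data integrity.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder
    user="etl_user",             # placeholder
    password="***",              # use a secrets manager in practice
    warehouse="PROCUREMENT_WH",  # placeholder
)
cur = conn.cursor()

# Recreate the table in the target database from the source database
cur.execute("""
    CREATE OR REPLACE TABLE NEW_DB.PROCUREMENT.SPEND AS
    SELECT * FROM OLD_DB.PROCUREMENT.SPEND
""")

# A simple integrity check: row counts must match on both sides
src = cur.execute("SELECT COUNT(*) FROM OLD_DB.PROCUREMENT.SPEND").fetchone()[0]
tgt = cur.execute("SELECT COUNT(*) FROM NEW_DB.PROCUREMENT.SPEND").fetchone()[0]
assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"

conn.close()
```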
**Who you are:**
+ You hold a university degree in Computer Science, Information Systems, or related disciplines.
+ You have 2-3 years of work experience, ideally in data engineering.
+ You have procurement analytics experience (preferred).
+ You have hands-on experience with Snowflake environments.
+ You are proficient in ETL/ELT technologies, DataOps and tools such as Talend/dbt/GitLab.
+ You have expertise in SQL and preferably Python for database querying and data transformation.
+ You have knowledge of cloud-based data solutions and infrastructure.
+ You have an understanding of data mapping and data quality management (preferred).
+ You have experience with Git for version control and GitLab for CI/CD automation (not required but advantageous).
+ You have experience with workflow automation tools such as Automate Now or Airflow (preferred).
+ You demonstrate curiosity, active listening and a willingness to experiment and test new ideas when appropriate, with the focus very much on continuous learning and improvement.
+ You are open-minded and inclusive, generously sharing ideas and knowledge, while being receptive to ideas and feedback from others.
+ You are fluent in English to a Business level.
Join our team and enable the strong capability expertise needed to meet the evolving needs of our customers.
**Who we are**
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact.
Let's build a healthier future, together.
**Roche is an Equal Opportunity Employer.**

Senior Manager - Data Engineering Lead

New Delhi, Delhi · DIAGEO India (also listed for Narela and Delhi)

Posted 12 days ago


Job Description

Job Title: Senior Manager - Data Engineering Lead


Qualification: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.


Required skillset:

  • Experience in data engineering.
  • Proven experience in cloud platforms (AWS, Azure, or GCP) and data services (Glue, Synapse, BigQuery, Databricks, etc.).
  • Hands-on experience with tools like Apache Spark, Kafka, Airflow, dbt, and modern orchestration platforms.

Technical Skills:

  • Proficiency in SQL and Python/Scala/Java.
  • Strong understanding of modern data warehouse and lakehouse platforms (e.g., Snowflake, Redshift, BigQuery).
  • Familiarity with CI/CD, Infrastructure as Code (e.g., Terraform), and DevOps for data.

Nice to Have:

  • Prior experience working in a regulated industry (alcohol, pharma, tobacco, etc.).
  • Exposure to demand forecasting, route-to-market analytics, or distributor performance management.
  • Knowledge of CRM, ERP, or supply chain systems (e.g., Salesforce, SAP, Oracle).
  • Familiarity with marketing attribution models and campaign performance tracking.


Preferred Attributes:

  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder engagement abilities.
  • Passion for data-driven innovation and delivering business impact.
  • Certification in cloud platforms or data engineering (e.g., Google Cloud Professional Data Engineer).

Key Accountabilities:

  • Design and implement scalable, high-performance data architecture solutions aligned with enterprise strategy.
  • Define standards and best practices for data modelling, metadata management, and data governance.
  • Collaborate with business stakeholders, data scientists, and application architects to align data infrastructure with business needs.
  • Guide the selection of technologies, including cloud-native and hybrid data architecture patterns (e.g., Lambda/Kappa architectures).
  • Lead the development, deployment, and maintenance of end-to-end data pipelines using ETL/ELT frameworks (a minimal orchestration sketch follows this section).
  • Manage ingestion from structured and unstructured data sources (APIs, files, databases, streaming sources).
  • Optimize data workflows for performance, reliability, and cost efficiency.
  • Ensure data quality, lineage, cataloging, and security through automated validation and monitoring.


  • Oversee data lake design, implementation, and daily operations (e.g., Azure Data Lake, AWS S3, GCP BigLake).
  • Implement access controls, data lifecycle management, and partitioning strategies.
  • Monitor and manage performance, storage costs, and data availability in real time.
  • Ensure compliance with enterprise data policies and regulatory requirements (e.g., GDPR, CCPA).


  • Lead and mentor a team of data engineers and architects.
  • Establish a culture of continuous improvement, innovation, and operational excellence.
  • Work closely with IT, DevOps, and InfoSec teams to ensure secure and scalable infrastructure.
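As a rough illustration of the pipeline orchestration work listed above, here is a minimal Airflow DAG sketch that chains ingestion, a dbt run, and dbt tests. The DAG id, schedule, paths, and commands are hypothetical, not part of the posting.

```python
# Hypothetical sketch: a daily ELT pipeline as an Airflow DAG.
# Ingest raw files, transform with dbt, then run data-quality tests.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="sales_elt_daily",       # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_files",
        bash_command="python /opt/pipelines/ingest.py --date {{ ds }}",
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/sales",
    )
    test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/sales",
    )

    # Ingestion must finish before transformation; tests gate the output
    ingest >> transform >> test
```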


Flexible Working Statement: Flexibility is key to our success. From part-time and compressed hours to different locations, our people work flexibly in ways to suit them. Talk to us about what flexibility means to you so that you’re supported from day one.


Diversity statement: Our purpose is to celebrate life, every day, everywhere. And creating an inclusive culture, where everyone feels valued and that they can belong, is a crucial part of this.

We embrace diversity in the broadest possible sense. This means that you’ll be welcomed and celebrated for who you are just by being you. You’ll be part of and help build and champion an inclusive culture that celebrates people of different gender, ethnicity, ability, age, sexual orientation, social class, educational backgrounds, experiences, mindsets, and more.

Our ambition is to create the best performing, most trusted and respected consumer products companies in the world. Join us and help transform our business as we take our brands to the next level and build new ones as part of shaping the next generation of celebrations for consumers around the world.


Sr Analytics & Insights Manager (Data Engineering)

New Delhi, Delhi · Roche

Posted 8 days ago


Job Description

At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.
**The Position**
A healthier future. It's what drives us to innovate. To continuously advance science and ensure everyone has access to the healthcare they need today and for generations to come. Creating a world where we all have more time with the people we love. That's what makes us Roche.
Healthcare is evolving, and Global Procurement (GP) is responding by continuously striving for the highest possible performance, taking innovative and strategic approaches to business and supplier partnerships. Global Procurement proactively manages the entire supplier ecosystem, making a vital contribution to improving health outcomes, reducing costs for patients and global healthcare systems, and ensuring that Roche continues doing now what patients need next.
**The Opportunity:**
This role sits within the Enablement Chapter where we drive operational and financial effectiveness in Global Procurement by advancing talent growth and development, delivering actionable insights, fostering high engagement, and ensuring robust performance management. Our team is dedicated to enabling better outcomes and providing comprehensive support to GP leadership and chapters.
As a Senior Analytics & Insights Manager (Data Engineering) in A&I Data Solutions, you will bring structured thinking, facilitation, execution, and focus to procurement enabling and functional capabilities such as analytics, operations, governance, and strategic projects. The role will lead data engineering efforts in Global Procurement, streamlining data systems and analytics to boost operational efficiency. Using your specialized knowledge and in-depth expertise in data engineering and general procurement, you will proactively identify and drive strategies and approaches that positively impact capability and functional goals.
You will collaborate with internal procurement, finance, and other relevant colleagues to align on needs and opportunity identification to develop, enhance, or deploy functional enabling services and solutions that increase procurement's effectiveness and efficiency.
You will work closely with other team members, either as a peer coach, project or workstream lead, or team lead to embed best practices and deploy services, solutions, and frameworks to the broader procurement function.
As a Senior Analytics & Insights Manager (Data Engineering), you will play a variety of roles according to your experience, knowledge, and general business/team requirements, including but not limited to:
**Responsibilities include:**
+ Managing the transition of procurement data sources between databases (e.g., Snowflake) while ensuring data integrity.
+ Facilitating the integration of diverse procurement data systems and managing data pipelines in Snowflake to streamline data availability and accessibility.
+ Developing and optimizing sophisticated SQL queries for data manipulation, transformation, and reporting tasks.
+ Managing and maintaining complex data mappings to ensure accuracy and reliability (a minimal validation sketch follows this list).
+ Collaborating seamlessly with key stakeholders across the procurement function to gather data requirements.
+ Addressing data-related issues with advanced troubleshooting techniques.
+ Leveraging GitLab and other orchestration tools for version control and collaboration with key stakeholders, ensuring best practices in code management and CI/CD automation.
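To illustrate the data-mapping upkeep mentioned above, here is a minimal, hypothetical sketch of two integrity checks on a supplier-to-category mapping table in Snowflake; every object name and credential is a placeholder.

```python
# Hypothetical sketch: validate a supplier-category mapping table
# before it feeds reporting. Checks for unmapped and duplicate keys.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user",
    password="***", warehouse="PROCUREMENT_WH",  # placeholders
)
cur = conn.cursor()

# 1) Every supplier appearing in spend data should have a mapping
unmapped = cur.execute("""
    SELECT COUNT(DISTINCT s.supplier_id)
    FROM ANALYTICS.SPEND s
    LEFT JOIN ANALYTICS.SUPPLIER_CATEGORY_MAP m
      ON s.supplier_id = m.supplier_id
    WHERE m.supplier_id IS NULL
""").fetchone()[0]

# 2) No supplier should map to more than one category
duplicated = cur.execute("""
    SELECT COUNT(*) FROM (
        SELECT supplier_id
        FROM ANALYTICS.SUPPLIER_CATEGORY_MAP
        GROUP BY supplier_id
        HAVING COUNT(*) > 1
    ) dup
""").fetchone()[0]

print(f"unmapped suppliers: {unmapped}, duplicate mappings: {duplicated}")
conn.close()
```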
**Who you are:**
+ You hold a university degree in Computer Science, Information Systems, or related disciplines.
+ You have 5-7 years of work experience with at least 3 years of experience in data engineering.
+ You have procurement analytics experience (preferred).
+ You have hands-on experience with Snowflake environments.
+ You are proficient in ETL/ELT technologies, DataOps and tools such as Talend/dbt/GitLab.
+ You have expertise in SQL and preferably Python for database querying and data transformation.
+ You have knowledge of cloud-based data solutions and infrastructure.
+ You have an understanding of data mapping and data quality management (preferred).
+ You have experience with Git for version control and GitLab for CI/CD automation (not required but advantageous).
+ You are experienced with workflow automation tools such as Automate Now or Airflow (preferred).
+ You demonstrate curiosity, active listening, and a willingness to experiment and test new ideas when appropriate, with the focus very much on continuous learning and improvement.
+ You are open-minded and inclusive, generously sharing ideas and knowledge, while being receptive to ideas and feedback from others.
+ You are fluent in English to a Business level.
Join our team and enable the strong capability expertise needed to meet the evolving needs of our customers.
**Who we are**
A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science, ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact.
Let's build a healthier future, together.
**Roche is an Equal Opportunity Employer.**

ETL Developer

Delhi, Delhi · IntraEdge (also listed for New Delhi)

Posted 17 days ago

Job Viewed

Tap Again To Close

Job Description


Job Title: ETL Developer – DataStage, AWS, Snowflake

Experience: 5–7 Years

Location: Remote

Job Type: Full-time

About the Role

We are looking for a talented and motivated ETL Developer / Senior Developer to join our data engineering team. You will work on building scalable and efficient data pipelines using IBM DataStage (on Cloud Pak for Data), AWS Glue, and Snowflake. You will collaborate with architects, business analysts, and data modelers to ensure timely and accurate delivery of critical data assets supporting analytics and AI/ML use cases.


Key Responsibilities

  • Design, develop, and maintain ETL pipelines using IBM DataStage (CP4D) and AWS Glue/Lambda for ingestion from varied sources such as flat files, APIs, Oracle, and DB2 (a minimal Glue job sketch follows this list).
  • Build and optimize data flows for loading curated datasets into Snowflake, leveraging best practices for schema design, partitioning, and transformation logic.
  • Participate in code reviews, performance tuning, and defect triage sessions.
  • Work closely with data governance teams to ensure lineage, privacy tagging, and quality controls are embedded within pipelines.
  • Contribute to CI/CD integration of ETL components using Git, Jenkins, and parameterized job configurations.
  • Troubleshoot and resolve issues in QA/UAT/Production environments as needed.
  • Adhere to agile delivery practices, sprint planning, and documentation requirements.
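For illustration, here is a minimal sketch of the kind of AWS Glue (PySpark) ingestion job described above: it reads flat files from S3, applies basic cleansing, and writes curated Parquet for downstream loading into Snowflake. The bucket names, job arguments, and columns are hypothetical.

```python
# Hypothetical sketch: an AWS Glue PySpark job that ingests CSV files
# from S3, cleanses them, and writes partitioned Parquet output.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# --run_date is a hypothetical job argument passed at submission time
args = getResolvedOptions(sys.argv, ["JOB_NAME", "run_date"])
glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

raw = (spark.read.option("header", "true")
       .csv(f"s3://raw-bucket/orders/{args['run_date']}/"))

# Basic cleansing: drop duplicate keys, enforce types, reject null keys
curated = (raw.dropDuplicates(["order_id"])
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .filter(F.col("order_id").isNotNull()))

# Partitioned Parquet that a Snowflake external stage / COPY INTO
# can pick up downstream
(curated.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://curated-bucket/orders/"))
```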

Required Skills and Experience

  • 4+ years of experience in ETL development, with at least 1–2 years in IBM DataStage (preferably the CP4D version).
  • Hands-on experience with AWS Glue (PySpark or Spark) and AWS Lambda for event-based processing.
  • Experience working with Snowflake: loading strategies, streams and tasks, zero-copy cloning, and performance tuning.
  • Proficiency in SQL, Unix scripting, and basic Python for data handling or automation.
  • Familiarity with S3, version control systems (Git), and job orchestration tools.
  • Experience with data profiling, cleansing, and quality validation routines.
  • Understanding of data lake/data warehouse architectures and DevOps practices.

Good to Have

  • Experience with Collibra, BigID, or other metadata/governance tools
  • Exposure to Data Mesh/Data Domain models
  • Experience with agile/Scrum delivery and Jira/Confluence tools
  • AWS or Snowflake certification is a plus
 
