21 Data Pipelines jobs in Delhi
Data Integration & Modeling Specialist
Posted today
Job Description
Job Title: Data Integration & Modeling Specialist
Job Type: Contract
Location: Remote
Duration: 6 Months
Job Summary:
We are seeking a highly skilled Data Integration & Modeling Specialist with hands-on experience in developing common metamodels, defining integration specifications, and working with semantic web technologies and various data formats. The ideal candidate will bring deep technical expertise and a collaborative mindset to support enterprise-level data integration and standardization initiatives.
Key Responsibilities:
Develop common metamodels by integrating requirements across diverse systems and organizations.
Define integration specifications, establish data standards, and develop logical and physical data models.
Collaborate with stakeholders to align data architectures with organizational needs and industry best practices.
Implement and govern semantic data solutions using RDF and SPARQL (a query sketch follows this list).
Perform data transformations and scripting using TCL, Python, and Java.
Work with multiple data formats including FRL, VRL, HRL, XML, and JSON to support integration and processing pipelines.
Document technical specifications and provide guidance on data standards and modeling best practices.
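For illustration, a minimal sketch of the kind of RDF/SPARQL work described above, using Python and rdflib; the Turtle file name and the query are assumptions for the example, not details from the posting.

```python
# Load a (hypothetical) shared metamodel serialized as Turtle and list the
# classes it defines, together with their labels where present.
from rdflib import Graph

g = Graph()
g.parse("exchange_model.ttl", format="turtle")  # placeholder file name

query = """
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls ?label
    WHERE {
        ?cls a rdfs:Class .
        OPTIONAL { ?cls rdfs:label ?label }
    }
"""
for row in g.query(query):
    print(row.cls, row.label)
```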
Required Qualifications:
3+ years of experience (within the last 8 years) in developing common metamodels, preferably using NIEM standards.
3+ years of experience (within the last 8 years) in:
Defining integration specifications
Developing data models
Governing data standards
2+ years of recent experience with:
Tool Command Language (TCL)
Python
Java
2+ years of experience with:
Resource Description Framework (RDF)
SPARQL Query Language
2+ years of experience working with:
Fixed Record Layout (FRL)
Variable Record Layout (VRL)
Hierarchical Record Layout (HRL)
XML
JSON
ETL Developer
Posted 17 days ago
Job Description
Job Title: ETL Developer – DataStage, AWS, Snowflake
Experience: 5–7 Years
Location: Remote
Job Type: Full-time
About the Role
We are looking for a talented and motivated ETL Developer / Senior Developer to join our data engineering team. You will work on building scalable and efficient data pipelines using IBM DataStage (on Cloud Pak for Data), AWS Glue, and Snowflake. You will collaborate with architects, business analysts, and data modelers to ensure timely and accurate delivery of critical data assets supporting analytics and AI/ML use cases.
Key Responsibilities
- Design, develop, and maintain ETL pipelines using IBM DataStage (CP4D) and AWS Glue/Lambda for ingestion from varied sources such as flat files, APIs, Oracle, and DB2 (a minimal Glue job sketch follows this list).
- Build and optimize data flows for loading curated datasets into Snowflake, leveraging best practices for schema design, partitioning, and transformation logic.
- Participate in code reviews, performance tuning, and defect triage sessions.
- Work closely with data governance teams to ensure lineage, privacy tagging, and quality controls are embedded within pipelines.
- Contribute to CI/CD integration of ETL components using Git, Jenkins, and parameterized job configurations.
- Troubleshoot and resolve issues in QA/UAT/Production environments as needed.
- Adhere to agile delivery practices, sprint planning, and documentation requirements.
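For illustration, a minimal AWS Glue (PySpark) job of the kind described in the first responsibility above. The catalog database, table, and S3 bucket names are assumptions for the sketch, not details from the posting.

```python
# Minimal AWS Glue (PySpark) job sketch: read a raw table from the Glue Data
# Catalog, apply a light transformation, and write curated Parquet to S3.
# "raw_db", "orders", and the bucket path are illustrative placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Light transformation: drop rows missing the key and standardise a column name
df = (raw.toDF()
         .dropna(subset=["order_id"])
         .withColumnRenamed("ord_dt", "order_date"))

# Write curated Parquet back to S3 for downstream loading into Snowflake
df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")

job.commit()
```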
Required Skills and Experience
- 4+ years of experience in ETL development, with at least 1–2 years in IBM DataStage (preferably the CP4D version).
- Hands-on experience with AWS Glue (PySpark or Spark) and AWS Lambda for event-based processing.
- Experience working with Snowflake: loading strategies, Streams and Tasks, zero-copy cloning, and performance tuning (see the load sketch after this list).
- Proficiency in SQL, Unix scripting, and basic Python for data handling or automation.
- Familiarity with S3, version control systems (Git), and job orchestration tools.
- Experience with data profiling, cleansing, and quality validation routines.
- Understanding of data lake/data warehouse architectures and DevOps practices.
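As a companion to the Snowflake requirement above, a hedged loading sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders.

```python
# Sketch of a bulk load into Snowflake via COPY INTO from an external stage.
# All connection parameters and object names below are assumed placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password=os.environ["SNOWFLAKE_PASSWORD"],  # hypothetical env variable
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    cur = conn.cursor()
    # Bulk-load curated Parquet files from an external stage into a target table
    cur.execute("""
        COPY INTO CURATED.ORDERS
        FROM @CURATED_STAGE/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```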
Good to Have
- Experience with Collibra, BigID, or other metadata/governance tools
- Exposure to Data Mesh/Data Domain models
- Experience with agile/Scrum delivery and Jira/Confluence tools
- AWS or Snowflake certification is a plus
ETL Developer
Posted 22 days ago
Job Description
About PTR Global
PTR Global is a leader in providing innovative workforce solutions, dedicated to optimizing talent acquisition and management processes. Our commitment to excellence has earned us the trust of businesses looking to enhance their talent strategies. We cultivate a dynamic and collaborative environment that empowers our employees to excel and contribute to our clients' success.
Job Summary
We are seeking a highly skilled ETL Developer to join our team in Chennai. The ideal candidate will be responsible for designing, developing, and maintaining ETL processes, as well as data warehouse design and modeling, to support our data integration and business intelligence initiatives. This role requires proficiency in T-SQL, Azure Data Factory (ADF), and SSIS, along with excellent problem-solving and communication skills.
Responsibilities
- Design, develop, and maintain ETL processes to support data integration and business intelligence initiatives.
- Utilize T-SQL to write complex queries and stored procedures for data extraction and transformation.
- Implement and manage ETL processes using SSIS (SQL Server Integration Services).
- Design and model data warehouses to support reporting and analytics needs.
- Ensure data accuracy, quality, and integrity through effective testing and validation procedures (a simple validation sketch follows this list).
- Collaborate with business analysts and stakeholders to understand data requirements and deliver solutions that meet their needs.
- Monitor and troubleshoot ETL processes to ensure optimal performance and resolve any issues promptly.
- Document ETL processes, workflows, and data mappings to ensure clarity and maintainability.
- Stay current with industry trends and best practices in ETL development, data integration, and data warehousing.
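To illustrate the validation responsibility above, a simple sketch in Python using pyodbc against SQL Server; the driver string, schema, and table names are assumptions rather than details from the posting.

```python
# Compare row counts between a staging table and its warehouse target as a
# basic post-load validation step. Connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example-server;"
    "DATABASE=ExampleDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

staging_count = cur.execute("SELECT COUNT(*) FROM stg.Orders").fetchone()[0]
target_count = cur.execute("SELECT COUNT(*) FROM dw.FactOrders").fetchone()[0]

if staging_count != target_count:
    raise ValueError(
        f"Row count mismatch: staging={staging_count}, target={target_count}"
    )

conn.close()
```

In practice a check like this would typically run as a post-load step within SSIS or ADF; the Python form here is only a compact illustration.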
Must Haves
- At least 4 years of experience as an ETL Developer or in a similar role.
- Proficiency in T-SQL for writing complex queries and stored procedures.
- Experience with SSIS (SQL Server Integration Services) for developing and managing ETL processes.
- Knowledge of ADF (Azure Data Factory) and its application in ETL processes.
- Experience in data warehouse design and modeling.
- Knowledge of Microsoft's Azure cloud suite, including Data Factory, Data Storage, Blob Storage, Power BI, and Power Automate.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
- Strong attention to detail and commitment to data quality.
- Bachelor's degree in Computer Science, Information Technology, or a related field is preferred.
Associate Architect - Data Engineering
Posted 11 days ago
Job Description
About the Role:
We are seeking an experienced Data Architect to lead the transformation of enterprise data solutions, with a strong focus on migrating Alteryx workflows into Azure Databricks. The ideal candidate will have deep expertise in the Microsoft Azure ecosystem, including Azure Data Factory, Databricks, Synapse Analytics, and Microsoft Fabric, as well as a strong background in data architecture, governance, and distributed computing. This role requires both strategic thinking and hands-on architectural leadership to ensure scalable, secure, and high-performance data solutions.
Key Responsibilities:
- Define the overall migration strategy for transforming Alteryx workflows into scalable, cloud-native data solutions on Azure Databricks.
- Architect end-to-end data frameworks leveraging Databricks, Delta Lake, Azure Data Lake, and Synapse.
- Establish best practices, standards, and governance frameworks for pipeline design, orchestration, and data lifecycle management.
- Guide engineering teams in re-engineering Alteryx workflows into distributed Spark-based architectures (a minimal sketch follows this list).
- Collaborate with business stakeholders to ensure solutions align with analytics, reporting, and advanced AI/ML initiatives.
- Oversee data quality, lineage, and security compliance across the data ecosystem.
- Drive CI/CD adoption, automation, and DevOps practices for Azure Databricks and related services.
- Provide architectural leadership, design reviews, and mentorship to engineering and analytics teams.
- Optimize solutions for performance, scalability, and cost-efficiency within Azure.
- Participate in enterprise architecture forums and influence data strategy across the organization.
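As a rough illustration of the Alteryx-to-Databricks re-engineering mentioned above, a minimal PySpark/Delta Lake sketch; table paths, column names, and the target schema are placeholders.

```python
# One Alteryx-style step (Input -> Join -> Filter -> Output) re-expressed as a
# PySpark job on Databricks with Delta Lake. All names below are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("alteryx_workflow_migration").getOrCreate()

# Equivalent of Alteryx Input tools: read two curated Delta tables
orders = spark.read.format("delta").load("/mnt/curated/orders")
customers = spark.read.format("delta").load("/mnt/curated/customers")

# Equivalent of Join + Filter tools: inner join, keep recent records only
joined = (
    orders.join(customers, on="customer_id", how="inner")
          .where(F.col("order_date") >= "2024-01-01")
)

# Equivalent of an Output tool: persist the result as a governed Delta table
# (assumes the "analytics" schema already exists)
joined.write.format("delta").mode("overwrite").saveAsTable("analytics.recent_orders")
```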
Required Skills and Qualifications:
- 10+ years of experience in data architecture, engineering, or solution design.
- Proven expertise in Alteryx workflows and their modernization into Azure Databricks (Spark, PySpark, SQL, Delta Lake).
- Deep knowledge of the Microsoft Azure data ecosystem: Azure Data Factory (ADF), Azure Synapse Analytics, Microsoft Fabric, and Azure Databricks.
- Strong background in data governance, lineage, security, and compliance frameworks.
- Demonstrated experience in architecting data lakes, data warehouses, and analytics platforms.
- Proficiency in Python, SQL, and Apache Spark for prototyping and design validation.
- Excellent leadership, communication, and stakeholder management skills.
Preferred Qualifications:
- Microsoft Azure certifications (e.g., Azure Solutions Architect Expert, Azure Data Engineer Associate).
- Experience leading large-scale migration programs or modernization initiatives.
- Familiarity with enterprise architecture frameworks (TOGAF, Zachman).
- Exposure to machine learning enablement on Azure Databricks.
- Strong understanding of Agile delivery and working in multi-disciplinary teams.