21,900 ETL jobs in India
ETL Data Warehousing
Posted today
Job Description
- Pune, India
Experience Required:
- 5-8 years
Key Skills Required:
- Proficiency in ETL development, SQL, and data warehousing concepts.
- Familiarity with AWS platforms and Python programming.
- Basic understanding of ITIL frameworks.
Qualifications:
- Relevant experience in developing and managing ETL data warehousing solutions.
Notice Period:
- Immediate joiners or those with a notice period of up to 30 days.
Data Engineer - ETL

Posted 13 days ago
Job Description
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a Data Engineer - ETL to join our team in Bangalore, Karnataka (IN-KA), India (IN).
**Job Duties:**
- Migrate ETL workflows from SAP BODS to AWS Glue/dbt/Talend.
- Develop and maintain scalable ETL pipelines in AWS (a minimal sketch follows this list).
- Write PySpark scripts for large-scale data processing.
- Optimize SQL queries and transformations for AWS PostgreSQL.
- Work with Cloud Engineers to ensure smooth deployment and performance tuning.
- Integrate data pipelines with existing Unix systems.
- Document ETL processes and migration steps.
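To make the duties concrete, here is a minimal sketch of the kind of Glue-based pipeline described above. All names are hypothetical: `sales_db`, `orders_raw`, the JDBC endpoint, and the credentials are placeholders, not details from this posting.

```python
# Minimal AWS Glue job sketch: read a cataloged source, clean it with
# PySpark, and load it into PostgreSQL over JDBC. All names are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog (hypothetical names).
src = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders_raw"
)

# Transform with plain PySpark: drop duplicates and obviously bad rows.
df = src.toDF()
clean = df.dropDuplicates(["order_id"]).where(df["order_total"] > 0)

# Load into AWS-hosted PostgreSQL via JDBC (endpoint and credentials are
# placeholders; in practice they would come from a Glue connection or
# AWS Secrets Manager).
(clean.write.format("jdbc")
    .option("url", "jdbc:postgresql://dw.example.internal:5432/analytics")
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "<from-secrets-manager>")
    .mode("append")
    .save())

job.commit()
```

The `dropDuplicates`/`where` step merely stands in for whatever business rules a real BODS-to-Glue migration would carry over.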
**Minimum Skills Required:**
- Strong hands-on experience with SAP BODS.
- Proficiency in PySpark and Python scripting.
- Experience with AWS PostgreSQL (schema design, performance tuning, migration).
- Strong SQL and data modelling skills.
- Experience with Unix/Linux and shell scripting.
- Knowledge of data migration best practices and performance optimization.
- Experience migrating mappings to AWS Glue/dbt/Talend is a plus.
**About NTT DATA**
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form.
**_NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here._**
ETL Data Engineer
Posted today
Job Description
Please rate the candidate from 1 to 5 (1 = lowest, 5 = highest) in these areas:
- Big Data
- PySpark
- AWS
- Redshift
Position Summary
We are looking for experienced ETL Developers and Data Engineers to ingest and analyze data from multiple enterprise sources into Adobe Experience Platform.
Requirements
- About 4-6 years of professional technology experience, mostly focused on the following:
- 4+ years of experience developing data ingestion pipelines using PySpark (batch and streaming).
- 4+ years of experience with multiple data-engineering services on AWS, e.g. Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, Redshift.
- 1+ years of experience working with Redshift, especially the following (a brief sketch follows this list):
  - Loading data from various sources, e.g. S3 buckets and on-prem data sources, into Redshift.
  - Optimizing data ingestion into Redshift.
  - Designing, developing and optimizing queries on Redshift using SQL or PySpark SQL.
  - Designing tables in Redshift (distribution keys, compression, vacuuming, etc.).
- Experience developing applications that consume services exposed as REST APIs.
- Experience and ability to write and analyze complex, performant SQL.
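As a rough illustration of the Redshift items above (table design, bulk ingestion from S3, and post-load maintenance), here is a minimal sketch; the cluster endpoint, schema, bucket, and IAM role are invented for the example, not taken from the posting:

```python
# Sketch: create a Redshift table with an explicit distribution/sort design,
# bulk-load it from S3 with COPY, and run routine maintenance afterwards.
# All identifiers (cluster endpoint, schema, bucket, IAM role) are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="<secret>",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block

with conn.cursor() as cur:
    # Distribution key co-locates rows per user; sort key speeds time filters.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS analytics.events (
            user_id    BIGINT,
            event_ts   TIMESTAMP,
            event_name VARCHAR(256)
        )
        DISTSTYLE KEY DISTKEY (user_id)
        SORTKEY (event_ts);
    """)

    # COPY is the idiomatic bulk path into Redshift (much faster than INSERTs).
    cur.execute("""
        COPY analytics.events
        FROM 's3://example-bucket/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET;
    """)

    # Routine maintenance after large loads.
    cur.execute("VACUUM analytics.events;")
    cur.execute("ANALYZE analytics.events;")
```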
Special consideration given for:
- 2 years of experience developing and supporting ETL pipelines using enterprise-grade ETL tools such as Pentaho, Informatica or Talend.
- Good knowledge of data modelling (design patterns and best practices).
- Experience with reporting technologies (e.g. Tableau, Power BI).
What you'll do
- Analyze and understand customers' use cases and data sources; extract, transform and load data from a multitude of customer enterprise sources and ingest it into Adobe Experience Platform.
- Design and build data ingestion pipelines into the platform using PySpark.
- Ensure ingestion is designed and implemented in a performant manner to support the throughput and latency needed.
- Develop and test complex SQL to extract, analyze and report on the data ingested into Adobe Experience Platform (a small example follows this list).
- Ensure the SQL is implemented in line with best practices so that it is performant.
- Migrate platform configurations, including the data ingestion pipelines and SQL, across various sandboxes.
- Debug and resolve any issues reported with data ingestion, SQL or other functionality of the platform.
- Support Data Architects in implementing the data model in the platform.
- Contribute to the innovation charter and develop intellectual property for the organization.
- Present advanced features and complex use-case implementations at multiple forums.
- Attend regular scrum events or equivalent and provide updates on deliverables.
- Work independently across multiple engagements with little or no supervision.
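As a small illustration of the "complex SQL" analysis work described above, here is a window-function query of the sort commonly run over ingested event data, written in Spark SQL; the `events` table, its columns, and the input path are invented, and Adobe Experience Platform's own Query Service may differ in dialect:

```python
# Sketch: analytic SQL over an ingested events table using Spark SQL.
# Table, column, and path names are placeholders, not AEP schema.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Register a view over some ingested data (hypothetical location).
df = spark.read.parquet("s3://example-bucket/events/")
df.createOrReplaceTempView("events")

# 7-day rolling average of daily event counts per user.
report = spark.sql("""
    SELECT user_id,
           event_date,
           events_per_day,
           AVG(events_per_day) OVER (
               PARTITION BY user_id
               ORDER BY event_date
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS events_7d_avg
    FROM (
        SELECT user_id,
               CAST(event_ts AS DATE) AS event_date,
               COUNT(*) AS events_per_day
        FROM events
        GROUP BY user_id, CAST(event_ts AS DATE)
    ) AS daily
""")
report.show()
```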
StatusNeo - Data Engineer (ETL)
Posted today
Job Description
Role: ETL Developer
Location: Mumbai
Experience: 3-5 Years
Skills - ETL, BDM, Informatica, Data Integrator
Role Overview:
We are seeking a skilled ETL Developer with experience in Informatica, Big Data Management (BDM), and Data Integrator. The ideal candidate will have a strong background in data extraction, transformation, and loading (ETL) processes, with a focus on optimizing data integration solutions for complex data environments. You will play a critical role in designing and implementing ETL workflows to support our business intelligence and data warehousing initiatives.
Key Responsibilities:
- Design, develop, and maintain ETL processes using Informatica, BDM, and Data Integrator.
- Collaborate with data architects and business analysts to understand data requirements and translate them into ETL solutions.
- Optimize ETL processes for performance, scalability, and reliability.
- Conduct data quality assessments and implement data cleansing procedures (see the sketch after this list).
- Monitor and troubleshoot ETL processes to ensure timely and accurate data integration.
- Work with large datasets across multiple data sources, including structured and unstructured data.
- Document ETL processes, data flows, and mappings to ensure clarity and consistency.
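Informatica and BDM development happens in their own design tools, but the data-quality assessment step above can be sketched in plain PySpark for illustration; the input path and the `customer_id` business key are hypothetical:

```python
# Sketch of a data-quality assessment: per-column null rates plus a duplicate
# check on the business key. Input path and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-assessment").getOrCreate()
df = spark.read.parquet("s3://example-bucket/customers/")

total = df.count()

# Null rate per column: COUNT(CASE WHEN col IS NULL THEN 1 END) / total.
null_rates = df.select([
    (F.count(F.when(F.col(c).isNull(), 1)) / F.lit(total)).alias(c)
    for c in df.columns
])
null_rates.show(truncate=False)

# Business-key duplicates that a cleansing step would have to resolve.
dupes = df.groupBy("customer_id").count().filter(F.col("count") > 1)
print(f"duplicate customer_ids: {dupes.count()}")
```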
Required Skills:
- 3-5 years of experience in ETL development with a strong focus on Informatica, BDM, and Data Integrator.
- Proficiency in SQL and database technologies (e.g., Oracle, SQL Server, MySQL).
- Experience with big data technologies and frameworks.
- Strong analytical and problem-solving skills.
- Familiarity with data warehousing concepts and best practices.
- Excellent communication and collaboration skills.
About StatusNeo:
We accelerate your business transformation by leveraging best-fit CLOUD NATIVE technologies wherever feasible. We are DIGITAL consultants who partner with you to solve and deliver. We are experts in CLOUD NATIVE TECHNOLOGY CONSULTING & SOLUTIONS. We build, maintain and monitor highly scalable, modular applications that leverage the elastic compute, storage and network of leading cloud platforms. We CODE your NEO transformations. #StatusNeo
Business domain experience is vital to the success of neo transformations empowered by digital technology. Domain experts ask the right business questions to diagnose and address problems. Our consultants leverage your domain expertise and augment it with our digital excellence to build cutting-edge cloud solutions.
Data ETL Engineer
Posted today
Job Description
Responsibilities:
Requirements:
Pluses:
ETL
Posted today
Job Description
SRM Technologies, part of the SRM Group, was established in 1998 and provides Cloud and Infrastructure, Digital Transformation, Managed IT Services, Application Lifecycle, Quality Assurance, eCommerce and Product Engineering services. These are offered to the Education, Automotive, Manufacturing, Consumer, Transportation & Logistics, Supply Chain and Healthcare industries.
The BI ETL developer will be responsible for implementing data pipelines in Azure Data Factory and building reports and dashboards in Power BI or Tableau.
**Requirements**:
**Responsibilities**
- An Azure data engineer helps ensure that data pipelines and data stores are high-performing, efficient, organized, and reliable, given a specific set of business requirements and constraints
- An Azure data engineer also designs, implements, monitors, and optimises data platforms to meet data pipeline needs
- Solution design using Microsoft Azure services and related tools
- Design enterprise data models and Data Warehouse solutions
- Specification of ETL pipelines, data integration and data migration design
- Design and implementation of master data management solutions
**Job Qualifications**
- Experience in the design of reporting & data visualisation solutions such as Power BI or Tableau
- Experience in building data pipelines for structured and unstructured data from multiple source systems
- Data validation, basic data modelling, and SQL expertise
- Excellent development skills using Azure Databricks and Spark SQL
- Excellent experience with CI/CD using Azure DevOps, ADF, and Azure Data Lake (ADL), including configuration management of notebooks
- Knowledge of local branch setup and branch management
**Required skills**:
Azure Data Factory: creating move-and-transform pipelines and pulling data from various sources such as NetSuite, Salesforce, and Jira.
Azure Synapse: creating transformation logic with PySpark notebooks (a notebook-style sketch follows below).
REST and SOAP APIs: creating REST and SOAP APIs for data migration to the data lake.
Jira and GitHub: creating CI/CD pipelines using Agile.
SQL Server: creating joins and aggregations and querying data using T-SQL.
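For illustration, here is a Synapse-notebook-style transformation of the kind listed above; the storage account, containers, and column names are invented, and inside a Synapse notebook the `spark` session is provided, so the builder line is only for self-containment:

```python
# Synapse-notebook-style cell: read raw extracts from the data lake, join,
# aggregate, and write a curated table back. All paths and columns are
# placeholders, not details from this posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # predefined in Synapse notebooks

raw = "abfss://raw@examplelake.dfs.core.windows.net"
orders = spark.read.parquet(f"{raw}/netsuite/orders/")
accounts = spark.read.parquet(f"{raw}/salesforce/accounts/")

# Join source extracts and roll orders up to one row per account per day.
daily = (orders.join(accounts, "account_id", "left")
    .groupBy("account_name", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount")))

daily.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/")
```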
City: Chennai
State/Province: Tamil Nadu
Country: India
Zip/Postal Code:
Industry: Technology