25,923 ETL jobs in India

ETL Data Engineer

Chennai, Tamil Nadu · ₹1500000 - ₹2500000 per year · Wroots Global

Posted 1 day ago

Job Description

Face-to-face (F2F) hiring drive on 11th Oct 2025 (Location: Chennai)

Senior ETL Developer (AWS) | Movate, Chennai

Experience: 3 to 9 Years

Location: Chennai (5 Days Office)

Notice Period: Immediate to 30 Days

Mandatory Skills:

  • ETL Development: 3+ years of overall ETL experience, including 3+ years of AWS PySpark scripting (a minimal sketch follows this list).
  • AWS Data Solutions: Hands-on deployment & operations using S3, Lambda, SNS, Step Functions (strong AWS services knowledge is a must).
  • Programming: Advanced PySpark expertise and solid experience with Python packages such as NumPy, Pandas, etc.
  • Individual Contributor: Ability to own end-to-end design, build, and deployment of data pipelines without close supervision.
  • Data Governance: Familiarity with metadata management, data lineage, and data governance principles.
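
For illustration only, a minimal sketch of the kind of AWS PySpark scripting the list above describes; the bucket paths, column names, and app name are hypothetical assumptions, not details from this posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw CSV files landed in S3 (placeholder path).
    raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

    # Transform: cast types, drop bad rows, derive a partition column.
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write partitioned Parquet back to S3 for downstream consumers.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/orders/"
    )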

Good to Have:

  • Experience processing large-scale structured & semi-structured data transformations.
  • Exposure to data lake design and Delta tables configuration.
  • Knowledge of computing and cost optimization strategies on AWS.
  • Ability to design holistic Data Integration frameworks based on environment/use case.
  • Working knowledge of MWAA (Airflow orchestration).
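
For the MWAA point above, a minimal Airflow DAG sketch (Airflow 2.4+ syntax); the DAG id, schedule, Glue job name, and region are assumptions for illustration.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

    with DAG(
        dag_id="daily_orders_etl",        # hypothetical DAG id
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Trigger the (assumed) Glue job that runs the PySpark ETL.
        run_etl = GlueJobOperator(
            task_id="run_orders_etl",
            job_name="orders_etl",
            region_name="ap-south-1",
        )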

Soft Skills:

  • Excellent communication and stakeholder management skills.
  • Strong problem-solving mindset; ability to understand business pain points and deliver effective solutions.

ETL Data Engineer

Chennai, Tamil Nadu · ₹600000 - ₹1800000 per year · Relevantz Technology Services

Posted 1 day ago

Job Description

Mandatory skills (8+ years of experience in ETL development, with 4+ years of AWS PySpark scripting):

1. Experience deploying and running AWS-based data solutions using services or products such as S3, Lambda, SNS, and Step Functions; sound knowledge of AWS services is a must.

2. Strong PySpark skills.

3. Hands-on working knowledge of Python packages such as NumPy and Pandas.

4. Ability to work as an individual contributor.

5. Good to have: familiarity with metadata management, data lineage, and principles of data governance.

Good to have:

1. Experience processing large sets of data transformations over both semi-structured and structured data.

2. Experience building data lakes and configuring Delta tables.

3. Good experience with compute and cost optimization.

4. Ability to understand the environment and use case and build holistic data integration frameworks.

5. Good experience with MWAA (Airflow orchestration).

Soft skills:

1. Good communication skills for interacting with IT stakeholders and business teams.

2. Ability to understand business pain points and deliver accordingly.


ETL Data Engineer

Noida, Uttar Pradesh · ₹800000 - ₹2400000 per year · Lumiq

Posted 1 day ago

Job Description

Who we are:

LUMIQ is the leading Data and Analytics company in the Financial Services and Insurance (FSI) industry. We are trusted by the world's largest FSIs, including insurers, banks, AMCs, and NBFCs, to address their data challenges. Our clients include 40+ enterprises with over $10B in deposits/AUM, collectively representing about 1B customers globally. Our expertise lies in creating next-gen data technology products to help FSI enterprises organize, manage, and effectively use data. We have consistently challenged the status quo, introducing many industry-firsts like the first enterprise data platform in Asia on cloud for a regulated entity. Founded in 2013, LUMIQ has now completed a decade of innovation, backed by Info Edge Ventures (a JV between Temasek Holdings of Singapore and Naukri) and US-based Season 2 Ventures.

Our Culture:

At LUMIQ, we strive to create a community of passionate data professionals who aim to transcend the usual corporate dynamics. We offer you the freedom to ideate, commit, and navigate your career trajectory at your own pace, with a culture of ownership and empowerment to drive outcomes.

Our culture encourages 'Tech Poetry', combining creativity and technology to create solutions that revolutionize the industry. We trust our people to manage their responsibilities with minimal policy constraints. Our team is composed of the industry's brightest minds, from PhDs and engineers to industry specialists from Banking, Insurance, NBFCs, and AMCs, who will challenge and inspire you to reach new heights.

Role:

We are seeking a highly skilled ETL Data Engineer to re-engineer our existing data pipelines to extract data from a new data source (PostgreSQL / CURA system) instead of the current Microsoft SQL Server (CRM persistence store), while preserving the existing load patterns to Elasticsearch and MongoDB. The engineer will ensure this migration has zero impact on data quality, system performance, or end-user experience.

Key Responsibilities
  • Analyze existing ETL pipelines and their dependencies on Microsoft SQL Server as source systems.
  • Design and implement modifications to repoint ETL extractions from Microsoft SQL Server to PostgreSQL (CURA) while preserving the current transformations and load logic into Elasticsearch and MongoDB (a minimal sketch follows this list).
  • Ensure end-to-end data integrity, quality, and freshness remain unaffected after the source switch.
  • Write efficient and optimized SQL queries to extract data from the new source.
  • Conduct performance testing to confirm no degradation of pipeline throughput or latency in production.
  • Work closely with DevOps and platform teams to containerize, orchestrate, and deploy the updated ETLs using Docker and Kubernetes.
  • Monitor post-deployment performance and handle any production issues proactively.
  • Document design, code, data mappings, and operational runbooks.
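
A minimal sketch of the repointing idea above, assuming a SQLAlchemy-based extraction layer; the connection URLs, table, and column names are hypothetical, not taken from this posting.

    import pandas as pd
    from sqlalchemy import create_engine, text

    # Old source (Microsoft SQL Server / CRM persistence store):
    # engine = create_engine("mssql+pyodbc://user:pw@crm-host/crm?driver=...")

    # New source (PostgreSQL / CURA); ideally the only line that changes:
    engine = create_engine("postgresql+psycopg2://user:pw@cura-host:5432/cura")

    def extract(batch_date: str) -> pd.DataFrame:
        # Same logical query against the new source; dialect differences
        # (e.g. TOP vs LIMIT, date functions) are reconciled here.
        sql = text("SELECT * FROM cases WHERE CAST(updated_at AS DATE) = :d")
        return pd.read_sql(sql, engine, params={"d": batch_date})

    # The downstream transforms and the loads into Elasticsearch/MongoDB
    # stay untouched, which is what keeps the switch invisible to end users.
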
Required Skills and Qualifications
  • Strong experience building and maintaining large-scale distributed data systems.
  • Expert-level proficiency in Python, especially data analysis/manipulation libraries like pandas, NumPy, and Polars.
  • Advanced SQL development skills with proven experience in performance optimization.
  • Working knowledge of Docker and Kubernetes.
  • Familiarity with Elasticsearch and MongoDB as data stores.
  • Experience working in production environments with mission-critical systems.

What Do You Get:

  • Opportunity to contribute to an entrepreneurial culture and exposure to a startup hustle culture.
  • Competitive salary packages.
  • Group medical policies.
  • Equal Employment Opportunity.
  • Maternity Leave.
  • Opportunities for upskilling and exposure to the latest technologies.

  • 100% sponsorship for certifications.

ETL Data Engineer

Bengaluru, Karnataka · ₹900000 - ₹1200000 per year · First Phoenics Solutions

Posted 1 day ago

Job Description

Job Title: Associate Consultant

Experience: 2 to 3 Years

Location:

Job Summary:

We are looking for a skilled and proactive Associate Consultant with 2–3 years of experience in data management, governance, and cloud platforms. The ideal candidate should not only have technical expertise but also possess strong communication skills to work effectively with cross-functional teams and clients.

Key Responsibilities:

· Contribute to data governance and integration projects

· Work on CAI (CDI is a plus) and EDM implementations

· Support development using INFA (Informatica), Databricks, and Snowflake

· Deliver solutions on cloud platforms (Azure or AWS)

· Communicate effectively with internal teams and external stakeholders

· Ensure high standards of data quality, security, and compliance

Required Skills:

· Data governance along with DQ (data quality) skills & CAI

· CAI (CDI – Optional)

· EDM

· INFA (Informatica), plus Databricks and Snowflake

· Cloud Platforms (Azure / AWS)

· Strong verbal and written communication skills

· Problem-solving and teamwork abilities

Nice to Have:

· Client interaction experience


Data Engineer / ETL

Gurugram, Haryana · ₹1500000 - ₹2500000 per year · Next Mantra Solution Private Limited

Posted 1 day ago

Job Description

Job Title: ETL & Data Warehouse Developer (AWS)

Location: Gurugram

Employment Type: Full-time

About the Role

We are seeking an experienced ETL & Data Warehouse Developer with strong expertise in AWS services to join our data engineering team. This role involves designing, developing, and optimizing ETL processes and data warehousing solutions on the AWS platform. The ideal candidate will bring solid experience in ETL development, data modeling, SQL, and performance optimization.

Key Responsibilities

  • Design, develop, and implement robust ETL processes using AWS Glue, PySpark, AWS Data Pipeline, or custom scripts.
  • Build and maintain scalable and reliable data warehouse solutions on AWS.
  • Design and optimize data models for efficient storage and retrieval in AWS Redshift.
  • Utilize AWS services such as S3, Lambda, Glue, and Redshift to develop end-to-end data solutions (a minimal Glue job sketch follows this list).
  • Craft and optimize complex SQL queries to support reporting, BI, and analytics.
  • Identify and resolve performance bottlenecks in ETL workflows and data warehouse queries.
  • Collaborate with cross-functional teams to integrate data from multiple sources.
  • Ensure compliance with security and data governance best practices.
  • Maintain detailed documentation for ETL processes, data models, and configurations.
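
As an illustration of the Glue-based ETL described above, a skeleton Glue PySpark job; the catalog database, table, and S3 path are placeholder assumptions.

    import sys

    from awsglue.context import GlueContext
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_context = GlueContext(SparkContext.getOrCreate())
    spark = glue_context.spark_session

    # Read a Glue Data Catalog table (placeholder names).
    dyf = glue_context.create_dynamic_frame.from_catalog(
        database="sales_db", table_name="raw_orders"
    )

    # Switch to a DataFrame for SQL-style transforms, then stage Parquet in
    # S3; a Redshift COPY (or write_dynamic_frame to Redshift) could follow.
    df = dyf.toDF().dropDuplicates(["order_id"])
    df.write.mode("append").parquet("s3://example-warehouse-staging/orders/")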

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • Proven experience as an ETL & Data Warehouse Developer with a strong focus on AWS.
  • Strong proficiency in SQL, stored procedures, and query optimization.
  • Hands-on experience with AWS Glue, Redshift, S3, and other AWS services.
  • Strong knowledge of data modeling and warehouse design principles.
  • Familiarity with security, compliance, and governance standards.
  • Excellent problem-solving, analytical, and collaboration skills.
  • AWS or other relevant certifications preferred.

ETL Data Engineer

Mumbai, Maharashtra · ₹1200000 - ₹3600000 per year · Sourcebae

Posted 1 day ago

Job Description

ETL Data Engineer

Skills to Evaluate: ETL, ETL Development, GCP, Big Data, BigQuery, Kafka, Hive, Data Modeling, Python, PySpark, SQL

Experience: 5 to 6 Years

Location: Lower Parel, Mumbai (3 Days WFO)

BGV (Background Verification): Education, Address, Employment, Criminal

About the Role

We are looking for a passionate and experienced Data Engineer to join our team and help build scalable, reliable, and efficient data pipelines, primarily on Google Cloud Platform (GCP) and secondarily on Amazon Web Services (AWS). You will work with cutting-edge technologies to process structured and unstructured data, enabling data-driven decision-making across the organization.

Key Responsibilities

  • Design, develop, and maintain robust data pipelines and ETL/ELT workflows using PySpark, Python, and SQL.
  • Build and manage data ingestion and transformation processes from various sources including Hive, Kafka, and cloud-native services.
  • Orchestrate workflows using Apache Airflow and ensure timely and reliable data delivery.
  • Work with large-scale big data systems to process structured and unstructured datasets.
  • Implement data quality checks, monitoring, and alerting mechanisms.
  • Collaborate with cross-functional teams including data scientists, analysts, and product managers to understand data requirements.
  • Optimize data processing for performance, scalability, and cost-efficiency.
  • Ensure compliance with data governance, security, and privacy standards.

Required Skills & Qualifications

  • 5+ years of experience in data engineering or related roles.
  • Strong programming skills in Python and PySpark.
  • Proficiency in SQL and experience with Hive.
  • Hands-on experience with Apache Airflow for workflow orchestration.
  • Experience with Kafka for real-time data streaming (a streaming sketch follows this list).
  • Solid understanding of big data ecosystems and distributed computing.
  • Experience with GCP (BigQuery, Dataflow, Dataproc).
  • Ability to work with both structured (e.g., relational databases) and unstructured (e.g., logs, images, documents) data.
  • Familiarity with CI/CD tools and version control systems (e.g., Git).
  • Knowledge of containerization (Docker) and orchestration (Kubernetes).
  • Exposure to data cataloging and governance tools (e.g., AWS Lake Formation, Google Data Catalog).
  • Understanding of data modeling and architecture principles.
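
A hedged sketch of the Kafka-to-BigQuery path implied by the skills above, as it might run on Dataproc with the spark-bigquery connector and the Spark Kafka package available; the broker, topic, table, and bucket names are assumptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("events_stream").getOrCreate()

    # Read a Kafka topic as a stream (placeholder broker and topic).
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "events")
             .load()
             .select(F.col("value").cast("string").alias("payload"),
                     F.col("timestamp"))
    )

    # Stream into BigQuery via the connector (placeholder table/bucket).
    query = (
        events.writeStream.format("bigquery")
              .option("table", "my_project.analytics.events")
              .option("checkpointLocation", "gs://example-bucket/checkpoints/")
              .option("temporaryGcsBucket", "example-bucket")
              .start()
    )
    query.awaitTermination()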

Soft Skills

  • Strong analytical and problem-solving abilities.
  • Excellent communication and collaboration skills.
  • Ability to work in Agile/Scrum environments.
  • Ownership mindset and attention to detail.


ETL Data Engineer

Chennai, Tamil Nadu · ₹500000 - ₹1500000 per year · Wroots Global

Posted 1 day ago

Job Description

Role: ETL Data Engineer / ETL Developer

Experience: 3.9 - 9 Years

Location: Chennai (Guindy), WFO

Notice Period: Only Immediate Joiners

Mandatory skills:

1. ETL
2. AWS (Lambda, SNS)
3. Python, PySpark
4. Apache Airflow

Interested candidates can share their updated CV to

ETL Data Engineer

New Delhi, Delhi · The Techgalore

Posted today

Job Description

Please rate the candidate (from 1 to 5; 1 = lowest, 5 = highest) in these areas:

  1. Big Data
  2. PySpark
  3. AWS
  4. Redshift

Position Summary

Experienced ETL Developers and Data Engineers to ingest and analyze data from multiple enterprise sources into Adobe Experience Platform.

Requirements

  • About 4-6 years of professional technology experience, mostly focused on the following:
  • 4+ years of experience developing data ingestion pipelines using PySpark (batch and streaming).
  • 4+ years of experience with multiple data-engineering services on AWS, e.g., Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, Redshift, etc.
  • 1+ years of experience working with Redshift, especially the following (a table-design and load sketch follows this list):

    ◦ Experience and knowledge of loading data from various sources, e.g., S3 buckets and on-prem data sources, into Redshift.

    ◦ Experience optimizing data ingestion into Redshift.

    ◦ Experience designing, developing, and optimizing queries on Redshift using SQL or PySpark SQL.

    ◦ Experience designing tables in Redshift (distribution keys, compression, vacuuming, etc.).

  • Experience developing applications that consume services exposed as REST APIs. Experience and ability to write and analyze complex, performant SQL.
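
An illustrative sketch of the Redshift sub-points above, using the redshift_connector driver; the cluster endpoint, credentials, IAM role, table, and S3 path are all placeholders.

    import redshift_connector

    conn = redshift_connector.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        database="dev",
        user="etl_user",
        password="...",  # placeholder; use a secrets manager in practice
    )
    cur = conn.cursor()

    # DISTKEY co-locates rows joined on customer_id across slices;
    # SORTKEY speeds range-restricted scans on order_ts.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS orders (
            order_id    BIGINT,
            customer_id BIGINT,
            order_ts    TIMESTAMP,
            amount      DECIMAL(12,2) ENCODE az64
        )
        DISTKEY (customer_id)
        SORTKEY (order_ts);
    """)

    # COPY is the performant bulk path from S3 into Redshift.
    cur.execute("""
        COPY orders
        FROM 's3://example-landing/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS PARQUET;
    """)
    conn.commit()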

Special consideration given for:

  • 2 years of developing and supporting ETL pipelines using enterprise-grade ETL tools like Pentaho, Informatica, or Talend.
  • Good knowledge of Data Modelling (design patterns and best practices).
  • Experience with Reporting Technologies (e.g., Tableau, Power BI).

What you'll do

Analyze and understand customers' use cases and data sources; extract, transform, and load data from a multitude of customers' enterprise sources and ingest it into Adobe Experience Platform.

Design and build data ingestion pipelines into the platform using PySpark.

Ensure ingestion is designed and implemented in a performant manner to support the throughput and latency needed.

Develop and test complex SQL to extract, analyze, and report on the data ingested into the Adobe Experience Platform.

Ensure the SQL is implemented in compliance with best practices so that it is performant.

Migrate platform configurations, including the data ingestion pipelines and SQL, across various sandboxes.

Debug any issues reported on data ingestion, SQL, or any other functionality of the platform, and resolve them.

Support Data Architects in implementing the data model in the platform.

Contribute to the innovation charter and develop intellectual property for the organization.

Present on advanced features and complex use case implementations at multiple forums.

Attend regular scrum events or equivalent and provide updates on the deliverables.

Work independently across multiple engagements with no or minimal supervision.



