25,923 ETL jobs in India
ETL - DATA Engineer
Posted 1 day ago
Job Description
F2F Drive on 11th Oct 2025 (Location - Chennai)
Senior ETL Developer AWS | Movate Chennai
Experience: 3 to 9 Years
Location: Chennai (5 Days Office)
Notice Period: Immediate to 30 Days
Mandatory Skills:
- ETL Development: 3+ years of overall ETL experience, including at least 3 years of AWS PySpark scripting (an illustrative sketch follows this list).
- AWS Data Solutions: Hands-on deployment & operations using S3, Lambda, SNS, Step Functions (strong AWS services knowledge is a must).
- Programming: Advanced PySpark expertise and solid experience with Python packages such as NumPy, Pandas, etc.
- Individual Contributor: Ability to own end-to-end design, build, and deployment of data pipelines without close supervision.
- Data Governance: Familiarity with metadata management, data lineage, and data governance principles.
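For illustration only (not part of the original posting): a minimal sketch of the kind of AWS PySpark scripting the mandatory skills describe, reading raw data from S3, transforming it, and writing curated Parquet back to S3. Bucket names, paths, and columns are hypothetical placeholders; in practice such a job would typically be triggered via Lambda or orchestrated with Step Functions, as noted above.

```python
# Illustrative sketch only: hypothetical buckets, paths, and columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-example").getOrCreate()

# Extract: read raw JSON landed in S3 (placeholder path).
orders = spark.read.json("s3://example-raw-bucket/orders/2025/10/")

# Transform: basic cleansing and derivation with PySpark functions.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to a curated S3 zone.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```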
Good to Have:
- Experience processing large-scale structured & semi-structured data transformations.
- Exposure to data lake design and Delta tables configuration.
- Knowledge of computing and cost optimization strategies on AWS.
- Ability to design holistic Data Integration frameworks based on environment/use case.
- Working knowledge of MWAA (Airflow orchestration).
Soft Skills:
- Excellent communication and stakeholder management skills.
- Strong problem-solving mindset; ability to understand business pain points and deliver effective solutions.
ETL Data Engineer
Posted 1 day ago
Job Description
Mandatory skills: 8+ years of experience in ETL development, with 4+ years of AWS PySpark scripting.
1. Experience deploying and running AWS-based data solutions using services such as S3, Lambda, SNS, and Step Functions.
2. Strong proficiency in PySpark.
3. Hands-on working knowledge of Python packages such as NumPy, Pandas, etc.
4. Sound knowledge of AWS services is a must.
5. Ability to work as an individual contributor.
6. Good to have: familiarity with metadata management, data lineage, and data governance principles.
Good to have:
1. Experience processing large-scale transformations of both structured and semi-structured data.
2. Experience building data lakes and configuring Delta tables.
3. Good experience with compute and cost optimization.
4. Ability to understand the environment and use case and build holistic data integration frameworks.
5. Good experience with MWAA (Airflow orchestration); an illustrative DAG sketch follows this list.
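Illustration (not part of the posting): a minimal Airflow DAG of the kind MWAA would orchestrate, as referenced in item 5 above. The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical DAG id, schedule, and task logic.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_from_s3(**context):
    # Placeholder: pull raw files from S3 (e.g., via boto3) for this run's date.
    print("extracting for", context["ds"])

def transform_with_pyspark(**context):
    # Placeholder: submit or run the PySpark transformation step.
    print("transforming for", context["ds"])

with DAG(
    dag_id="example_etl_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_s3)
    transform = PythonOperator(task_id="transform", python_callable=transform_with_pyspark)
    extract >> transform
```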
Soft skills:
1. Good communication skills to interact with IT stakeholders and the business.
2. Ability to understand business pain points and deliver effective solutions.
ETL Data Engineer
Posted 1 day ago
Job Description
Who we are:
LUMIQ is the leading Data and Analytics company in the Financial Services and Insurance (FSI) industry. We are trusted by the world's largest FSIs, including insurers, banks, AMCs, and NBFCs, to address their data challenges. Our clients include 40+ enterprises with over $10B in deposits/AUM, collectively representing about 1B customers globally. Our expertise lies in creating next-gen data technology products to help FSI enterprises organize, manage, and effectively use data. We have consistently challenged the status quo, introducing many industry-firsts like the first enterprise data platform in Asia on cloud for a regulated entity. Founded in 2013, LUMIQ has now completed a decade of innovation, backed by Info Edge Ventures (a JV between Temasek Holdings of Singapore and Naukri) and US-based Season 2 Ventures.
Our Culture:
At LUMIQ, we strive to create a community of passionate data professionals who aim to transcend the usual corporate dynamics. We offer you the freedom to ideate, commit, and navigate your career trajectory at your own pace, with a culture of ownership and empowerment to drive outcomes.
Our culture encourages 'Tech Poetry', combining creativity and technology to create solutions that revolutionize the industry. We trust our people to manage their responsibilities with minimal policy constraints. Our team is composed of the industry's brightest minds, from PhDs and engineers to industry specialists from Banking, Insurance, NBFCs, and AMCs, who will challenge and inspire you to reach new heights.
Role:
We are seeking a highly skilled ETL Data Engineer to re-engineer our existing data pipelines to extract data from a new data source (PostgreSQL / CURA system) instead of the current Microsoft SQL Server (CRM persistence store), while preserving the existing load patterns to Elasticsearch and MongoDB. The engineer will ensure this migration has zero impact on data quality, system performance, or end-user experience.
Key Responsibilities:
- Analyze existing ETL pipelines and their dependencies on Microsoft SQL Server as source systems.
- Design and implement modifications to repoint ETL extractions to PostgreSQL (CURA) while preserving the current transformations and load logic into Elasticsearch and MongoDB (a rough illustrative sketch follows the requirements below).
- Ensure end-to-end data integrity, quality, and freshness remain unaffected after the source switch.
- Write efficient and optimized SQL queries to extract data from the new source.
- Conduct performance testing to confirm no degradation of pipeline throughput or latency in production.
- Work closely with DevOps and platform teams to containerize, orchestrate, and deploy the updated ETLs using Docker and Kubernetes.
- Monitor post-deployment performance and handle any production issues proactively.
- Document design, code, data mappings, and operational runbooks.
Requirements:
- Strong experience building and maintaining large-scale distributed data systems.
- Expert-level proficiency in Python, especially data analysis/manipulation libraries like pandas, NumPy, and Polars.
- Advanced SQL development skills with proven experience in performance optimization.
- Working knowledge of Docker and Kubernetes.
- Familiarity with Elasticsearch and MongoDB as data stores.
- Experience working in production environments with mission-critical systems.
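Illustration (not part of the posting): a rough sketch of how the source repointing described in this role might look, assuming hypothetical connection details, table, index, and collection names. The point is that the change is confined to the extraction layer, while the existing Elasticsearch and MongoDB load logic is preserved.

```python
# Illustrative sketch only: hypothetical hosts, credentials, table, index,
# and collection names; transformations are elided.
import pandas as pd
from sqlalchemy import create_engine
from elasticsearch import Elasticsearch, helpers
from pymongo import MongoClient

# Extract: the only change is the source engine (PostgreSQL/CURA instead of SQL Server).
pg_engine = create_engine("postgresql+psycopg2://etl_user:***@cura-host:5432/cura")
df = pd.read_sql("SELECT case_id, status, updated_at FROM cases", pg_engine)

# Transform: the existing transformation logic would remain as-is.
records = df.to_dict(orient="records")

# Load: existing sinks are preserved.
es = Elasticsearch("http://localhost:9200")
helpers.bulk(es, ({"_index": "cases", "_id": r["case_id"], "_source": r} for r in records))

mongo = MongoClient("mongodb://localhost:27017")
mongo["crm"]["cases"].insert_many(records)
```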
What Do You Get:
- Opportunity to contribute to an entrepreneurial culture and exposure to a startup hustle culture.
- Competitive salary packages.
- Group medical policies.
- Equal Employment Opportunity.
- Maternity Leave.
- Opportunities for upskilling and exposure to the latest technologies.
- 100% sponsorship for certification.
ETL Data Engineer
Posted 1 day ago
Job Description
Job Title: Associate Consultant
Experience: 2 to 3 Years
Location:
Job Summary:
We are looking for a skilled and proactive Associate Consultant with 2–3 years of experience in data management, governance, and cloud platforms. The ideal candidate should not only have technical expertise but also possess strong communication skills to work effectively with cross-functional teams and clients.
Key Responsibilities:
· Contribute to data governance and integration projects
· Work on CAI (CDI is a plus) and EDM implementations
· Support development using INFA, Databricks, Snowflake
· Deliver solutions on cloud platforms (Azure or AWS)
· Communicate effectively with internal teams and external stakeholders
· Ensure high standards of data quality, security, and compliance
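Illustration (not part of the posting): a simple, generic data-quality check of the kind the responsibilities above call for, sketched in pandas; in practice this could equally be implemented in Informatica DQ or Databricks. The sample data and column names are hypothetical.

```python
# Illustrative sketch only: hypothetical sample data and rule names.
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", None, "b@x.com", "c@x.com"],
})

checks = {
    "customer_id_not_null": df["customer_id"].notna().all(),
    "customer_id_unique": df["customer_id"].dropna().is_unique,
    "email_present_pct": round(df["email"].notna().mean() * 100, 1),
}
print(checks)  # {'customer_id_not_null': False, 'customer_id_unique': False, 'email_present_pct': 75.0}
```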
Required Skills:
· Governance along with DQ Skill & CAI
· CAI (CDI – Optional)
· EDM
· INFA+ (Databricks, Snowflake)
· Cloud Platforms (Azure / AWS)
· Strong verbal and written communication skills
· Problem-solving and teamwork abilities
Nice to Have:
· Client interaction experience
Data Engineer/ ETL
Posted 1 day ago
Job Description
Job Title:
ETL & Data Warehouse Developer (AWS)
Location:
Gurugram
Employment Type:
Full-time
About the Role
We are seeking an experienced ETL & Data Warehouse Developer with strong expertise in AWS services to join our data engineering team. This role involves designing, developing, and optimizing ETL processes and data warehousing solutions on the AWS platform. The ideal candidate will bring solid experience in ETL development, data modeling, SQL, and performance optimization.
Key Responsibilities
- Design, develop, and implement robust ETL processes using AWS Glue, PySpark, AWS Data Pipeline, or custom scripts.
- Build and maintain scalable and reliable data warehouse solutions on AWS.
- Design and optimize data models for efficient storage and retrieval in AWS Redshift.
- Utilize AWS services such as S3, Lambda, Glue, and Redshift to develop end-to-end data solutions.
- Craft and optimize complex SQL queries to support reporting, BI, and analytics.
- Identify and resolve performance bottlenecks in ETL workflows and data warehouse queries.
- Collaborate with cross-functional teams to integrate data from multiple sources.
- Ensure compliance with security and data governance best practices.
- Maintain detailed documentation for ETL processes, data models, and configurations.
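Illustration (not part of the posting): a skeleton AWS Glue (PySpark) job of the kind described in the responsibilities above. The catalog database, table names, mappings, and output path are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical catalog, table, and path names.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a Glue Data Catalog table (backed by files in S3).
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_sales"
)

# Transform: project and cast columns.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[("sale_id", "string", "sale_id", "string"),
              ("amount", "double", "amount", "double")],
)

# Load: write Parquet to a curated S3 location (a Redshift JDBC connection
# or COPY-based load could be used instead for warehouse targets).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/sales/"},
    format="parquet",
)
job.commit()
```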
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field.
- Proven experience as an ETL & Data Warehouse Developer with a strong focus on AWS.
- Strong proficiency in SQL, stored procedures, and query optimization.
- Hands-on experience with AWS Glue, Redshift, S3, and other AWS services.
- Strong knowledge of data modeling and warehouse design principles.
- Familiarity with security, compliance, and governance standards.
- Excellent problem-solving, analytical, and collaboration skills.
- AWS or other relevant certifications preferred.
ETL Data Engineer
Posted 1 day ago
Job Description
ETL Data Engineer
Skills to Evaluate: ETL, ETL Developer, GCP, Big Data, BigQuery, Kafka, Hive, Data Modeling, Python, PySpark, SQL
Experience: 5 to 6 Years
Location: Lower Parel, Mumbai (3 Days WFO)
BGV: Education, Address, Employment, Criminal
About the Role
We are looking for a passionate and experienced Data Engineer to join our team and help build scalable, reliable, and efficient data pipelines, primarily on Google Cloud Platform (GCP) and secondarily on Amazon Web Services (AWS). You will work with cutting-edge technologies to process structured and unstructured data, enabling data-driven decision-making across the organization.
Key Responsibilities
- Design, develop, and maintain robust data pipelines and ETL/ELT workflows using PySpark, Python, and SQL.
- Build and manage data ingestion and transformation processes from various sources including Hive, Kafka, and cloud-native services.
- Orchestrate workflows using Apache Airflow and ensure timely and reliable data delivery.
- Work with large-scale big data systems to process structured and unstructured datasets.
- Implement data quality checks, monitoring, and alerting mechanisms.
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to understand data requirements.
- Optimize data processing for performance, scalability, and cost-efficiency.
- Ensure compliance with data governance, security, and privacy standards.
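Illustration (not part of the posting): a minimal PySpark Structured Streaming ingestion from Kafka, as mentioned in the responsibilities above (it assumes the spark-sql-kafka connector is on the classpath). The broker, topic, schema, and output paths are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical broker, topic, schema, and paths.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest-example").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Extract: subscribe to a Kafka topic.
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Transform: Kafka values arrive as bytes; parse the JSON payload.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Load: append to Parquet (a BigQuery sink via the Spark-BigQuery connector
# would follow the same writeStream pattern).
query = (events.writeStream
         .format("parquet")
         .option("path", "/tmp/events_out")
         .option("checkpointLocation", "/tmp/events_chk")
         .start())
query.awaitTermination()
```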
Required Skills & Qualifications
- 5+ years of experience in data engineering or related roles.
- Strong programming skills in Python and PySpark.
- Proficiency in SQL and experience with Hive.
- Hands-on experience with Apache Airflow for workflow orchestration.
- Experience with Kafka for real-time data streaming.
- Solid understanding of big data ecosystems and distributed computing.
- Experience with GCP (BigQuery, Dataflow, Dataproc).
- Ability to work with both structured (e.g., relational databases) and unstructured (e.g., logs, images, documents) data.
- Familiarity with CI/CD tools and version control systems (e.g., Git).
- Knowledge of containerization (Docker) and orchestration (Kubernetes).
- Exposure to data cataloging and governance tools (e.g., AWS Lake Formation, Google Data Catalog).
- Understanding of data modeling and architecture principles.
Soft Skills
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Ability to work in Agile/Scrum environments.
- Ownership mindset and attention to detail.
ETL Data engineer
Posted 1 day ago
Job Description
Role: ETL Data Engineer / ETL Developer
Experience: 3.9 Years - 9 Years
Location: Chennai (Guindy), WFO
Notice Period: Only Immediate Joiners
Skills:
Mandatory skills:
1. ETL
2. AWS, Lambda, SNS
3. Python, PySpark
4. Apache Airflow
Interested candidates can share their updated CV to
ETL Data Engineer
Posted today
Job Description
Please rate the candidate (from 1 to 5; 1 lowest, 5 highest) in these areas:
- Big Data
- PySpark
- AWS
- Redshift
Position Summary
Experienced ETL Developers and Data Engineers to ingest and analyze data from multiple enterprise sources into Adobe Experience Platform
Requirements
- About 4-6 years of professional technology experience mostly focused on the following:
- 4+ years of experience developing data ingestion pipelines using PySpark (batch and streaming).
- 4+ years of experience with multiple data-engineering services on AWS, e.g., Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, Redshift, etc.
- 1+ years of experience working with Redshift, especially the following:
o Experience and knowledge of loading data from various sources, e.g., S3 buckets and on-prem data sources, into Redshift.
o Experience optimizing data ingestion into Redshift.
o Experience designing, developing, and optimizing queries on Redshift using SQL or PySpark SQL.
o Experience designing tables in Redshift (distribution keys, compression, vacuuming, etc.); an illustrative sketch follows these requirements.
- Experience developing applications that consume services exposed as REST APIs.
- Experience and ability to write and analyze complex, performant SQL.
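Illustration (not part of the posting): the kind of Redshift table design and S3 load the requirements above describe, sketched with psycopg2. The table, bucket, IAM role, and cluster details are hypothetical placeholders.

```python
# Illustrative sketch only: hypothetical table, bucket, IAM role, and cluster.
import psycopg2

ddl = """
CREATE TABLE IF NOT EXISTS web_events (
    event_id   VARCHAR(64),
    profile_id VARCHAR(64),
    event_ts   TIMESTAMP,
    payload    SUPER
)
DISTKEY (profile_id)   -- co-locate a profile's events on one slice
SORTKEY (event_ts);    -- enables range-restricted scans by time
"""

copy_cmd = """
COPY web_events
FROM 's3://example-bucket/events/2025/10/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="etl_user", password="***",
)
with conn, conn.cursor() as cur:
    cur.execute(ddl)
    cur.execute(copy_cmd)
conn.close()
```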
Special consideration given for:
- 2 years of developing and supporting ETL pipelines using enterprise-grade ETL tools like Pentaho, Informatica, or Talend.
- Good knowledge of data modelling (design patterns and best practices).
- Experience with reporting technologies (e.g., Tableau, Power BI).
What you'll do
- Analyze and understand customers' use cases and data sources, and extract, transform, and load data from a multitude of customer enterprise sources into Adobe Experience Platform.
- Design and build data ingestion pipelines into the platform using PySpark.
- Ensure ingestion is designed and implemented in a performant manner to support the throughput and latency needed.
- Develop and test complex SQL to extract, analyze, and report on the data ingested into Adobe Experience Platform.
- Ensure the SQL is implemented in compliance with best practices so that it is performant.
- Migrate platform configurations, including data ingestion pipelines and SQL, across various sandboxes.
- Debug any issues reported on data ingestion, SQL, or other functionality of the platform and resolve them.
- Support Data Architects in implementing the data model in the platform.
- Contribute to the innovation charter and develop intellectual property for the organization.
- Present on advanced features and complex use-case implementations at multiple forums.
- Attend regular scrum events or equivalent and provide updates on deliverables.
- Work independently across multiple engagements with little or no supervision.