20,925 ETL jobs in India
ETL Data Engineer
Posted today
Job Viewed
Job Description
Please rate the candidate (from 1 to 5, 1 = lowest, 5 = highest) in these areas:
- Big Data
- PySpark
- AWS
- Redshift
Position Summary
We are looking for experienced ETL Developers and Data Engineers to ingest and analyze data from multiple enterprise sources into Adobe Experience Platform.
Requirements
- About 4-6 years of professional technology experience, mostly focused on the following:
- 4+ years of experience developing data ingestion pipelines using PySpark (batch and streaming).
- 4+ years of experience with multiple data-engineering-related services on AWS, e.g. Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, Redshift.
- 1+ years of experience working with Redshift, especially the following:
  - Experience and knowledge of loading data into Redshift from various sources, e.g. S3 buckets and on-prem data sources.
  - Experience optimizing data ingestion into Redshift.
  - Experience designing, developing, and optimizing queries on Redshift using SQL or PySpark SQL.
  - Experience designing tables in Redshift (distribution keys, compression, vacuuming, etc.).
- Experience developing applications that consume services exposed as REST APIs.
- Experience and ability to write and analyze complex and performant SQL.
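As an illustration of the Redshift table-design points above (distribution key, compression, loading from S3), here is a minimal sketch that composes hypothetical DDL and a COPY statement. The table, columns, bucket, and IAM role are all invented for illustration, not taken from the posting:

```python
# Sketch: compose Redshift DDL plus an S3 COPY statement for batch ingestion.
# All names (table, columns, bucket, IAM role) are hypothetical examples.
DDL = """
CREATE TABLE page_events (
    event_id   BIGINT      ENCODE az64,
    user_id    BIGINT      ENCODE az64,
    event_type VARCHAR(64) ENCODE lzo,
    event_ts   TIMESTAMP   ENCODE az64
)
DISTSTYLE KEY
DISTKEY (user_id)
SORTKEY (event_ts);
"""

COPY = """
COPY page_events
FROM 's3://example-bucket/events/2024/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""

def statements():
    """Return the ordered statements a loader would execute against Redshift."""
    return [DDL.strip(), COPY.strip()]
```

The distribution key co-locates each user's rows on one slice to speed joins, while the sort key supports range-restricted scans on time predicates; a periodic VACUUM keeps the sort order after deletes and updates.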
Special consideration given for:
- 2 years of developing and supporting ETL pipelines using enterprise-grade ETL tools like Pentaho, Informatica, or Talend
- Good knowledge of data modelling (design patterns and best practices)
- Experience with reporting technologies (e.g. Tableau, Power BI)
What you'll do
Analyze and understand customers' use cases and data sources; extract, transform, and load data from a multitude of customers' enterprise sources and ingest it into Adobe Experience Platform.
Design and build data ingestion pipelines into the platform using PySpark.
Ensure ingestion is designed and implemented in a performant manner to support the throughput and latency needed.
Develop and test complex SQL to extract, analyze, and report on the data ingested into Adobe Experience Platform.
Ensure the SQL is implemented in compliance with best practices so that it is performant.
Migrate platform configurations, including the data ingestion pipelines and SQL, across various sandboxes.
Debug and resolve any issues reported on data ingestion, SQL, or any other functionality of the platform.
Support Data Architects in implementing the data model in the platform.
Contribute to the innovation charter and develop intellectual property for the organization.
Present on advanced features and complex use case implementations at multiple forums.
Attend regular scrum events or equivalent and provide updates on deliverables.
Work independently across multiple engagements with no or minimal supervision.
StatusNeo - Data Engineer (ETL)
Posted today
Job Viewed
Job Description
Role: ETL Developer
Location: Mumbai
Experience: 3-5 Years
Skills - ETL, BDM, Informatica, Data Integrator
Role Overview:
We are seeking a skilled ETL Developer with experience in Informatica, Big Data Management (BDM), and Data Integrator. The ideal candidate will have a strong background in data extraction, transformation, and loading (ETL) processes, with a focus on optimizing data integration solutions for complex data environments. You will play a critical role in designing and implementing ETL workflows to support our business intelligence and data warehousing initiatives.
Key Responsibilities:
- Design, develop, and maintain ETL processes using Informatica, BDM, and Data Integrator.
- Collaborate with data architects and business analysts to understand data requirements and translate them into ETL solutions.
- Optimize ETL processes for performance, scalability, and reliability.
- Conduct data quality assessments and implement data cleansing procedures.
- Monitor and troubleshoot ETL processes to ensure timely and accurate data integration.
- Work with large datasets across multiple data sources, including structured and unstructured data.
- Document ETL processes, data flows, and mappings to ensure clarity and consistency.
Required Skills:
- 3-5 years of experience in ETL development with a strong focus on Informatica, BDM, and Data Integrator.
- Proficiency in SQL and database technologies (e.g., Oracle, SQL Server, MySQL).
- Experience with big data technologies and frameworks.
- Strong analytical and problem-solving skills.
- Familiarity with data warehousing concepts and best practices.
- Excellent communication and collaboration skills.
About StatusNeo:
We accelerate your business transformation by leveraging best fit CLOUD NATIVE technologies wherever feasible. We are DIGITAL consultants who partner with you to solve & deliver. We are experts in CLOUD NATIVE TECHNOLOGY CONSULTING & SOLUTIONS. We build, maintain & monitor highly scalable, modular applications that leverage elastic compute, storage and network of leading cloud platforms. We CODE your NEO transformations. #StatusNeo
Business domain experience is vital to the success of neo transformations empowered by digital technology. Experts in domain ask the right business questions to diagnose and address. Our consultants leverage your domain expertise & augment our digital excellence to build cutting edge cloud solutions.
Data ETL Engineer
Posted today
Job Viewed
Job Description
Responsibilities:
Requirements:
Pluses:
ETL
Posted today
Job Viewed
Job Description
**Technology / Domain**:ETL
**Role**: ETL Developer
**Job description**:
- 2+ years building, deploying, and maintaining end-to-end (data lake to visualization) ETL pipelines
- High proficiency with SQL
- Proficiency with Looker (or similar BI tool)
- Experience with conceptual, logical, and physical data modeling
- Proficiency in Python (or other OOP languages)
- Experience with version control and deploying production code
- Demonstrable experience querying and transforming data programmatically
- Able to analyze data and critically examine results for patterns
- Familiarity with dbt, Jenkins, Apache Airflow, AWS (S3, Lambda, EC2, IAM), and stats software packages (R, Python pandas, etc.) is a big plus.
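The "querying and transforming data programmatically" requirement above boils down to reproducing SQL-style operations in code. A minimal pure-Python sketch of a GROUP BY / SUM over invented sample rows:

```python
# Sketch: a SQL-style GROUP BY with SUM done programmatically in plain Python.
# The sample rows and field names are invented for illustration.
from collections import defaultdict

rows = [
    {"region": "north", "sales": 120},
    {"region": "south", "sales": 80},
    {"region": "north", "sales": 50},
]

def total_sales_by_region(records):
    """Equivalent of: SELECT region, SUM(sales) FROM t GROUP BY region."""
    totals = defaultdict(int)
    for r in records:
        totals[r["region"]] += r["sales"]
    return dict(totals)
```

In a production pipeline the same aggregation would typically be pushed down into the warehouse or expressed in a BI tool such as Looker, but being able to write it by hand is what the bullet is testing.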
ETL
Posted today
Job Viewed
Job Description
**Job Code**:
JD-19626
**JOB DESCRIPTION**: ETL Developer
Experience: 3 to 5 yrs
Shift time: 10 am to 7 pm (should be available as per our international/national client needs)
Budget: 18 to 25 LPA
No. of positions: 2
Location: Bangalore (WFO/Hybrid)
Notice period: Immediate joiner
Responsibilities:
- Independently plans, designs, develops, executes, and monitors complex data integration activities to support project delivery and daily operations.
- Expert in defining, implementing, debugging, and optimizing data integration mappings and scripts from a variety of data sources.
- Spearheads development of ETL code, metadata definitions and models, queries, scripts, schedules, work processes, and maintenance procedures, and identifies opportunities to optimize the sizing, performance, and efficiency of existing processes/procedures.
- Mentors less experienced analysts on proper standards/techniques to improve their accuracy and efficiency.
- Performs unit testing, system integration testing, and regression testing, and assists with user acceptance testing.
- Articulates business requirements in a technical solution that can be designed and engineered.
- Consults with the business to develop documentation and communication materials to ensure accurate usage and interpretation of data.
- Develops a technical understanding of how the data flows from various source systems and source types to fine-tune data integration solutions.
- Works independently or as part of a team to deliver data warehouse ETL projects.
- Adheres to established standards and best practices and provides input for improvement of those processes.
- Self-motivated team player who can work with minimal supervision and can adapt to a quickly changing environment.
**Experience Required**:
3 - 5 Years
**Industry Type**:
IT
**Employment Type**:
Permanent
**Location**:
India
**Roles & Responsibilities**:
**Expertise & Qualification**:
B.Tech
Sr. Software Engineer - ETL(Extract, Transform, Load) Job
Posted today
Job Viewed
Job Description
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.
At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future.
We are looking forward to hiring ETL (Extract, Transform, Load) professionals in the following areas:
Experience: 4-6 years
Be involved in implementation and maintenance, and participate in data loads, for the Microsoft Dynamics CRM platform. Coordinate with the data team on requirements and apply technical expertise in data clean-up, data profiling, and data movement from Sybase to Dynamics CRM using ETL concepts and the SSIS tool. SQL, ETL concepts, and data knowledge are desired to perform the activities of this role efficiently.
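The data-profiling step mentioned above usually starts with simple per-column statistics before any clean-up rules are written. A minimal sketch, with hypothetical field names and sample records:

```python
# Sketch: simple column profiling (null counts, distinct counts) of the kind
# done ahead of an ETL data clean-up. Field names and rows are hypothetical.
def profile(records, columns):
    """Return {column: {"nulls": n, "distinct": m}} for the given records."""
    report = {}
    for col in columns:
        values = [r.get(col) for r in records]
        non_null = [v for v in values if v is not None]
        report[col] = {
            "nulls": len(values) - len(non_null),
            "distinct": len(set(non_null)),
        }
    return report

records = [
    {"account_id": 1, "city": "Pune"},
    {"account_id": 2, "city": None},
    {"account_id": 3, "city": "Pune"},
]
```

In an SSIS-based migration the same checks would typically run as a profiling task or a staging-table query before the data-movement packages execute.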
Roles and responsibilities :
At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale.
Our Hyperlearning workplace is grounded upon four principles
Sr ETL Data Engineer - HL7
Posted 1 day ago
Job Viewed
Job Description
Job Title: Sr ETL Data Engineer - HL7
Location: Remote – India (UK Shift)
Type: Full-Time
About BigRio:
BigRio is a remote-based, technology consulting firm headquartered in Boston, MA. We specialize in delivering advanced software solutions that include custom development, cloud data platforms, AI/ML integrations, and data analytics. With a diverse portfolio of clients across industries such as healthcare, biotech, fintech, and more, BigRio offers the opportunity to work on cutting-edge projects with a team of top-tier professionals.
About the Role:
We are seeking a highly skilled and detail-oriented Sr Data ETL Engineer to join our team supporting a leading healthcare client. This is a remote, full-time opportunity based in India, aligned with UK business hours. The ideal candidate will have deep expertise in data pipelines, ETL, and HL7-structured data, and familiarity with EMR and EHR systems like ModMed.
Key Responsibilities:
- Build and maintain robust ETL pipelines to ingest and transform clinical and operational data.
- Integrate data from various healthcare sources using HL7, ADT, SUI, and Formsite-based inputs.
- Ensure accuracy, integrity, and security of sensitive healthcare data.
- Collaborate with application developers and clinical teams to understand requirements and deliver scalable data solutions.
- Provide data extracts and reports as needed, working closely with analytics and product teams.
- Work independently and effectively in a remote, distributed team environment during UK hours.
Required Skills:
- 5+ years of experience in data engineering with strong proficiency in ETL and healthcare.
- Proven expertise in building and maintaining ETL pipelines in a healthcare or regulated environment.
- Deep understanding of healthcare data formats and protocols: HL7, ADT, SUI, Formsite, etc.
- Working experience with EHR platforms, particularly ModMed or similar (e.g., Epic, Cerner).
- Familiarity with data privacy standards and compliance (HIPAA or similar frameworks).
- Comfortable working in agile environments and using tools like Jira and Confluence.
- Excellent communication skills in English (verbal and written).
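The HL7 experience listed above typically means working with pipe-delimited HL7 v2 messages, where the MSH header segment carries the message type (e.g. ADT^A01 for an admit event). A minimal parsing sketch over an invented sample message:

```python
# Sketch: pull the message type (MSH-9) out of a pipe-delimited HL7 v2 message.
# The sample message content is invented for illustration.
SAMPLE = "\r".join([
    "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|202401011200||ADT^A01|MSG0001|P|2.5",
    "PID|1||12345^^^HOSP^MR||DOE^JOHN",
])

def message_type(raw):
    """Return (type, trigger_event) from the MSH segment, e.g. ("ADT", "A01")."""
    msh = raw.split("\r")[0]
    fields = msh.split("|")
    # MSH-9 lands at split index 8 because MSH-1 is the field separator itself.
    parts = fields[8].split("^")
    return parts[0], parts[1] if len(parts) > 1 else ""
```

In a real pipeline this routing decision (ADT vs. scheduling vs. results messages) is what determines which downstream ETL transformation a message feeds.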
Nice to Have:
- Experience with cloud data services (AWS/GCP/Azure).
- Familiarity with scripting languages like Python or Bash.
- ModMed (preferred) or other EHR experience
- Understanding of database version control and CI/CD workflows.
Shift Details:
- This role follows UK business hours (approx. 1:00 PM to 10:00 PM IST).
Flexibility for occasional overlap with US teams is a plus.
Equal Opportunity Statement
BigRio is an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, religion, national origin, sex, sexual orientation, gender identity, age, pregnancy, status as a qualified individual with disability, protected veteran status, or other protected characteristic as outlined by federal, state, or local laws. BigRio makes hiring decisions based solely on qualifications, merit, and business needs at the time. All qualified applicants will receive equal consideration for employment.