2,750 ETL Processes jobs in India

Senior Data Engineer (Data Lake, Forecasting & Governance) - 9+ yrs - Immediate

Prayagraj, Uttar Pradesh Markovate


Job Description



We are looking for an experienced Senior Data Engineer to lead the development of scalable AWS-native data lake pipelines, with a strong focus on time series forecasting, upsert-ready architectures, and enterprise-grade data governance. This role demands end-to-end ownership of the data lifecycle, from ingestion through partitioning, versioning, QA, and lineage tracking to BI delivery.

The ideal candidate will be highly proficient in AWS data services, PySpark, and versioned storage formats such as Apache Hudi or Iceberg. A strong understanding of data quality, observability, governance, and metadata management in large-scale analytical systems is critical.


Roles & Responsibilities


  • Design and implement data lake zoning (Raw → Clean → Modeled) using Amazon S3, AWS Glue, and Athena.
  • Ingest structured and unstructured datasets including POS, USDA, Circana, and internal sales data.
  • Build versioned and upsert-ready ETL pipelines using Apache Hudi or Iceberg.
  • Create forecast-ready datasets with lagged, rolling, and trend features for revenue and occupancy modeling (a minimal sketch follows this list).
  • Optimize Athena datasets with partitioning, CTAS queries, and S3 metadata tagging.
  • Implement S3 lifecycle policies, intelligent file partitioning, and audit logging for performance and compliance.
  • Build reusable transformation logic using dbt-core or PySpark to support KPIs and time series outputs.
  • Integrate data quality frameworks such as Great Expectations, custom logs, and AWS CloudWatch for field-level validation and anomaly detection.
  • Apply data governance practices using tools like OpenMetadata or Atlan, enabling lineage tracking, data cataloging, and impact analysis.
  • Establish QA automation frameworks for pipeline validation, data regression testing, and UAT handoff.
  • Collaborate with BI, QA, and business teams to finalize schema design and deliverables for dashboard consumption.
  • Ensure compliance with enterprise data governance policies and enable discovery and collaboration through metadata platforms.
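
For illustration only, here is a minimal PySpark sketch of the lagged, rolling, and trend features mentioned above; the bucket paths, table layout, and column names (store_id, week, revenue) are hypothetical and not taken from the posting.

```python
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("forecast-features").getOrCreate()

# Hypothetical modeled-zone table with one row per store and week.
sales = spark.read.parquet("s3://example-bucket/modeled/weekly_sales/")

w = Window.partitionBy("store_id").orderBy("week")

features = (
    sales
    # Lag features: revenue 1 and 4 periods back.
    .withColumn("revenue_lag_1", F.lag("revenue", 1).over(w))
    .withColumn("revenue_lag_4", F.lag("revenue", 4).over(w))
    # Rolling 4-week mean, excluding the current row to avoid target leakage.
    .withColumn(
        "revenue_roll_mean_4",
        F.avg("revenue").over(w.rowsBetween(-4, -1)),
    )
    # Simple trend proxy: change versus the previous period.
    .withColumn("revenue_trend_1", F.col("revenue") - F.col("revenue_lag_1"))
)

features.write.mode("overwrite").partitionBy("week").parquet(
    "s3://example-bucket/modeled/forecast_features/"
)
```

Excluding the current row from the rolling window is a deliberate choice here: it keeps the target value out of its own features when the dataset is later used for forecasting.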


Preferred Candidate Profile

  • 9-12 years of experience in data engineering.
  • Deep hands-on experience with AWS Glue, Athena, S3, Step Functions, and the Glue Data Catalog.
  • Strong command over PySpark, dbt-core, CTAS query optimization, and advanced partition strategies.
  • Proven experience with versioned ingestion using Apache Hudi, Iceberg, or Delta Lake.
  • Experience in data lineage, metadata tagging, and governance tooling using OpenMetadata, Atlan, or similar platforms.
  • Proficiency in feature engineering for time series forecasting (lags, rolling windows, trends).
  • Expertise in Git-based workflows, CI/CD, and deployment automation (Bitbucket or similar).
  • Strong understanding of time series KPIs: revenue forecasts, occupancy trends, demand volatility, etc.
  • Knowledge of statistical forecasting frameworks (e.g., Prophet, GluonTS, Scikit-learn).
  • Experience with Superset or Streamlit for QA visualization and UAT testing.
  • Experience building data QA frameworks and embedding data validation checks at each stage of the ETL lifecycle (see the sketch after this list).
  • Independent thinker capable of designing systems that scale with evolving business logic and compliance requirements.
  • Excellent communication skills for collaboration with BI, QA, data governance, and business stakeholders.
  • High attention to detail, especially around data accuracy, documentation, traceability, and auditability.
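
For illustration, a minimal hand-rolled PySpark validation step of the kind the QA item above describes (this is deliberately not the Great Expectations API); the dataset path, column names, and checks are assumptions for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clean-zone-qa").getOrCreate()

# Hypothetical clean-zone dataset to validate before promotion to the modeled zone.
df = spark.read.parquet("s3://example-bucket/clean/pos_sales/")

checks = {
    # No nulls allowed in key business fields.
    "null_store_id": df.filter(F.col("store_id").isNull()).count(),
    "null_week": df.filter(F.col("week").isNull()).count(),
    # Revenue should never be negative.
    "negative_revenue": df.filter(F.col("revenue") < 0).count(),
    # Duplicate business keys indicate a broken upsert.
    "duplicate_keys": df.count() - df.dropDuplicates(["store_id", "week"]).count(),
}

failures = {name: count for name, count in checks.items() if count > 0}
if failures:
    # In a real pipeline this would also emit metrics/logs (e.g. to CloudWatch) before halting.
    raise ValueError(f"Data quality checks failed: {failures}")

print("All data quality checks passed.")
```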

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.

However, we have similar jobs available for you below.

Data Engineering

Mumbai, Maharashtra Godrej Capital

Posted today


Job Description

Godrej Capital is a subsidiary of Godrej Industries and is the holding company for Godrej Housing Finance & Godrej Finance. With a digital-first approach and a keen focus on customer-centric product innovation, Godrej Capital offers Home Loans, Loan Against Property, Property Loans, and Business Loans, and is positioned to diversify into other customer segments and launch new products. The company is focused on building a long-term, sustainable retail financial services business in India, anchored by the Godrej Group's 125+ year legacy of trust and excellence. Godrej Capital has a special focus on learning and capability development across its employee base and is committed to diversity, equity, and inclusion as a guiding principle.


The organization has been consistently recognized as a Great Place to Work™ receiving certifications in 2022 and 2023. As it stands, Godrej Capital holds a spot among India's Top 50 Best Workplaces in BFSI 2023 and is also recognized as one of India’s Great Mid-Size Workplaces 2023. Beyond that, it has also had the honor of being named the Best Organization for Women by The Economic Times in both 2022 and 2023, and the Best Workplaces for Women by Great Place to Work in 2022 and in 2023.


Function

Information Technology

Job Purpose

  • The role incumbent will be responsible for managing, expanding, and optimizing our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The incumbent will support our team of data analysts and scientists on data initiatives and will ensure optimal and timely data delivery. The candidate must be self-driven and comfortable supporting the data needs of multiple teams, systems, and products, and will play a major role as we build a superior, scalable architecture that lets us leverage data to the fullest extent.


Role

  • Create and maintain optimal data pipeline architecture.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Build and maintain the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Working knowledge of message queuing, stream processing, and big data platforms (optional).
  • Perform sanity testing, issue reporting, and tracking.
  • Assist teams with UAT and resolve issues according to criticality.
  • Handle audit and compliance activities for the data platform.
  • Track and manage system availability and maintenance tasks.

Qualification & experience

  • Years of experience: 3-5 years
  • Qualification – Engineering / Certified Data Engineer

Essential skills

  • Experience with data pipeline and workflow management tools.
  • Knowledge of AWS cloud services, data lakes, Glue, Python, PySpark, Kafka, APIs, Change Data Capture, streaming data, and data modelling will be a key advantage (a minimal streaming-ingest sketch follows this list).
  • Experience with relational SQL and NoSQL databases.
  • Exposure to lending systems and domain
  • Machine Learning skills
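
As a rough sketch of the Kafka/streaming-ingest skills listed above, the snippet below reads a topic with Spark Structured Streaming and lands it in a raw zone; the broker address, topic name, and storage paths are placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("kafka-raw-ingest").getOrCreate()

# Hypothetical CDC/event topic published by an upstream lending system.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "loan-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; keep the payload plus arrival metadata.
raw = events.select(
    F.col("key").cast("string").alias("event_key"),
    F.col("value").cast("string").alias("payload"),
    F.col("topic"),
    F.col("timestamp").alias("ingested_at"),
)

# Append micro-batches to the raw zone; the checkpoint directory makes the job restartable.
query = (
    raw.writeStream.format("parquet")
    .option("path", "s3://example-bucket/raw/loan_events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/loan_events/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```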

Ideal candidate (in terms of current role/ organization/ industry)

  • An individual inclined to learn and explore new technologies and to make the best use of the resources at hand.
  • Able to influence and work in a collaborative manner.

Data Engineering

Bengaluru, Karnataka ScaleneWorks

Posted today


Job Description

Job Title: Middleware Engineer
Position: Data Engineer
Experience: 5-6 yrs
Category: IT Infrastructure
Main location: India, Karnataka, Bangalore
Employment Type: Full Time
Qualification: Bachelor's degree in Computer Science or related field or higher.
Roles and Responsibilities


Data Engineer - 5-6 years of experience.
Responsibilities
Design, develop, and maintain data architectures, pipelines, and workflows for the collection, processing, storage, and retrieval of large volumes of structured and unstructured data from multiple sources.
Collaborate with cross-functional teams to identify and prioritize data engineering requirements and to develop and deploy data-driven solutions to address business challenges.
Build and maintain scalable, fault-tolerant, and high-performance data storage and retrieval systems (e.g., data lakes, data warehouses, databases) on cloud infrastructure such as AWS, Azure, or Google Cloud Platform.
Develop and maintain ETL workflows, data pipelines, and data transformation processes to prepare data for machine learning and AI applications.
Implement and optimize distributed computing frameworks such as Hadoop, Spark, or Flink to support high-performance and scalable processing of large data sets.
Build and maintain monitoring, alerting, and logging systems to ensure the availability, reliability, and performance of data pipelines and data platforms.
Collaborate with Data Scientists and Machine Learning Engineers to deploy models on production environments and ensure their scalability, reliability, and accuracy.
Requirements:
Bachelor's or master's degree in computer science, engineering, or a related field.
At least 5-6 years of experience in data engineering, with a strong background in machine learning, cloud computing, and big data technologies.
Experience with at least one major cloud platform (AWS, Azure, GCP).
Proficiency in programming languages like Python, Java, and SQL.
Experience with distributed computing technologies such as Hadoop, Spark, and Kafka.
Familiarity with database technologies such as SQL, NoSQL, NewSQL.
Experience with data warehousing and ETL tools such as Redshift, Snowflake, or Airflow.
Strong problem-solving and analytical skills.
Excellent communication and teamwork skills.
Preferred qualifications:
Experience with DevOps practices and tools such as Docker, Kubernetes, Ansible, or Terraform.
Experience with data visualization tools such as Tableau, Superset, Power BI, Plotly, or D3.js.
Experience with stream processing frameworks such as Kafka, Pulsar, or Kinesis.
Experience with data governance, data security, and compliance.
Experience with software engineering best practices and methodologies such as Agile or Scrum.
Must Have Skills
Data engineer with expertise in machine learning, cloud computing, and big data technologies.
Data engineering experience on multiple clouds, one of them preferably GCP.
Data lakes, data warehouses, databases.
ETL workflows, data pipelines, data platforms.
Hadoop, Spark, or Flink.
Hadoop, Spark, and Kafka.
SQL, NoSQL, NewSQL.
Redshift, Snowflake, or Airflow.



Data Engineering Consultant

Noida, Uttar Pradesh UnitedHealth Group

Posted 1 day ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibility:**
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
**Required Qualification:**
+ Undergraduate degree or equivalent experience
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Analyst

Chennai, Tamil Nadu UnitedHealth Group

Posted 5 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Communicate the overall status of assigned tasks, achievements, and POC results to project stakeholders at all levels to gain buy-in
+ Provide recommendations and carry out POCs to improve performance, reliability, and reusability within the constraints of budget and business dependencies
+ In base- and mid-level roles, engage across teams in a capacity ranging from assisting on in-flight initiatives to delivering technical briefings and demonstrations of new technologies across the organization
+ Design, develop, and implement big data technology solutions to convert raw datasets into reusable assets
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
**Required Qualifications:**
+ Undergraduate degree or equivalent experience
+ 2+ years of combined experience in solution development, project deliveries, and product development
+ Skills:
+ Big Data / Databricks SQL & PySpark
+ Programming Languages - Python, Snowflake
+ Build / Deployment Automation - GitHub
+ Cloud - Azure
+ Knowledge of Scrum
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Consultant

Bangalore, Karnataka UnitedHealth Group

Posted 5 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Solid experience (3+ years) with SQL Server development (PL/SQL programming)
+ Solid experience (2+ years) with database administration (SQL Server)
+ Experience with SQL Server Management Studio (SSMS)
+ Experience with SQL Server Profiler and resolving deadlocks and blocking sessions
+ Experience with MS Azure platform
+ Experience with analysing execution plans, followed by query tuning, optimization, and indexing strategies/statistics
+ Ability to analyse and optimize slow-running queries
+ Experience with identifying/resolving Sleeping Sessions
+ Experience with index maintenance (rebuild/reorganize)
+ Application Performance Tuning - Understanding of connection pooling, caching, and load balancing
**Required Qualifications:**
+ Bachelor's degree or equivalent experience
+ 7+ years of overall experience in Data & Analytics engineering
+ 5+ years of experience working with Azure, Databricks, and ADF, Data Lake
+ 5+ years of experience working with data platform or product using PySpark and Spark-SQL
+ Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
+ In-depth understanding of Azure architecture & ability to come up with efficient design & solutions
+ Highly proficient in Python and SQL
+ Proven excellent communication skills
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Analyst

Hyderabad, Telangana UnitedHealth Group

Posted 7 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Implementation:
+ Data Mapping and Transformation:
+ File Mapping: Complete file mapping based on layouts and requirements provided by Project Managers
+ Business Logic: Document business logic for transforming data into product specifications
+ Data Quality Checks: Run and interpret quality checks against loaded data to ensure accuracy and completeness
+ Data transformation: Author and test ETL to convert data from one format to another. This includes cleaning, filtering, aggregating, enriching, normalizing, and encoding data to make it suitable for analysis, processing, or integration (a minimal sketch follows this list)
+ Troubleshooting and Support:
+ Issue Resolution: Troubleshoot issues raised by project managers and cross matrix teams from root cause identification to resolution
+ Support Requests: Handle support requests and provide timely solutions to ensure client satisfaction
+ Collaboration and Communication:
+ Stakeholder Interaction: Work closely with Client, Project Managers, Product managers and other stakeholders to understand requirements and deliver solutions
+ Documentation: Contribute to technical documentation of specifications and processes
+ Communication: Effectively communicate complex concepts, both verbally and in writing, to team members and clients
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
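
As a loose illustration of the cleaning, filtering, and normalizing work described in the data transformation item above, here is a minimal PySpark sketch; the input path, column names, and code values are hypothetical rather than anything specified by the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-normalize").getOrCreate()

# Hypothetical flat-file claims extract received from a payer.
claims = spark.read.option("header", True).csv("/landing/claims/2024-01.csv")

normalized = (
    claims
    # Filter out rows missing the claim identifier.
    .filter(F.col("claim_id").isNotNull())
    # Trim and upper-case categorical codes so downstream joins are consistent.
    .withColumn("claim_status", F.upper(F.trim(F.col("claim_status"))))
    # Cast amounts and dates into typed columns.
    .withColumn("billed_amount", F.col("billed_amount").cast("decimal(12,2)"))
    .withColumn("service_date", F.to_date("service_date", "yyyy-MM-dd"))
    # Encode a simple derived flag used by downstream quality checks.
    .withColumn("is_denied", (F.col("claim_status") == "DENIED").cast("boolean"))
    .dropDuplicates(["claim_id"])
)

normalized.write.mode("overwrite").parquet("/curated/claims/2024-01/")
```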
**Required Qualifications:**
+ Bachelor's degree in Computer Science, Health Informatics, Information Technology, or other related fields.
+ 2+ years of experience working with healthcare data (EMR clinical and financial data, HL7 v2, C-CDAs, EDI data such as 835s and 837s, claims from a variety of payers, etc.) sent as flat files, JSON, XML, etc.
+ 2+ years of experience working with Hive SQL, Postgres, or another data analysis language.
+ 1+ years of experience with Git/GitHub.
**Preferred Qualifications:**
+ 2+ years of experience managing clients or working with them on tasks like requirements gathering, impact analysis etc.
+ 2+ years of experience implementing and supporting client solutions on Azure Cloud platform
+ 2+ years of experience coding in Databricks/Python or Databricks/Scala
+ Experience with cutting edge technology (AI/ML)
+ Familiarity with Agile or experience working in Scrum teams
+ Proven ability to be highly analytical and to think outside the box
+ Proven solid written and verbal communication skills; ability to clearly articulate ideas and concepts
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Consultant

Gurgaon, Haryana UnitedHealth Group

Posted 11 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
The data engineering emphasis supports the ongoing digital transformation and modernization of internal audit's risk assessment, automation efforts, and risk monitoring activities. This position is responsible for supporting Internal Audit engagements with scalable, end-to-end ETL and analytic processes. Additionally, the role works closely with data analytics teams to create robust scripted data solutions, develop and support business monitoring tools, and support existing data systems and analytic reports. This includes identifying and integrating data sources, assessing data quality, and developing and executing data analytic tools/languages to support enterprise analytical risk assessments. This role is integral to our strategy to enable Internal Audit with data-driven insights and bring value to our business partners.
The role will challenge you to leverage your data analytics skills on a variety of initiatives in a hands-on role, as well as the opportunity to develop your skills as an auditor in a matrixed and cross-functional internal audit department.
**Primary Responsibilities:**
+ Automation and Data Modeling
+ Design, build, and maintain automated data pipelines for extracting, transforming, and loading data from diverse sources (enterprise platforms, SharePoint, NAS drives, etc.)
+ Develop robust and scalable data models to support risk surveillance analytics and reporting needs
+ Implement and maintain workflows for scheduling and monitoring ETL/ELT jobs to ensure data freshness and reliability
+ Utilize scripting and workflow automation tools to reduce manual intervention in data movement and processing
+ Integrate new data sources and automate ingestion processes to expand surveillance coverage
+ Data Management and Governance
+ Ensure data quality, completeness, and consistency across all risk surveillance datasets
+ Develop and enforce data validation, cleansing, and transformation procedures to support accurate analysis
+ Implement data security and access controls in compliance with regulatory and organizational standards
+ Maintain detailed metadata, data dictionaries, and lineage documentation for all data assets
+ Support data governance initiatives, including data cataloguing, retention policies, and audit readiness
+ Collaboration and Communication
+ Partner with Risk Surveillance partners, data analysts, and audit teams to understand requirements and deliver analytical-ready datasets
+ Collaborate with IT, data stewards, and business partners to resolve data issues and facilitate access to new data sources
+ Communicate data pipeline status, issues, and solution approaches clearly to both technical and non-technical stakeholders
+ Provide training and support for users on data tools, repositories, and best practices
+ Document data workflows, processes, and solutions for knowledge sharing and operational continuity
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
**Required Qualifications:**
+ Overall 8+ years of program experience in Computer Science, Information Technology, Mathematics, Engineering, Data Analytics or related field
+ 4+ years of SQL programming
+ 4+ years programming in Python and/or R
+ 2+ years of data modeling and scaled automation experience
+ 2+ years of data visualization experience (Tableau and/or Power BI)
+ Solid interpersonal and analytical skills while working effectively with a matrixed team
+ Solid oral and written communication skills
**Preferred Qualifications:**
+ 2+ years of experience in developing scalable solutions with SSIS, Data Factory, Python, or R
+ Extensive program experience in Computer Science, Information Technology, Mathematics, Engineering, or related field
+ Internal Audit / Control experience
+ Cloud computing experience including Azure, AWS, Databricks, and/or Spark computing
+ Experience working in a Healthcare Industry and or a complex IT environment
+ Experience with conducting automation surrounding API calls
+ Working knowledge of Big Data tools, Cloud platforms, SQL Server database engineering
+ Data Science experience including regression analysis and machine learning techniques
+ Change management tool experience (e.g., Github, Jenkins, or similar)
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Consultant

Noida, Uttar Pradesh UnitedHealth Group

Posted 15 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Ingest data from multiple on-prem and cloud data sources using various tools & capabilities in Azure
+ Design and develop Azure Databricks processes using PySpark/Spark-SQL
+ Design and develop orchestration jobs using ADF, Databricks Workflow
+ Analyze data engineering processes under development and act as an SME to troubleshoot performance issues and suggest improvements
+ Build a test framework for Databricks notebook jobs to enable automated testing before code deployment (a minimal sketch follows this list)
+ Design and build POCs to validate new ideas, tools, and architectures in Azure
+ Continuously explore new Azure services and capabilities; assess their applicability to business needs
+ Prepare case studies and technical write-ups to showcase successful implementations and lessons learned
+ Work closely with clients, business stakeholders, and internal teams to gather requirements and translate them into technical solutions using best practices and appropriate architecture
+ Contribute to full lifecycle project implementations, from design and development to deployment and monitoring
+ Ensure solutions adhere to security, compliance, and governance standards
+ Monitor and optimize data pipelines and cloud resources for cost and performance efficiency
+ Identify solutions to non-standard requests and problems
+ Mentor and support existing on-prem developers for cloud environment
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
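
As a rough sketch of an automated test for a notebook-style transformation (see the test-framework item above), here is a small pytest example; the transformation function, threshold, and column names are hypothetical placeholders.

```python
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_high_value_flag(df, threshold=1000.0):
    """Hypothetical transformation normally defined in a Databricks notebook or shared module."""
    return df.withColumn("is_high_value", F.col("amount") >= F.lit(threshold))


@pytest.fixture(scope="module")
def spark():
    # Local Spark session so the test can run in CI before deployment.
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()


def test_add_high_value_flag(spark):
    df = spark.createDataFrame(
        [("a", 1500.0), ("b", 200.0)],
        ["order_id", "amount"],
    )

    result = {r["order_id"]: r["is_high_value"] for r in add_high_value_flag(df).collect()}

    assert result == {"a": True, "b": False}
```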
**Required Qualifications:**
+ Undergraduate degree or equivalent experience
+ 7+ years of overall experience in Data & Analytics engineering
+ 5+ years of experience working with Azure, Databricks, and ADF, Data Lake
+ 5+ years of experience working with data platform or product using PySpark and Spark-SQL
+ Solid experience with CI/CD tools such as Jenkins, GitHub, GitHub Actions, Maven, etc.
+ In-depth understanding of Azure architecture & ability to come up with efficient design & solutions
+ Highly proficient in Python and SQL
+ Proven excellent communication skills
+ **Key Skill:** Azure Data Engineer - Azure Databricks, Azure Data Factory, Python/PySpark, Terraform
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Manager

Pune, Maharashtra Panasonic Avionics Corporation

Posted 15 days ago


Job Description

**Overview**
Who We Are: 
Ever wonder who brings the entertainment to your flights? Panasonic Avionics Corporation is #1 in the industry for delivering inflight products such as movies, games, WiFi, and now Bluetooth headphone connectivity!  
How exciting would it be to be a part of the innovation that goes into creating technology that delights millions of people in an industry that's here to stay! With our company's history spanning over 40 years, you will have stability, career growth opportunities, and will work with the brightest minds in the industry. And we are committed to a diverse and inclusive culture that will help our organization thrive! We seek diversity in many areas such as background, culture, gender, ways of thinking, skills and more. 
If you want to learn more about us visit us at ( . And for a full listing of open job opportunities go to (   
**The Position:**
We are seeking a proven Data Engineering Leader to drive the design, development, and deployment of scalable, secure, and high-performance data solutions. This role will lead high-performing teams, architect cloud-native data platforms, and collaborate closely with business, AI/ML, and BI teams to deliver end-to-end data products that power innovation and strategic decision-making.
The position offers the opportunity to shape data architecture strategy, establish best practices in Lakehouse and streaming solutions, and enable advanced analytics and AI/ML at scale.
**Responsibilities**
**What We're Looking For:**
+ Proven leadership in building and mentoring high-performing data engineering teams.
+ Expertise in architecting cloud-native data platforms on AWS, leveraging services such as EMR, EKS, Glue, Redshift, S3, Lambda, and SageMaker.
+ Strong background in Lakehouse architecture (Glue Catalog, Iceberg, Delta Lake) and distributed processing frameworks (Spark, Hive, Presto).
+ Experience with real-time streaming solutions (Kafka, Kinesis, Flink).
+ Proficiency in orchestrating complex data workflows with Apache Airflow (a minimal DAG sketch follows this list).
+ Hands-on experience with GitLab CI/CD, Terraform, CloudFormation Templates, and Infrastructure-as-Code.
+ Strong understanding of MDM strategies and data governance best practices (GDPR, HIPAA, etc.).
+ Ability to design and develop middleware APIs (REST) to seamlessly integrate data pipelines with applications and analytics platforms.
+ Experience supporting AI/ML teams with feature engineering, training, and deployment pipelines using SageMaker.
+ Solid knowledge of SQL & NoSQL databases (Redshift, DynamoDB, PostgreSQL, Elasticsearch).
+ Familiarity with BI enablement and data modeling for visualization platforms like Amazon QuickSight.
+ In-depth knowledge of security best practices in AWS-based data architectures.
+ Demonstrated success in driving AI/ML initiatives from ideation to production.
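
For the Airflow orchestration item above, here is a minimal DAG sketch; the DAG id, schedule, and task callables are placeholders rather than anything specific to this role.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system into the raw zone.
    print("extracting")


def transform():
    # Placeholder: run Spark/EMR transformations into the curated zone.
    print("transforming")


with DAG(
    dag_id="example_lakehouse_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Simple linear dependency: extract before transform.
    extract_task >> transform_task
```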
**Our Principles:**
Contribution to Society | Fairness & Honesty | Cooperation & Team Spirit | Untiring Effort for Improvement | Courtesy & Humility | Adaptability | Gratitude 
**What We Offer:**
At Panasonic Avionics Corporation we realize the most important aspects in leading our industry are the bright minds behind everything we do. We are proud to offer our employees a highly competitive, comprehensive and flexible benefits program. 
**Qualifications**
**Educational Background:**
+ Bachelor's degree or higher in Computer Science, Data Engineering, Aerospace Engineering, or a related field.
+ Advanced degrees (Master's/PhD) in Data Science or AI/ML are a plus.
REQ-
 
