Data Engineering Lead

Gurgaon, Haryana | UnitedHealth Group

Posted 2 days ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Design and develop applications and services running on Azure, with a solid emphasis on Azure Databricks, ensuring optimal performance, scalability, and security
+ Build and maintain data pipelines using Azure Databricks and other Azure data integration tools
+ Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets (see the sketch after this list)
+ Write extensive queries in SQL and Snowflake
+ Implement security and access control measures and regularly audit Azure platform and infrastructure to ensure compliance
+ Create, understand, and validate design and estimated effort for given module/task, and be able to justify it
+ Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery
+ Maintain code quality by writing clean, maintainable, and testable code
+ Monitor performance and optimize resources to ensure cost-effectiveness and high availability
+ Define and document best practices and strategies regarding application deployment and infrastructure maintenance
+ Provide technical support and consultation for infrastructure questions
+ Help develop, manage, and monitor continuous integration and delivery systems
+ Take accountability and ownership of features and teamwork
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
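
To make the Spark work above concrete, here is a minimal PySpark sketch of a Databricks-style curation step. The mount paths, table layout, and column names (claim_id, claim_status) are illustrative assumptions, not Optum's actual schema.

```python
# Minimal PySpark sketch of a Databricks-style curation step.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-curation").getOrCreate()

# Read raw records from a Delta table (hypothetical mount path)
raw = spark.read.format("delta").load("/mnt/raw/claims")

# Basic cleanup: drop incomplete rows, dedupe, stamp the ingestion date
curated = (
    raw.filter(F.col("claim_status").isNotNull())
       .dropDuplicates(["claim_id"])
       .withColumn("ingest_date", F.current_date())
)

# Write the curated layer back as Delta for downstream consumers
curated.write.format("delta").mode("overwrite").save("/mnt/curated/claims")
```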
**Required Qualifications:**
+ B. Tech or MCA (16+ years of formal education)
+ Overall 7+ years of experience
+ 5+ years of experience writing advanced SQL
+ 3+ years of experience in Azure Data Factory (ADF), Databricks, and DevOps
+ 3+ years of experience in architecting, designing, developing, and implementing cloud solutions on Azure
+ 2+ years of experience in writing, reading, and debugging Spark, Scala, and Python code
+ Experience in interacting with international customers to gather requirements and convert them into solutions using relevant skills
+ Proficiency in programming languages and scripting tools
+ Understanding of cloud data storage and database technologies such as SQL and NoSQL
+ Solid troubleshooting skills, with the ability to diagnose and resolve issues across different technologies and environments
+ Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform
+ Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts
+ Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks
+ Proven excellent communication, writing, and presentation skills
**Preferred Qualifications:**
+ Experience and skills with Snowflake
+ Knowledge of AI/ML or LLM (GenAI)
+ Knowledge of US Healthcare domain and experience with healthcare data
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Manager

Noida, Uttar Pradesh | ₹2,000,000 - ₹2,500,000 per year | Uplers

Posted today


Job Description

Data Engineering Manager - Azure & Python

Experience: Years Exp.

Salary: Competitive

Preferred Notice Period: 30 Days

Shift: 10:00 AM to 7:00 PM IST

Location: Noida (remote for 6 months, later hybrid)

Placement Type: Permanent

(Note: This is a requirement for one of Uplers' clients.)

Must-have skills:

Engineering management, Data Engineering, Azure Data Factory, Python, SQL OR NoSQL OR Azure, Backend OR FullStack

Nuaav (one of Uplers' clients) is looking for:

An Engineering Manager who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Role Overview

As Engineering Manager (Data Engineering) at Nuaav, you will lead a talented team of data engineers focused on architecting and delivering enterprise-grade, scalable data platforms on Microsoft Azure. This role demands deep expertise in Azure cloud services and Python programming, combined with strong leadership skills to drive technical strategy, team growth, and the execution of robust data infrastructure.

Key Responsibilities

  • Lead, mentor, and grow a high-performing data engineering team delivering next-generation data solutions on Azure.
  • Architect and oversee the development of scalable data pipelines and analytics platforms using Azure Data Lake, Data Factory, Databricks, and Synapse.
  • Drive technical execution of data warehousing and BI solutions with advanced Python programming (including PySpark).
  • Enforce high standards for data quality, consistency, governance, and security across data systems (see the sketch after this list).
  • Collaborate cross-functionally with product managers, software engineers, data scientists, and analysts to enable business insights and ML initiatives.
  • Define and implement best practices for ETL design, data integration, and cloud-native workflows.
  • Continuously optimize data processing for performance, reliability, and cost efficiency.
  • Oversee technical documentation, onboarding, and process compliance within the engineering team.
  • Stay abreast of industry trends in data engineering, Azure technologies, and cloud security to maintain cutting-edge capabilities.
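
As one illustration of the data-quality enforcement called out above, the following is a hedged PySpark sketch of a null-rate gate that fails a pipeline run when key columns degrade. The lake path, column list, and 1% threshold are assumptions for the example, not Nuaav's actual standards.

```python
# Hedged sketch of a data-quality gate in PySpark: abort the run if
# null rates in key columns exceed a tolerance. All names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

# Hypothetical curated dataset on Azure Data Lake Storage Gen2
df = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/curated/orders")

total = df.count()
for col_name in ["order_id", "customer_id", "order_ts"]:
    nulls = df.filter(F.col(col_name).isNull()).count()
    null_rate = nulls / total if total else 0.0
    if null_rate > 0.01:  # 1% tolerance, an assumed threshold
        raise ValueError(f"DQ gate failed: {col_name} null rate {null_rate:.2%}")

print("DQ gate passed")
```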

Qualifications

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5+ years of data engineering experience with significant team leadership or management exposure.
  • Strong expertise in designing and building cloud data solutions on Microsoft Azure (Data Lake, Synapse, Data Factory, Databricks).
  • Advanced Python skills for data transformation, automation, and pipeline development (including PySpark).
  • Solid SQL skills; experience with big data tools like Spark, Hive, or Scala is a plus.
  • Knowledge of CI/CD pipelines, DevOps practices, and Infrastructure-as-Code (Terraform, GitHub Actions).
  • Experience with data security, governance, and compliance frameworks in cloud environments.
  • Excellent communication, leadership, and project management capabilities.

Desired Skills

  • Expert-level knowledge of Azure Data Factory, Databricks, and Synapse Analytics.
  • Proficiency with Python, PySpark, SQL for ETL and data workflows.
  • Familiarity with big data frameworks (Spark, HDFS).
  • Hands-on experience with DevOps tools and container technologies (GitHub, Docker, Kubernetes).
  • Strong cross-functional collaboration and team mentoring skills.

This role offers an exciting opportunity to lead and scale mission-critical data infrastructure at Nuaav, a firm known for its boutique approach combining technical mastery with personalized client impact and agility in delivery. Candidates will thrive in a fast-paced, innovative setting with a clear path for growth and influence.

Why Join Nuaav?

  • Opportunity to work in a strong AI-driven consulting firm with direct client engagement and high-impact projects.
  • Be part of a dynamic environment focused on innovation, agility, and quality over volume.
  • Exposure to cutting-edge technologies across data engineering, AI, and product platforms.
  • Work on global-scale digital transformation projects with close collaboration alongside senior consultants and corporate leaders.
  • A culture that values personalized growth, client-centric excellence, and thought leadership.

How to apply for this opportunity:

Easy 3-Step Process:

1. Click on Apply and register or log in on our portal

2. Upload your updated resume and complete the screening form

3. Increase your chances of getting shortlisted and meet the client for the interview

About Our Client:

Nuaav is a boutique technology consulting firm focused on delivering innovative, scalable, and secure data engineering and AI solutions.

About Uplers:

Our goal is to make hiring and getting hired reliable, simple, and fast. We help our talent find and apply for relevant product and engineering job opportunities and progress in their careers.

(Note: There are many more opportunities apart from this on the portal.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!


Data Engineering Role

Delhi, Delhi | 100x.inc (also advertised for Ghaziabad, New Delhi, Faridabad, Gurgaon, and Noida)

Posted 1 day ago


Job Description

Minimum Requirements:

- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting

Highly Preferred Skills:

- Experience with orchestration tools such as Airflow or similar (see the sketch after this list)
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy
- Exposure to DLT (Data Load Tool)
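
As a hedged illustration of the orchestration experience listed above, here is a minimal Airflow DAG in Python. The DAG id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Minimal Airflow sketch: a daily extract -> load DAG.
# Task bodies are stubs; real tasks would call pipeline code.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract step: pull raw data from the source")


def load():
    print("load step: write transformed data to the warehouse")


with DAG(
    dag_id="example_etl",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",             # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```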

Ideal Candidate Profile:

The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.

Key Responsibilities:

- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging (see the sketch after this list)
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions
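
To illustrate the AWS-side work, below is a hedged boto3 sketch that starts an Athena query and polls until it finishes. The region, database, query, and S3 output location are hypothetical placeholders, not 100x.inc's setup.

```python
# Illustrative boto3 sketch: run an Athena query and wait for completion.
# Region, database, and S3 bucket are assumptions for the example.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Athena query finished with state:", state)
```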

Compensation range: ₹8.4–12 LPA (fixed base), excluding equity, performance bonus, and revenue share components.