200 Big Data Technologies jobs in New Delhi
Data Engineering Lead

Posted 2 days ago
Job Description
**Primary Responsibilities:**
+ Design and develop applications and services running on Azure, with a solid emphasis on Azure Databricks, ensuring optimal performance, scalability, and security
+ Build and maintain data pipelines using Azure Databricks and other Azure data integration tools
+ Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets (see the sketch after this list)
+ Write extensive queries in SQL and Snowflake
+ Implement security and access control measures and regularly audit Azure platform and infrastructure to ensure compliance
+ Create, understand, and validate the design and estimated effort for a given module/task, and be able to justify them
+ Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery
+ Maintain code quality by writing clean, maintainable, and testable code
+ Monitor performance and optimize resources to ensure cost-effectiveness and high availability
+ Define and document best practices and strategies regarding application deployment and infrastructure maintenance
+ Provide technical support and consultation for infrastructure questions
+ Help develop, manage, and monitor continuous integration and delivery systems
+ Take accountability and ownership of features and teamwork
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
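For illustration only, here is a minimal sketch of the kind of Spark/Python pipeline work these responsibilities describe on Azure Databricks. The paths, table, and column names (claims, member_id, allowed_amount) are hypothetical placeholders, not part of the posting, and the Delta source is an assumption.

```python
from pyspark.sql import SparkSession, functions as F

# All paths and column names here are hypothetical placeholders.
# On Azure Databricks a SparkSession is normally provided as `spark`;
# the builder call below just makes the sketch self-contained.
spark = SparkSession.builder.appName("claims-daily-aggregate").getOrCreate()

raw = spark.read.format("delta").load("/mnt/raw/claims")  # assumed Delta source table

daily = (
    raw.filter(F.col("status") == "APPROVED")
       .withColumn("service_day", F.to_date("service_date"))
       .groupBy("member_id", "service_day")
       .agg(F.sum("allowed_amount").alias("total_allowed"))
)

# Write the curated result back to the lake for downstream SQL/BI consumers
daily.write.format("delta").mode("overwrite").save("/mnt/curated/claims_daily")
```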
**Required Qualifications:**
+ B. Tech or MCA (16+ years of formal education)
+ Overall 7+ years of experience
+ 5+ years of experience in writing advanced level SQL
+ 3+ years of experience in Azure (ADF), Databricks and DevOps
+ 3+ years of experience in architecting, designing, developing, and implementing cloud solutions on Azure
+ 2+ years of experience in writing, reading, and debugging Spark, Scala, and Python code
+ Experience in interacting with international customers to gather requirements and convert them into solutions using relevant skills
+ Proficiency in programming languages and scripting tools
+ Understanding of cloud data storage and database technologies such as SQL and NoSQL
+ Solid troubleshooting skills, with the ability to troubleshoot issues across different technologies and environments
+ Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform
+ Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts
+ Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks
+ Proven excellent communication, writing, and presentation skills
**Preferred Qualifications:**
+ Experience and skills with Snowflake
+ Knowledge of AI/ML or LLM (GenAI)
+ Knowledge of US Healthcare domain and experience with healthcare data
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._
Data Engineering Manager
Posted today
Job Description
Data Engineering Manager - Azure & Python
Experience: Years Exp.
Salary: Competitive
Preferred Notice Period: 30 Days
Shift: 10:00 AM to 7:00 PM IST
Opportunity Type: Noida (Remote for 6 months, Later Hybrid)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' Clients)
Must have skills required :
Engineering management, Data Engineering, Azure Data Factory, Python, SQL OR NoSQL OR Azure, Backend OR FullStack
Nuaav (One of Uplers' Clients) is Looking for:
An Engineering Manager who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.
Role Overview
As Engineering Manager Data Engineering at Nuaav, you will lead a talented team of data engineers focused on architecting and delivering enterprise-grade, scalable data platforms on Microsoft Azure. This role demands deep expertise in Azure cloud services and Python programming, combined with strong leadership skills to drive technical strategy, team growth, and execution of robust data infrastructures.
Key Responsibilities
- Lead, mentor, and grow a high-performing data engineering team delivering next-generation data solutions on Azure.
- Architect and oversee the development of scalable data pipelines and analytics platforms using Azure Data Lake, Data Factory, Databricks, and Synapse.
- Drive technical execution of data warehousing and BI solutions with advanced Python programming (including PySpark).
- Enforce high standards for data quality, consistency, governance, and security across data systems (see the sketch after this list).
- Collaborate cross-functionally with product managers, software engineers, data scientists, and analysts to enable business insights and ML initiatives.
- Define and implement best practices for ETL design, data integration, and cloud-native workflows.
- Continuously optimize data processing for performance, reliability, and cost efficiency.
- Oversee technical documentation, onboarding, and process compliance within the engineering team.
- Stay abreast of industry trends in data engineering, Azure technologies, and cloud security to maintain cutting-edge capabilities.
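As an illustration of the data-quality responsibility above, a minimal PySpark quality-gate sketch follows. The ADLS path, column names, and thresholds are assumptions for the example, not Nuaav's actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical table path and column names; thresholds would normally live in config.
spark = SparkSession.builder.appName("orders-quality-gate").getOrCreate()

orders = spark.read.parquet("abfss://curated@examplelake.dfs.core.windows.net/orders/")

checks = orders.agg(
    F.count("*").alias("row_count"),
    F.sum(F.col("order_id").isNull().cast("int")).alias("null_order_ids"),
    F.sum((F.col("net_amount") < 0).cast("int")).alias("negative_amounts"),
).first()

# Fail the pipeline run loudly instead of letting bad data flow downstream
if checks["row_count"] == 0 or checks["null_order_ids"] > 0 or checks["negative_amounts"] > 0:
    raise ValueError(f"Quality gate failed: {checks.asDict()}")
```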
Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years data engineering experience with significant team leadership or management exposure.
- Strong expertise in designing and building cloud data solutions on Microsoft Azure (Data Lake, Synapse, Data Factory, Databricks).
- Advanced Python skills for data transformation, automation, and pipeline development (including PySpark).
- Solid SQL skills; experience with big data tools like Spark, Hive, or Scala is a plus.
- Knowledge of CI/CD pipelines, DevOps practices, and Infrastructure-as-Code (Terraform, GitHub Actions).
- Experience with data security, governance, and compliance frameworks in cloud environments.
- Excellent communication, leadership, and project management capabilities.
Desired Skills
- Expertise in Azure Data Factory, Databricks, and Synapse Analytics.
- Proficiency with Python, PySpark, SQL for ETL and data workflows.
- Familiarity with big data frameworks (Spark, HDFS).
- Hands-on experience with DevOps tools and container technologies (GitHub, Docker, Kubernetes).
- Strong cross-functional collaboration and team mentoring skills.
This role offers an exciting opportunity to lead and scale mission-critical data infrastructure at Nuaav, a firm known for its boutique approach combining technical mastery with personalized client impact and agility in delivery. Candidates will thrive in a fast-paced, innovative setting with a clear path for growth and influence.
Why Join Nuaav?
- Opportunity to work in a strong AI-driven consulting firm with direct client engagement and high-impact projects.
- Be part of a dynamic environment focused on innovation, agility, and quality over volume.
- Exposure to cutting-edge technologies across data engineering, AI, and product platforms.
- Work on global-scale digital transformation projects with close collaboration alongside senior consultants and corporate leaders.
- A culture that values personalized growth, client-centric excellence, and thought leadership.
How to apply for this opportunity:
Easy 3-Step Process:
1. Click on Apply and register or log in on our portal
2. Upload your updated resume and complete the screening form
3. Increase your chances of getting shortlisted and meet the client for the interview
About Our Client:
Nuaav is a boutique technology consulting firm focused on delivering innovative, scalable, and secure data engineering and AI solutions.
About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career.
(Note: There are many more opportunities apart from this on the portal.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Data Engineering Role
Posted 1 day ago
Job Description
Minimum Requirements:
- At least 3 years of professional experience in Data Engineering
- Demonstrated end-to-end ownership of ETL pipelines
- Deep, hands-on experience with AWS services: EC2, Athena, Lambda, and Step Functions (non-negotiable; see the sketch after this list)
- Strong proficiency in MySQL (non-negotiable)
- Working knowledge of Docker: setup, deployment, and troubleshooting
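For illustration of the Lambda + Athena + Step Functions pattern named above, a minimal sketch follows. The bucket, database, table, and column names are hypothetical, and polling/error handling is omitted.

```python
import boto3

# Hypothetical placeholders; real values would come from environment
# variables or the Step Functions execution input.
ATHENA_OUTPUT = "s3://example-bucket/athena-results/"
DATABASE = "analytics"

def lambda_handler(event, context):
    """Start an Athena query; a Step Functions state can poll its status afterwards."""
    athena = boto3.client("athena")
    response = athena.start_query_execution(
        QueryString="SELECT order_id, amount FROM orders WHERE order_date = current_date",
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": ATHENA_OUTPUT},
    )
    # The execution id lets a downstream state check completion via get_query_execution
    return {"query_execution_id": response["QueryExecutionId"]}
```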
Highly Preferred Skills:
- Experience with orchestration tools such as Airflow or similar
- Hands-on with PySpark
- Familiarity with the Python data ecosystem: SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy (see the sketch after this list)
- Exposure to DLT (Data Load Tool)
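A small illustrative example of the DuckDB + Pandas part of that ecosystem; the DataFrame and column names are made up for the sketch.

```python
import duckdb
import pandas as pd

# Toy stand-in for a pipeline's intermediate output; column names are hypothetical
events = pd.DataFrame({"user_id": [1, 1, 2], "amount": [10.0, 5.5, 3.0]})

con = duckdb.connect()           # in-memory DuckDB database
con.register("events", events)   # expose the DataFrame as a SQL-queryable view
totals = con.execute(
    "SELECT user_id, SUM(amount) AS total_amount FROM events GROUP BY user_id ORDER BY user_id"
).fetchdf()

print(totals)
```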
Ideal Candidate Profile:
The role demands a builder’s mindset over a maintainer’s. Independent contributors with clear, efficient communication thrive here. Those who excel tend to embrace fast-paced startup environments, take true ownership, and are motivated by impact—not just lines of code. Candidates are expected to include the phrase Red Panda in their application to confirm they’ve read this section in full.
Key Responsibilities:
- Architect, build, and optimize scalable data pipelines and workflows
- Manage AWS resources end-to-end: from configuration to optimization and debugging
- Work closely with product and engineering to enable high-velocity business impact
- Automate and scale data processes—manual workflows are not part of the culture
- Build foundational data systems that drive critical business decisions
Compensation range: ₹8.4–12 LPA (fixed base), excluding equity, performance bonus, and revenue share components.