520 Data Engineer jobs in Delhi

Big Data Engineer

Delhi, Delhi Alef Education

Posted today

Job Description

Who we are

Alef Education began with a bold idea: that every learner deserves a personalised and meaningful education experience. What started in 2016 as a small pilot programme in Abu Dhabi has evolved into one of the world’s most dynamic EdTech companies—reshaping how millions of students engage with learning across the globe.

Today, Alef is proudly headquartered in the UAE, working hand-in-hand with ministries of education, schools, and teachers to bring smart, data-powered platforms into classrooms in over 14,000 schools.

Supporting over 1.1 million students and 50,000 teachers across the UAE, Indonesia, and Morocco, our AI-driven platforms generate 16+ million data points every day, helping drive smarter learning decisions. Whether it’s improving national exam results, boosting classroom engagement, or supporting educators with world-class tools, Alef is committed to impact at scale.

In 2024, Alef made history as the first EdTech company to list on the Abu Dhabi Securities Exchange (ADX), cementing our role as a regional innovator with global reach.

About The Role

As an ALEF Big Data Engineer, you will have a strong understanding of big data technologies and an exceptional ability to code. You will provide technical leadership, working closely with the wider team to ensure high-quality code is delivered in line with the project goals and delivery cycles. You will work closely with other teams to deliver rapid prototypes as well as production code, for which you will ensure high accessibility standards are upheld. We expect familiarity with modern frameworks and languages, as well as working practices such as Clean Code, TDD, BDD, continuous integration, continuous delivery, and DevOps.

Key Responsibilities

Defining and developing services and solutions

- Define, design, and develop services and solutions for large-scale data ingestion, storage, and management, working with sources such as RDBMS, NoSQL databases, log files, and events.
- Define, design, and run robust data pipelines/batch jobs in a production environment (an illustrative sketch follows this list).
- Architect highly scalable, highly concurrent, and low-latency systems.
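
For illustration only, the sketch below shows the kind of batch pipeline these responsibilities describe: a PySpark job that ingests records from an RDBMS over JDBC and event logs from object storage, joins them, and writes partitioned Parquet. The connection details, paths, table names, and columns are hypothetical placeholders, not part of the role description.

```python
# Minimal, illustrative PySpark batch job: ingest from an RDBMS table and
# JSON event logs, join, aggregate, and write partitioned Parquet.
# All names, paths, and credentials below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-batch").getOrCreate()

# RDBMS ingestion via JDBC (connection details are placeholders)
users = (spark.read.format("jdbc")
         .option("url", "jdbc:postgresql://db-host:5432/app")
         .option("dbtable", "public.users")
         .option("user", "reader").option("password", "****")
         .load())

# Event/log ingestion from object storage (path is a placeholder)
events = spark.read.json("abfss://raw@datalake.dfs.core.windows.net/events/2024-01-01/")

daily = (events.join(users, "user_id", "left")
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "school_id")
         .agg(F.count("*").alias("event_count")))

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "abfss://curated@datalake.dfs.core.windows.net/daily_events/")
```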

Maintain, support, and enhance current systems.

- Contribute to paying down technical debt and use development approaches that minimize the growth of new technical debt.
- Contribute feedback to improve the quality, readability, and testability of the code base within your team.
- Mentor and train other developers in a non-line management capacity.
- Work with build tools (one of SBT, Gradle, or Maven).
- Ensure all software built is robust and scalable.

Collaborating with internal and external stakeholders

- Participate in sprint planning, working with developers and project teams to ensure projects are deployable and monitorable from the outside.
- Work with third-party and other internal providers to support a variety of integrations.
- As part of the team, you may be expected to participate in some of the 2nd line in-house support and Out-of-Hours support rotas.
- Proactively advise on best practices.

To Be The Right Fit, You'll Need

- Degree in Computer Science, Software Engineering, or a related field preferred
- Minimum of 5 years' experience in a Big Data role
- Follow Clean Code/SOLID principles
- Adhere to and use TDD/BDD.
- Outstanding ability to develop efficient, readable, highly optimized/maintainable, and clear code.
- Highly proficient in functional Java, Scala, or Python
- Knowledge of Azure Big Data/Analytics services – ADLS (Azure Data Lake Storage), HDInsight, Azure Data Factory, Azure Synapse Analytics, Azure Fabric, Azure Event Hubs, Azure Stream Analytics, Azure Databricks
- Experience of Storing Data in systems such as Hadoop HDFS, ADLS, Event Hubs
- Experience of designing, setting up and running big data tech stacks such as Hadoop, Azure Databricks, Spark and distributed datastores such as Cassandra, DocumentDBs, MongoDB, Event Hubs
- In-depth knowledge of the Hadoop technology ecosystem – HDFS, Spark, Hive, HBase, Event Hubs, Flume, Sqoop, Oozie, Avro, Parquet
- Experience debugging a complex multi-server service.
- In-depth knowledge of and experience with IaaS/PaaS solutions (e.g., AWS infrastructure hosting and managed services)
- Familiarity with network protocols - TCP/IP, HTTP, SSL, etc.
- Knowledge of relational and non-relational database systems
- Understanding of continuous integration and delivery.
- Mocking (any of Mockito, ScalaTest, Spock, Jasmine, Mocha).
- IDE: IntelliJ or Eclipse.
- Build tools (one of SBT, Gradle, Maven).
- Ensure all software built is robust and scalable.
- An ability to communicate technical concepts to a non-technical audience.
- Working knowledge of Unix-like operating systems such as Linux and/or Mac OS X.
- Knowledge of the git version control system.
- Ability to quickly research and learn new programming tools and techniques.

Data Engineer

Delhi, Delhi Tata Consultancy Services

Posted 1 day ago

Job Description

Required Information

Role: Microsoft Azure Data Engineer

Required Technical Skill Set: SQL, ADF, ADB, ETL/Data background

Desired Experience Range: 4

Location of Requirement: India

Desired Competencies (Technical/Behavioral Competency)

Must-Have (ideally not more than 3-5): Strong hands-on experience with Azure Data Factory (ADF), Azure Databricks, ADLS, SQL, and ETL/ELT pipelines – building, orchestrating, and optimizing data pipelines; DevOps and version control (Git).

Good-to-Have: Water industry domain knowledge

Responsibility of / Expectations from the Role

1. Deliver clean, reliable, and scalable data pipelines
2. Ensure data availability and quality
3. Excellent communication and documentation abilities
4. Strong analytical skills

Data Engineer

Delhi, Delhi Everise

Posted today

Job Description

Company Overview

Join us on our mission to elevate customer experiences for people around the world. As a member of the Everise family, you will be part of a global experience company that believes in being people-first, celebrating diversity and incubating innovation. Our dedication to our purpose and people is being recognized by our employees and the industry. Our 4.6/5 rating on Glassdoor and our shiny, growing wall of Best Place to Work awards are a testament to our investment in our culture. Through the power of diversity, we celebrate all cultures for their uniqueness and strengths. With 13 centers around the world and a robust work-at-home program, we believe great things happen when we work with people who think differently from us. Find a job you’ll love today!

We are looking for a skilled and experienced Data Engineer to design, build, and optimize scalable data pipelines and architectures that power data-driven decision-making across the organization. The ideal candidate has a proven track record of writing complex stored procedures and optimizing query performance on large datasets.

Requirements:

- Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources.
- Collaborate with data scientists, BI analysts and business stakeholders to understand data requirements.
- Optimize data workflows and processing for performance; ensure data quality, reliability, and governance.
- Hands-on experience with modern data platforms such as Snowflake, Redshift, BigQuery, or Databricks.
- Strong knowledge of T-SQL and SQL Server Management Studio (SSMS)
- Experience in writing complex stored procedures, Views and query performance tuning on large datasets
- Strong understanding of database management systems (SQL, NoSQL) and data warehousing concepts.
- Good knowledge of and hands-on experience with tuning databases at the memory level, with the ability to tweak SQL queries.
- In-depth knowledge of data modeling principles and methodologies (e.g., relational, dimensional, NoSQL).
- Excellent analytical and problem-solving skills with a meticulous attention to detail.
- Hands-on experience with data transformation techniques, including data mapping, cleansing, and validation.
- Proven ability to work independently and manage multiple priorities in a fast-paced environment.
- Work closely with cross-functional teams to gather and analyse requirements, develop database solutions, and support application development efforts
- Knowledge of cloud database solutions (e.g., Azure SQL Database, AWS RDS).

If you’ve got the skills to succeed and the motivation to make it happen, we look forward to hearing from you.

Data Engineer

Delhi, Delhi Digivance Solutions

Posted today

Job Description

Position: Data Engineer

Experience: 5–10 Years

Location: Chennai, Bengaluru, Pune, Hyderabad, Mumbai, Delhi NCR

(candidates will be required to work at any of these locations in hybrid mode)

Key Responsibilities

- Collaborate with business and technology stakeholders to understand current and future data requirements.
- Design, build, and maintain reliable, efficient, and scalable data infrastructure for data collection, storage, transformation, and analysis.
- Plan, design, and develop scalable data solutions, including data pipelines, data models, and applications for efficient and reliable workflows.
- Design, implement, and maintain data platforms such as data warehouses, data lakes, and lakehouses for structured and unstructured data.
- Develop and optimize analytical tools, algorithms, and programs to support data engineering activities, including scripting and task automation.
- Monitor and ensure optimum system performance while identifying opportunities for continuous improvement.

Required Skills

- Google Cloud Platform (GCP): BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL.
- Airflow, PySpark, Python, and API development (see the sketch after this list).
- Terraform and Tekton for infrastructure as code and CI/CD automation.
- Strong knowledge of PostgreSQL and SQL optimization.
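
As a rough illustration of the orchestration skills listed above, here is a minimal Airflow DAG that runs a daily BigQuery transformation. It assumes Airflow 2.4+ with the Google provider installed; the DAG id, dataset, tables, and schedule are hypothetical placeholders rather than anything prescribed by this role.

```python
# Illustrative Airflow DAG: one daily BigQuery transformation task.
# All project, dataset, and table names below are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_to_mart",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    # Rebuild a small reporting table from raw orders inside BigQuery
    build_mart = BigQueryInsertJobOperator(
        task_id="build_orders_mart",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.orders_daily AS
                    SELECT DATE(order_ts) AS order_date,
                           COUNT(*)       AS orders,
                           SUM(amount)    AS revenue
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```

The same pattern extends to Dataflow or Dataproc steps by swapping in the corresponding provider operators and chaining the tasks.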

Data Engineer

Delhi, Delhi RevX

Posted today

Job Description

Data Engineer

About RevX

RevX helps app businesses acquire and re-engage users via programmatic advertising to retain, monetize, and accelerate revenue. We're all about taking your app business to a new growth level. We rely on data science, innovative technology, AI, and a skilled team to create and deliver seamless ad experiences that delight your app users. That’s why RevX is the ideal partner for app marketers that demand trustworthy insights, a hands-on team, and a commitment to growth. We help you build sound mobile strategies, combining programmatic UA, app re-engagement, and performance branding to drive real and verifiable results so you can scale your business: with real users, high retention, and incremental revenue.

About the Role

We are seeking a forward-thinking Data Engineer who can bridge the gap between traditional data pipelines and modern Generative AI (GenAI)-enabled analytics tools. You'll design intelligent internal analytics systems using SQL, automation platforms like n8n, BI tools like Looker, and GenAI interfaces such as ChatGPT, Gemini, or LangChain.

This is a unique opportunity to innovate at the intersection of data engineering, AI, and product analytics.

Key Responsibilities

- Design, build, and maintain analytics workflows/tools leveraging GenAI platforms (e.g., ChatGPT, Gemini) and automation tools (e.g., n8n, Looker).
- Collaborate with product, marketing, and engineering teams to identify and deliver data-driven insights.
- Use SQL to query data from data warehouses (BigQuery, Redshift, Snowflake, etc.) and transform it for analysis or reporting.
- Build automated reporting and insight generation systems using visual dashboards and GenAI-based interfaces.
- Evaluate GenAI tools and APIs for applicability in data analytics workflows.
- Explore use cases where GenAI can assist in natural language querying, automated summarization, and explanatory analytics (see the sketch after this list).
- Work closely with business teams to enable self-service analytics via intuitive GenAI-powered interfaces.
- Design and maintain robust data pipelines to ensure timely and accurate ingestion, transformation, and availability of data across systems.
- Implement best practices in data modeling, testing, and monitoring to ensure data quality and reliability in analytics workflows.
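
The sketch below is one possible shape for the natural-language querying workflow mentioned above: an LLM translates a business question into SQL, the query runs against BigQuery, and the LLM summarizes the result. It assumes the openai and google-cloud-bigquery Python packages with credentials configured; the model name, schema hint, and table are illustrative only, and generated SQL would need validation before any production use.

```python
# Rough sketch of a natural-language-to-SQL helper. Model, schema hint,
# and table names are illustrative placeholders.
from openai import OpenAI
from google.cloud import bigquery

llm = OpenAI()          # reads OPENAI_API_KEY from the environment
bq = bigquery.Client()  # reads GCP credentials from the environment

SCHEMA_HINT = "Table analytics.campaign_stats(campaign_id, day, installs, spend)"

def answer(question: str) -> str:
    # Ask the model to translate the business question into SQL
    sql = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Return only a BigQuery SQL query. {SCHEMA_HINT}"},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Run the generated SQL (validate/limit it before doing this in production)
    rows = [dict(r) for r in bq.query(sql).result()]

    # Ask the model to summarize the result set in plain language
    summary = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Question: {question}\nRows: {rows}\nSummarize briefly."}],
    ).choices[0].message.content
    return summary
```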

Requirements

- 3+ years of experience in data analysis or a related field.
- Strong proficiency in SQL with the ability to work across large datasets.
- Hands-on experience building data tools/workflows using any of the following: n8n, Looker/LookML, ChatGPT API, Gemini, LangChain, or similar.
- Familiarity with GenAI concepts, LLMs, prompt engineering, and their practical application in data querying and summarization.
- Excellent problem-solving skills and a mindset to automate and optimize wherever possible.
- Strong communication skills with the ability to translate complex data into actionable insights for non-technical stakeholders.

Nice to Have

- Prior experience in AdTech (ad operations, performance marketing, attribution, audience insights, etc.).
- Experience with Python, Jupyter Notebooks, or scripting for data manipulation.
- Familiarity with cloud platforms like Google Cloud Platform (GCP) or AWS.
- Knowledge of data visualization tools like Tableau, Power BI, or Looker etc.

Why Join Us?

- Work on the cutting edge of GenAI and data analytics innovation.
- Contribute to building scalable analytics tools that empower entire teams.
- Be part of a fast-moving, experimentation-driven culture where your ideas matter.

For more information visit

Data Engineer

Delhi, Delhi Insight Global

Posted today

Job Description

Position: GCP Data Engineer

Location: 100% Remote in India

Duration: 12-month contract + extensions + conversions

Package: 10-26 LPA

Interview Process: 2 Rounds

REQUIRED SKILLS AND EXPERIENCE

- 6+ Years of experience as a Data Engineer
- Experience with GCP data services, i.e., BigQuery, Cloud Storage, Bigtable, Airflow, Dataproc, Dataflow
- Strong SQL experience (NoSQL, SQL, Postgres)
- Exposure to Java programming
- Experience leading and/or mentoring other team members

JOB DESCRIPTION

An Insight Global client in the Financial Services industry is looking for GCP Data Engineers to join their team. This is an existing team and project, with net-new additions to help with productivity and time to deliver. The client is looking for GCP Data Engineers who have experience building data pipelines, as well as experience managing, transforming, and organizing big data. A strong background in SQL is required, as it is the underlying structure. This role will also require hands-on experience with GCP product offerings and Google's big data technologies. Responsibilities will center around the current challenges in the client's business:

- Scale of data: every customer and transaction is eligible for Anti-Money Laundering investigations
- Breadth of product and jurisdiction: money laundering is a global issue, with local regulation
- Technology fragmentation: existing processes are split over many platforms

Data Engineer

Delhi, Delhi Jaipur Rugs

Posted today

Job Description

Organization Description: Jaipur Rugs is a social enterprise that connects rural craftsmanship with global markets through its luxurious handmade carpets. It is a family-run business that offers an exclusive range of hand-knotted and hand-woven rugs made using 2,500-year-old traditional art forms. The founder, Mr. Nand Kishore Chaudhary, created a unique business model which provides livelihood to artisans at their doorstep. This changed the standard practice of involving middlemen to work with artisanal communities. The company currently has a network of over 40,000 artisans spread across 600 rural villages in five states of India. It has an end-to-end business model, right from sourcing of wool to exporting a finished handmade rug.

The modern and eclectic collection of rugs, made using the finest wool and silk, has won numerous global awards and is currently exported to more than 45 countries through the US sales arm, Jaipur Living, Inc., located in Atlanta, Georgia.

Job Description: The specific responsibilities of the position holder will be (though not restricted to) the following:

- The Data Engineer will be responsible for building and maintaining end-to-end data pipelines for analytics solutions, leveraging Microsoft Fabric to integrate business applications with the Jaipur Living BI platform
- Build and maintain end-to-end data pipelines across business applications and the BI platform
- Develop and implement solutions on the MS Fabric platform to support a modern enterprise data platform by implementing a Kimball-style data lakehouse (fact and dimension tables)
- Design, develop, and implement analytics solutions on MS Fabric and Power BI
- Develop ETL/ELT processes for large-scale data ingestion, ensuring data quality and pipeline performance
- Transform and model data to meet business requirements, loading it into the Fabric Data Lakehouse (bronze, silver, gold layers; a minimal sketch follows this list)
- Implement monitoring and error handling processes for reliable data integration
- Optimize pipelines for cost and performance through query tuning, caching, and resource management
- Automate repeatable data preparation tasks to reduce manual processes
- Provide troubleshooting, analysis, and production support for existing solutions, including enhancements
- Integrate GitHub for artifact management and versioning
- Continuous Improvement – Identifies opportunities, generates ideas, and implements solutions to improve processes and conditions
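
As a minimal sketch of the bronze/silver/gold flow referenced above, the following PySpark snippet shows how it might look in a Fabric or Databricks notebook, where a spark session is predefined. The lakehouse, table, and column names are hypothetical placeholders, not Jaipur Living's actual model.

```python
# Illustrative medallion transformation: bronze -> silver -> gold.
# Table and column names are placeholders; "spark" is assumed to be the
# predefined notebook session.
from pyspark.sql import functions as F

# Bronze: raw orders landed as-is from the source system
bronze = spark.read.table("lakehouse.bronze_orders")

# Silver: cleaned, de-duplicated, typed records
silver = (bronze
          .dropDuplicates(["order_id"])
          .withColumn("order_date", F.to_date("order_ts"))
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .filter(F.col("order_id").isNotNull()))
silver.write.mode("overwrite").format("delta").saveAsTable("lakehouse.silver_orders")

# Gold: a simple daily fact aggregate ready for Power BI
gold = (silver.groupBy("order_date", "store_id")
        .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders")))
gold.write.mode("overwrite").format("delta").saveAsTable("lakehouse.gold_daily_sales")
```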

Skills

- Minimum of 5 years of hands-on experience building data pipelines, data models, and supporting enterprise data warehouse solutions
- Hands-on experience with Microsoft Fabric for data pipelines, Gen2 Data Flows, Activity optimization, Data Modeling, and analytics
- Proficiency with Microsoft Fabric data services, including Azure Synapse Analytics and Dataverse
- Strong SQL, data modeling, and ETL/ELT development skills
- Experience working within a scaled agile framework for data engineering product delivery
- 4+ years of experience with ETL and cloud data technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Functions, Azure Data Explorer, and Power BI (or equivalent platforms) and Google Big Query
- 4+ years of experience with big data scripting languages such as Python or SQL
- 4+ years of experience in at least one scripting language for data retrieval and manipulation (e.g., SQL)
- Strong SQL, data modeling, data warehouse, and OLAP concepts
- Experience with Azure Data Lake Storage, Azure Synapse Analytics, Fabric Spark Pools, Fabric Notebooks, Python, DevOps, and CI/CD
- Familiarity with data lake medallion architecture and unified data models for BI platform
- Familiarity with Scaled Agile, DevOps, Scrum, and ITIL concepts
- Retail experience will be a plus
- WMS, ERP, CRM, PIM, Google Analytics experience is a plus
- Microsoft Dynamics ERP experience is a plus
- Experience with Fabric AI services will be a plus

Data Engineer

Delhi, Delhi KPG99 INC

Posted today

Job Description

Role: Databricks Engineer

Location: Remote

Duration: 12+ months with extensions

REQUIRED SKILLS AND EXPERIENCE

- 3–5 years of experience in data engineering roles
- Strong hands-on experience with Databricks for data processing and pipeline development.
- Proficiency in SQL for data querying, transformation, and troubleshooting.
- Solid programming skills in Python for data manipulation and automation.
- Proven experience working with pharmaceutical or life sciences data, including familiarity with industry-specific data structures and compliance considerations

JOB DESCRIPTION

We are seeking 3 skilled and motivated Databricks Data Engineers with 3–5 years of experience to support data engineering initiatives in the pharmaceutical domain. The ideal candidate will have hands-on expertise in Databricks, SQL, and Python, and a strong understanding of pharma/life sciences data. This role involves building and optimizing data pipelines, transforming complex datasets, and enabling scalable data solutions that support analytics and business intelligence efforts. The candidate should be comfortable working in a fast-paced environment and collaborating with cross-functional teams to ensure data quality, accessibility, and performance.

Data Engineer

Delhi, Delhi NuVision Auto Glass

Posted today

Job Description

NuVision Auto Glass is a leading auto glass service provider in the USA, serving customers across Arizona, Florida, South Carolina, and Colorado. It is known for delivering reliable mobile windshield replacement and expert auto glass services, ensuring convenience and safety at every step.

With seamless insurance claims, easy financing options for cash payments, and advanced ADAS calibration through a partnership with ADAS360, NuVision Auto Glass guarantees exceptional service. A strong reputation and growing presence in multiple regions reflect its commitment to quality and customer satisfaction.

About the Role

We are looking for a Data Engineer to lead the design and implementation of NuVision’s new data platform from the ground up. You’ll own the creation of a scalable data lake and analytics backbone that powers critical business decisions.

This is an exciting opportunity to work with cutting-edge tools on Google Cloud Platform (GCP) — including Cloud Storage, BigQuery, Cloud Functions, and Looker Studio — to centralize, transform, and model operational data for company-wide reporting and insights.

What You’ll Do

- Design, build, and maintain the end-to-end data pipeline on GCP.
- Ingest and organize raw data in Google Cloud Storage (GCS) for a reliable data lake.
- Set up and manage large-scale data lakes, ensuring scalability, reliability, and optimized data flow.
- Automate ongoing data ingestion using Cloud Functions + Cloud Scheduler (illustrated in the sketch after this list).
- Develop robust ETL/ELT processes using Python and advanced SQL.
- Transform and model raw operational data into analytics-ready BigQuery tables.
- Partner with business teams to turn reporting needs into efficient data models.
- Serve as the backbone for Looker Studio dashboards, ensuring performance, accuracy, and scalability.
- Establish and enforce data quality standards, documentation, and governance practices.
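
To illustrate the ingestion-automation step above, here is a small Cloud Function sketch that loads a newly landed CSV from Cloud Storage into BigQuery using the google-cloud-bigquery client. It is written for a GCS object-finalize trigger; a Cloud Scheduler variant would point at a fixed path instead. The bucket, dataset, and table names are placeholders, not NuVision's actual platform.

```python
# Illustrative Cloud Function (GCS-triggered) that appends an uploaded CSV
# into a BigQuery table. Dataset and table names are placeholders.
from google.cloud import bigquery

def load_daily_jobs(event, context):
    """Triggered per uploaded object; loads the CSV into BigQuery."""
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,                       # infer the schema from the file
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        uri, "analytics_raw.service_jobs", job_config=job_config
    )
    load_job.result()  # wait for completion and surface any errors
```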

What We're Looking For

- Minimum 10 years of experience in Data Engineering or a closely related field.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, Mathematics, Statistics, or a related quantitative field.
- Extensive experience setting up and running Data Lakes on cloud environments (preferably GCP).
- Strong knowledge of Google Cloud Platform (GCS, BigQuery, Cloud Functions, Cloud Scheduler).
- Hands-on expertise with Advanced SQL, Python scripting, and building Looker dashboards.
- Proven experience designing and optimizing ETL/ELT pipelines and data modeling.
- Deep understanding of BI tools (preferably Looker Studio) for reporting and dashboards.
- Passion for building scalable, automated, and well-documented data systems.
- Strong problem-solving skills and ability to translate business requirements into data solutions.