What Jobs are available for Data Professionals in Delhi?

Showing 125 Data Professionals jobs in Delhi

Data Engineer

Delhi, Delhi – Tata Consultancy Services

Posted 9 days ago

Job Description

Required Information

Role

Microsoft Azure Data Engineer

Required Technical Skill Set

SQL, ADF, ADB, ETL/Data background


Desired Experience Range - 4 years

Location of Requirement

India


Desired Competencies (Technical/Behavioral Competency)

Must-Have


Strong hands-on experience with Azure Data Factory (ADF), Azure Databricks, ADLS and SQL; building, orchestrating, and optimizing ETL/ELT pipelines; DevOps, including version control with Git.
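
To make the must-have list concrete, here is a minimal, hypothetical sketch of the kind of Databricks (PySpark) job an ADF pipeline would orchestrate for a role like this; the storage account, container, and column names are invented:

```python
# Read raw CSVs from ADLS, deduplicate and clean them, write a Delta table.
# All paths and columns below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .filter(F.col("amount").cast("double") > 0))

(clean.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```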

Good-to-Have

Water industry domain knowledge


Responsibility of / Expectations from the Role

1. Deliver clean, reliable and scalable data pipelines
2. Ensure data availability and quality
3. Excellent communication and documentation abilities
4. Strong analytical skills


Data Scientist/Engineer

New Delhi, Delhi – Cayuse Holdings

Posted 5 days ago

Job Description

**Overview**
**Job Title:** Data Scientist/Engineer
**Location:** Remote
**Type:** Corp to Corp
**Start Date:** ASAP
**Pay Rate:** $28-$30 per hour
**Contract Length:** 12 months - potential conversion to FTE
We are seeking a highly skilled and motivated Data Scientist/Engineer to join our dynamic and innovative team. The ideal candidate will have hands-on experience designing, building, and maintaining scalable data processing pipelines, implementing machine learning solutions, and ensuring data quality across the organization. This role requires a strong technical foundation in Azure cloud platforms, data engineering, and applied data science to support critical business decisions and technological advancements.
**Responsibilities**
**Data Engineering**
+ Build and Maintain Data Pipelines: Develop and manage scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, or Azure Databricks to process large volumes of data.
+ Data Quality and Transformation: Ensure the transformation, cleansing, and ingestion of data from a wide range of structured and unstructured sources with appropriate error handling.
+ Optimize Data Storage: Utilize and optimize data storage solutions, such as Azure Data Lake and Blob Storage, to ensure cost-effective and efficient data storage practices.
**Machine Learning Support**
+ Collaboration with ML Engineers and Architects: Work with Machine Learning Engineers and Solution Architects to seamlessly deploy machine learning models into production environments.
+ Automated Retraining Pipelines: Build automated systems to monitor model performance, detect model drift, and trigger retraining processes as needed (a drift-check sketch follows this section).
+ Experiment Reproducibility: Ensure reproducibility of ML experiments by maintaining proper version control for models, data, and code.
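For illustration only (not from the posting), a drift check like the one described above is often implemented with a Population Stability Index over model scores; the threshold, bin count, and synthetic data here are assumptions:

```python
# Population Stability Index (PSI): compare a training-time score
# distribution against a production window; large values suggest drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    cuts = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    e_pct = np.histogram(np.clip(expected, cuts[0], cuts[-1]), cuts)[0] / len(expected)
    a_pct = np.histogram(np.clip(actual, cuts[0], cuts[-1]), cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # guard against empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # stand-in for training scores
prod_scores = rng.normal(0.3, 1.0, 5_000)    # stand-in for a drifted window

if psi(train_scores, prod_scores) > 0.2:     # common rule-of-thumb threshold
    print("Drift detected: trigger the retraining pipeline")
```
A PSI above roughly 0.2 is a widely used rule of thumb for "shifted enough to investigate or retrain".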
**Data Analysis and Preprocessing**
+ Data Ingestion and Exploration: Ingest, explore, and preprocess both structured and unstructured data with tools such as:
+ Azure Data Lake Storage
+ Azure Synapse Analytics
+ Azure Data Factory
+ Exploratory Data Analysis (EDA): Perform exploratory data analysis using notebooks like Azure Machine Learning Notebooks or Azure Databricks to derive actionable insights.
+ Data Quality Assessments: Identify data anomalies, evaluate data quality, and recommend appropriate data cleansing or remediation strategies.
**General Responsibilities**
+ Pipeline Monitoring and Optimization: Continuously monitor the performance of data pipelines and workloads, identifying opportunities for optimization and improvement.
+ Collaboration and Communication: Communicate findings and technical requirements effectively with cross-functional teams, including data scientists, software engineers, and business stakeholders.
+ Documentation: Document all data workflows, experiments, and model implementations to facilitate knowledge sharing and maintain continuity of operations.
**Qualifications**
+ Proven experience in building and managing data pipelines using Azure Data Factory, Azure Synapse Analytics, or Databricks.
+ Strong knowledge of Azure storage solutions, including Azure Data Lake and Blob Storage.
+ Familiarity with data transformation, ingestion techniques, and data quality methodologies.
+ Proficiency in programming languages such as Python or Scala for data processing and ML integration.
+ Experience in exploratory data analysis and working with notebooks like Jupyter, Azure Machine Learning Notebooks, or Azure Databricks.
+ Solid understanding of machine learning lifecycle management and model deployment in production environments.
+ Strong problem-solving skills with experience detecting and addressing data anomalies.
**Other Duties:** _Please note this job description is not designed to cover or contain a comprehensive list of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice._
_Cayuse is an Equal Opportunity Employer. All employment decisions are based on merit, qualifications, skills, and abilities. All qualified applicants will receive consideration for employment in accordance with any applicable federal, state, or local law._
**Pay Range**
USD 28.00 - USD 30.00 /Hr.
Submit a Referral
**Didn't find the right opportunity?** Join our Talent Community or Language Services Talent Community and be among the first to discover exciting new possibilities!
**Location** _IN-New Delhi_
**ID** _ _
**Category** _Information Technology_
**Position Type** _Independent Contractor_
**Remote** _Yes_
**Clearance Required** _None_

Principal Data Engineer

New Delhi, Delhi – Autodesk

Posted 5 days ago

Job Description

**Job Requisition ID #**
25WD91911
**Position Overview**
Autodesk is seeking a Principal Data Engineer to lead the design and development of data architecture and pipelines for our data team. In this role, you will shape the future of our data ecosystem, driving innovation across data pipelines, architecture, and cloud platforms. You'll partner closely with analysts, data scientists, AI/ML engineers and product teams to deliver scalable solutions that power insights and decision-making across the company.
This is an exciting opportunity for a principal data engineer who thrives on solving complex problems, driving best practices, and mentoring high-performing teams.
**Responsibilities**
+ Lead and mentor a team of data engineers responsible for building and maintaining scalable data pipelines and infrastructure on AWS, Snowflake and Azure
+ Architect and implement end-to-end data pipeline solutions, ensuring high performance, resilience, and cost efficiency across both batch and real-time data flows
+ Define and drive the long-term vision for data engineering in alignment with Autodesk's data platform strategy and analytics roadmap
+ Collaborate with analysts, data scientists, FinOps engineers, and product/engineering teams to translate business needs into reliable, scalable data solutions
+ Establish and enforce standards for data quality, governance, observability, and operational excellence, defining "what good looks like" across the data lifecycle
+ Design and optimize data models, ELT/ETL processes, and data architectures to support analytics, BI, and machine learning workloads
+ Champion best practices in CI/CD, testing frameworks, and the deployment of data pipelines
+ Leverage modern data integration tools such as Fivetran, Nexla and Airflow to build batch ingestion and transformation workflows
+ Apply AI-driven approaches for anomaly detection, pipeline optimization, and automation (an illustrative sketch follows this list)
+ Stay current with emerging trends in data engineering and proactively evolve the team's capabilities and toolset
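As a purely illustrative reading of the AI-driven anomaly-detection point in the list above, one lightweight baseline is a robust z-score over recent pipeline run durations; the numbers and cutoff are invented:

```python
# Flag pipeline runs whose duration deviates strongly from recent history,
# using median/MAD so a single bad run cannot mask itself.
import numpy as np

def robust_zscores(durations_s: np.ndarray) -> np.ndarray:
    median = np.median(durations_s)
    mad = np.median(np.abs(durations_s - median)) or 1e-9  # avoid div-by-zero
    return 0.6745 * (durations_s - median) / mad

runs = np.array([312.0, 305, 298, 330, 301, 295, 309, 1240])  # seconds
flagged = np.abs(robust_zscores(runs)) > 3.5  # common MAD-based cutoff
print(runs[flagged])  # -> [1240.]
```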
**Minimum Qualifications**
+ 10+ years of experience in data engineering, with at least 3 years in a lead role
+ Demonstrated success in delivering large-scale, enterprise-grade data pipeline architectures and leading technical teams
+ Expertise with cloud data platforms; AWS and Azure experience is a strong plus
+ Proficiency in SQL, Python, and modern data modeling practices
+ Hands-on experience with batch and streaming frameworks (e.g., Spark, Kafka, Kinesis, Hadoop)
+ Proven track record of building and maintaining real-time and batch data pipelines at scale
+ Deep understanding of ETL and ELT paradigms, including traditional ETL and modern ELT tools
+ Experience with data integration tools (Fivetran, Nexla, etc.) and orchestration platforms
+ Familiarity with Data Lakehouse architectures, data mesh concepts, and hybrid/multi-cloud strategies
+ Strong communication, leadership, and stakeholder management skills
+ Ability to drive scalable architecture decisions through platform systems design and modern engineering patterns
#LI-NB1
**Learn More**
**About Autodesk**
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
**Salary transparency**
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
**Diversity & Belonging**
We take pride in cultivating a culture of belonging where everyone can thrive.
**Are you an existing contractor or consultant with Autodesk?**
Please search for open jobs and apply internally (not on this external site).

AWS Data Engineer

Delhi, Delhi – Tata Consultancy Services

Posted 1 day ago

Job Description

Role - AWS Data Engineer

Required Technical Skill Set - AWS data engineering with strong experience in Python

Experience Range - 6 to 8 years


Technical/Behavioral Competency


1. Proficient in Python, with experience in deploying Python packages and OOP; experience in ingesting data from different data sources (APIs, web scraping, flat files, databases).

2. Hands-on experience in designing ETL processes, utilizing various architectures, orchestration tools (e.g., Airflow), and data quality testing (a minimal orchestration sketch follows this list).

3. Experience with Snowflake, dbt and data warehousing; experience with AWS cloud services, including Lambda, S3, DynamoDB and Glue.

4. Python for data engineering and strong SQL development skills.

5. Proven track record of software development; ability to maintain CI/CD processes and drive continuous improvements.
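
For orientation only, here is a minimal, hypothetical Airflow DAG of the ingest-transform-load shape item 2 describes; the task bodies, names, and schedule are placeholders, and `schedule=` assumes Airflow 2.4+ (older releases use `schedule_interval=`):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_from_api(**_):
    ...  # pull from an upstream API and stage raw JSON to S3

def transform(**_):
    ...  # clean and conform the staged data

def load_to_snowflake(**_):
    ...  # load curated data into Snowflake (e.g., COPY INTO or dbt)

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_from_api)
    clean = PythonOperator(task_id="transform", python_callable=transform)
    load = PythonOperator(task_id="load", python_callable=load_to_snowflake)

    ingest >> clean >> load
```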


Azure Data Engineer

Delhi, Delhi – Tata Consultancy Services

Posted 8 days ago

Job Description

Role - Azure Data Engineer

Required Technical Skill Set

Azure Data Factory + Azure Databricks.

Desired Experience Range - 8 to 12 years

Location of Requirement

Mumbai, Bangalore, Hyderabad, Chennai, Gurgaon, Pune.


Desired Competencies (Technical/Behavioral Competency)

Must-Have

Azure Data Factory, Azure Databricks

Good-to-Have

Python/Pyspark


Responsibility of / Expectations from the Role

Development:

1. Implementing highly performant, scalable and re-usable data ingestion and transformation pipelines across Azure components and services

2. Strong experience using Logic Apps, Functions and Data Factory

3. Strong experience of development using Microsoft Azure, as well as experience of developing integrations for large, complex pieces of software

4. Strong understanding of the technical side of CI/CD

5. Strong understanding of Agile

Design:

6. Designing and implementing a data warehouse on Azure using Azure HDInsight, Azure Data Factory, ADLS, Databricks, SQL Server, SQL DWH, Analysis Services, Event Hubs, Key Vault and other Azure services

7. Designing, orchestrating and implementing highly performant, scalable and re-usable data ingestion and transformation pipelines across Azure components and services

8. Designing and implementing event-based and streaming data ingestion and processing using Azure PaaS services (a streaming sketch follows this list)

9. Data ingestion, data engineering and/or data curation using native Azure services or tools available in the Azure Marketplace

10. Designing and implementing data governance, data cataloguing and data lineage solutions using tools such as Azure Data Catalog or Informatica Data Catalog

11. Developing physical data models in MongoDB, SQL DWH, etc.

12. Developing APIs

13. Having worked in agile delivery teams with DevOps ways of working to continuously deliver iterative deployments, with experience using Jira, Git repositories, Confluence, or similar

14. Having experience in data migration activities, including migration strategy and approach, source and target system discovery, analysis, mapping, development and reconciliation
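
As a sketch of item 8 only (not taken from the posting), Spark Structured Streaming on Databricks can read an Event Hub through its Kafka-compatible endpoint; the namespace, hub name, paths, and connection string below are placeholders that would normally come from Key Vault:

```python
# Stream events from an Azure Event Hub (Kafka endpoint) into a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events_stream").getOrCreate()

kafka_opts = {
    "kafka.bootstrap.servers": "example-ns.servicebus.windows.net:9093",
    "subscribe": "telemetry",  # the Event Hub name acts as the Kafka topic
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.mechanism": "PLAIN",
    "kafka.sasl.jaas.config": (
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="$ConnectionString" password="<event-hubs-connection-string>";'
    ),
}

events = spark.readStream.format("kafka").options(**kafka_opts).load()

(events.selectExpr("CAST(value AS STRING) AS body", "timestamp")
 .writeStream
 .format("delta")
 .option("checkpointLocation",
         "abfss://curated@examplelake.dfs.core.windows.net/_chk/telemetry")
 .start("abfss://curated@examplelake.dfs.core.windows.net/telemetry/"))
```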


Senior Data Engineer

Delhi, Delhi – Baazi Games

Posted 9 days ago

Job Description

As a Data Engineer at Baazi Games, you will be focused on delivering data-driven insights to various functional teams enabling them to make strategic decisions that add value to the top or bottom line of the business.


What you will do


● Design, build and own all the components of a high-volume data hub.

● Build efficient data models using industry best practices and metadata for ad hoc and pre-built reporting.

● Interface with business customers, gathering requirements and delivering complete data solutions & reporting.

● Work on solutions owning the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc. to drive key business decisions.

● Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.

● Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.

● Own the functional and non-functional scaling of software systems in your area.

● Provide input and recommendations on technical issues to BI Engineers, Business & Data Analysts, and Data Scientists.


What we are looking for

● 4-7 years of experience in data engineering.

● Strong understanding of ETL concepts and experience building them with large-scale, complex datasets using distributed computing technologies.

● Strong data modelling skills with solid knowledge of various industry standards such as dimensional modelling, star schemas etc.

● Extremely proficient in writing performant SQL working with large data volumes.

● Experience designing and operating very large data lakes/data warehouses.

● Experience with scripting for automation (e.g., UNIX Shell scripting, Python).

● Good to have experience working on the AWS stack

● Clear thinker with superb problem-solving skills to prioritize and stay focused on big needle movers.

● Curious, self-motivated & a self-starter with a ‘can-do attitude’. Comfortable working in a fast-paced dynamic environment.


Key technologies

● Must have excellent knowledge of advanced SQL working with large data sets (an illustrative star-schema query follows this list).

● Must have knowledge of Apache Spark.

● Should be proficient with any of the following languages: Java/Scala/Python.

● Must have knowledge of working with Apache Airflow or NiFi.

● Should be comfortable with any of the MPP querying engines like Impala, Presto or Athena.

● Good to have experience with AWS technologies including Redshift, RDS, S3, EMR, Glue, Athena etc.
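
To illustrate the star-schema and SQL expectations above, here is a hypothetical Spark SQL query over an invented fact table and dimensions; none of the table or column names come from the posting, and the tables are assumed to be registered in the metastore:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# Fact table joined to conformed dimensions, aggregated for a dashboard.
summary = spark.sql("""
    SELECT d.calendar_month,
           g.region,
           SUM(f.bet_amount)           AS total_stakes,
           COUNT(DISTINCT f.player_id) AS active_players
    FROM fact_wagers f
    JOIN dim_date d ON f.date_key = d.date_key
    JOIN dim_geo  g ON f.geo_key  = g.geo_key
    WHERE d.calendar_year = 2024
    GROUP BY d.calendar_month, g.region
""")
summary.show()
```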


GCP Data Engineer

Delhi, Delhi – Tata Consultancy Services

Posted 9 days ago

Job Description

Job Title : GCP Data Engineer

Job Location – Chennai / Hyderabad / Bangalore / Pune / Gurgaon / Noida / NCR

Experience: 5 to 10 years of experience in the IT industry planning, deploying, and configuring GCP-based solutions.

Requirement:

  • Mandatory to have knowledge of big data architecture patterns and experience in the delivery of Big Data and Hadoop ecosystems.
  • Strong experience required in GCP; must have delivered multiple large projects with GCP BigQuery and ETL (a minimal BigQuery sketch follows this list).
  • Experience working in GCP-based big data deployments (batch/real-time) leveraging components like BigQuery, Airflow, Google Cloud Storage, Data Fusion, Dataflow, Dataproc, etc.
  • Should have experience in SQL/data warehousing.
  • Expert in programming languages like Java and Scala, plus the Hadoop ecosystem.
  • Expert in at least one distributed data processing framework, e.g. Spark (Core, Streaming, SQL), Storm or Flink.
  • Should have worked on orchestration tools – Oozie, Airflow, Control-M or similar – and Kubernetes.
  • Worked on performance tuning, optimization and data security.
  • Preferred experience and knowledge:
  • Excellent understanding of the data technologies landscape/ecosystem.
  • Good exposure to development with CI/CD pipelines. Knowledge of containerization, orchestration and Kubernetes Engine would be an added advantage.
  • Well versed in the pros and cons of various database technologies: relational, BigQuery, columnar and NoSQL.
  • Exposure to data governance, catalog, lineage and associated tools would be an added advantage.
  • Well versed in SaaS, PaaS and IaaS concepts and able to drive clients to a decision.
  • Good skills in Python and PySpark.
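
As a minimal, illustrative companion to the BigQuery requirement above (project, dataset, and table names are invented), the official google-cloud-bigquery client runs a query like this:

```python
# Run a BigQuery query and iterate the result rows.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(sql).result():  # blocks until the job finishes
    print(row.event_date, row.events)
```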

Keywords:

GCP, BigQuery, Python, PySpark


GCP Data engineer

Delhi, Delhi – LTIMindtree

Posted 9 days ago

Job Description


ETL Data Engineer

Delhi, Delhi – The Techgalore

Posted 26 days ago

Job Description

Remote

Please rate the candidate (from 1 to 5; 1 = lowest, 5 = highest) in these areas:

  1. Big Data
  2. PySpark
  3. AWS
  4. Redshift

Position Summary

We are seeking experienced ETL developers and data engineers to ingest and analyze data from multiple enterprise sources into Adobe Experience Platform.

 Requirements 

  • About 4-6 years of professional technology experience, mostly focused on the following:
  • 4+ years of experience developing data ingestion pipelines using PySpark (batch and streaming).
  • 4+ years of experience with multiple data-engineering-related services on AWS, e.g. Glue, Athena, DynamoDB, Kinesis, Kafka, Lambda, Redshift, etc.
  • 1+ years of experience working with Redshift, especially the following (a loading sketch follows this list):

o   Experience and knowledge of loading data from various sources, e.g. S3 buckets and on-prem data sources, into Redshift.

o   Experience optimizing data ingestion into Redshift.

o   Experience designing, developing and optimizing queries on Redshift using SQL or PySpark SQL.

o   Experience designing tables in Redshift (distribution keys, compression, vacuuming, etc.).

  • Experience developing applications that consume services exposed as REST APIs.
  • Experience and ability to write and analyze complex and performant SQL.
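
To ground the Redshift bullets above, here is a hypothetical load pattern: a table defined with explicit distribution and sort keys, populated via COPY from S3. The cluster endpoint, IAM role, credentials, and schema are all placeholders:

```python
# Create a Redshift table with explicit DISTKEY/SORTKEY, then COPY from S3.
import psycopg2

ddl = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_ts     TIMESTAMP,
    amount      DECIMAL(12, 2)
)
DISTKEY (customer_id)  -- co-locate rows that join on customer_id
SORTKEY (sale_ts);     -- speed up time-range scans
"""

copy = """
COPY sales_fact
FROM 's3://example-bucket/staging/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
FORMAT AS PARQUET;
"""

with psycopg2.connect(host="example.redshift.amazonaws.com", port=5439,
                      dbname="analytics", user="etl", password="...") as conn:
    with conn.cursor() as cur:
        cur.execute(ddl)
        cur.execute(copy)
```

Matching the distribution key to the most common join column and the sort key to the usual time filter is the standard starting point for the table-design bullet.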

Special consideration given for:

  • 2 years of developing and supporting ETL pipelines using enterprise-grade ETL tools like Pentaho, Informatica, or Talend
  • Good knowledge of data modelling (design patterns and best practices)
  • Experience with reporting technologies (e.g., Tableau, Power BI)

What you'll do

  Analyze and understand customers' use cases and data sources; extract, transform and load data from a multitude of customers' enterprise sources and ingest it into Adobe Experience Platform.

  Design and build data ingestion pipelines into the platform using PySpark

  Ensure ingestion is designed and implemented in a performant manner to support the throughput and latency needed.

  Develop and test complex SQL to extract, analyze and report on the data ingested into the Adobe Experience Platform.

  Ensure the SQL is implemented in compliance with best practices so it is performant.

  Migrate platform configurations, including the data ingestion pipelines and SQL, across various sandboxes.

  Debug and resolve any issues reported with data ingestion, SQL, or other functionality of the platform.

  Support Data Architects in implementing the data model in the platform.

  Contribute to the innovation charter and develop intellectual property for the organization.

  Present on advanced features and complex use case implementations at multiple forums.  

  Attend regular scrum events or equivalent and provide updates on the deliverables.

  Work independently across multiple engagements with no or minimal supervision.




Data Scientist

Delhi, Delhi – Fiddlehead Technology

Posted 8 days ago

Job Description

Role: Data Scientist, Delhi (Gurugram), India - Salary Band P2


Fiddlehead Technology is a Canadian leader in advanced analytics and AI-driven solutions, helping global companies unlock value from their data. We specialize in applying machine learning, predictive forecasting, and Generative AI to solve complex business problems and empower smarter decision-making.


Our culture thrives on innovation, collaboration, and continuous learning. We invest in our people by offering structured opportunities for professional development, a healthy work-life balance, and exposure to cutting-edge AI/ML projects across industries. At Fiddlehead, employees are encouraged to explore, create, and grow while contributing to high-impact solutions.


Fiddlehead Technology is a data science company with over 10 years of experience helping consumer-packaged goods (CPG) companies harness the power of machine learning and AI. We transform data into actionable insights, building predictive models that drive efficiency, growth, and competitive advantage. With increasing demand for our solutions, we're expanding our global team.


We are seeking Data Scientists to collaborate with our team based in Canada in developing advanced forecasting models and optimization algorithms for leading CPG manufacturers and service providers. In this role, you'll monitor model performance in production, addressing challenges like data drift and concept drift, while delivering data-driven insights that shape business decisions.
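
As a small, self-contained illustration of the forecasting work described above (the series is synthetic, not Fiddlehead data), Holt-Winters exponential smoothing from statsmodels fits a trend plus annual seasonality and projects ahead:

```python
# Fit Holt-Winters to three years of weekly demand and forecast 8 weeks.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2022-01-02", periods=156, freq="W")
rng = np.random.default_rng(1)
demand = pd.Series(
    1000 + 2 * np.arange(156)                        # slow upward trend
    + 150 * np.sin(2 * np.pi * np.arange(156) / 52)  # annual seasonality
    + rng.normal(0, 40, 156),                        # noise
    index=idx,
)

model = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=52
).fit()

print(model.forecast(8))  # expected demand for the next 8 weeks
```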


What You’ll Bring


• Education and/or professional experience in data science and forecasting

• Proficiency with forecasting tools and libraries, ideally in Python

• Knowledge of machine learning and statistical concepts

• Strong analytical and problem-solving abilities

• Ability to communicate complex findings to non-technical stakeholders

• High attention to detail and data accuracy

• Degree in Statistics, Data Science, Computer Science, Engineering, or related field (Bachelor's, Master's, or PhD)


At Fiddlehead, you'll work on meaningful projects that advance predictive forecasting and sustainability in the CPG industry. We offer a collaborative, inclusive, and supportive environment that prioritizes professional development, work-life balance, and continuous learning. Our team members enjoy dedicated time to expand their skills while contributing to innovative solutions with real-world impact.


We carefully review every application and are committed to providing a response. Candidates selected will be invited to an in-person or virtual interview. To ensure equal access, we provide accommodations during the recruitment process for applicants with disabilities. If you require accommodations, please reach out to our team through the contact page on our website. At Fiddlehead, we are dedicated to fostering an inclusive and accessible environment where every employee and customer is respected, valued, and supported. We welcome applications from women, Indigenous peoples, persons with disabilities, ethnic and visible minorities, members of the LGBT+ community, and others who can help enrich the diversity of our workforce.


We offer a competitive compensation package with performance-based incentives and opportunities to contribute to impactful projects. Employees benefit from mentorship, training, and active participation in AI communities, all within a collaborative culture that values innovation, creativity, and professional growth.
