262 Data Scientists jobs in Delhi

Big Data Developer

New Delhi, Delhi | Ravant Media

Posted 8 days ago

Job Description

We are seeking a highly skilled Big Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines that handle high-volume financial data, including stocks, cryptocurrencies, and third-party data sources. You will play a critical role in ensuring data integrity, scalability, and real-time availability across our platforms.


Key Responsibilities:

  • Design, develop, and manage end-to-end data pipelines for stocks, crypto, and other financial datasets.
  • Integrate third-party APIs and data feeds into internal systems.
  • Build and optimize data ingestion, storage, and transformation workflows (batch and real-time).
  • Ensure data quality, consistency, and reliability across all pipelines.
  • Collaborate with data scientists, analysts, and backend engineers to provide clean, structured, and scalable datasets.
  • Monitor, troubleshoot, and optimize pipeline performance.
  • Implement ETL/ELT best practices, data governance, and security protocols.
  • Contribute to the scalability and automation of our data infrastructure.


Requirements:

  • Proven experience as a Big Data Engineer / Data Engineer (preferably in financial or crypto domains).
  • Strong expertise in Python, SQL, and distributed data systems.
  • Hands-on experience with data pipeline tools (e.g., Apache Spark, Kafka, Airflow, Flink, Prefect).
  • Experience with cloud platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift, etc.).
  • Knowledge of API integrations and handling real-time streaming data.
  • Familiarity with databases (relational and NoSQL) and data modeling.
  • Solid understanding of stocks, cryptocurrencies, and financial data structures (preferred).
  • Strong problem-solving skills with the ability to handle large-scale data challenges.
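As a rough illustration of the ingestion-and-validation work this role describes, here is a minimal, hypothetical sketch in plain Python. The payload, field names, and pipeline are invented for illustration only; a production pipeline would use the Spark/Kafka/Airflow stack listed above.

```python
from datetime import datetime, timezone

# Hypothetical raw payload, standing in for a third-party market-data API response.
RAW_TICKS = [
    {"symbol": "BTC-USD", "price": "67012.5", "ts": 1714000000},
    {"symbol": "ETH-USD", "price": "3301.20", "ts": 1714000000},
    {"symbol": "BTC-USD", "price": "bad",     "ts": 1714000060},  # dirty record
]

def transform(raw):
    """Normalize one raw tick; return None for records that fail validation."""
    try:
        price = float(raw["price"])
    except (KeyError, ValueError):
        return None  # data-quality gate: drop unparsable prices
    if price <= 0:
        return None
    return {
        "symbol": raw["symbol"],
        "price": price,
        "ts": datetime.fromtimestamp(raw["ts"], tz=timezone.utc).isoformat(),
    }

def run_pipeline(raw_ticks):
    """Extract -> transform -> load, tracking rejects for pipeline monitoring."""
    clean, rejected = [], 0
    for raw in raw_ticks:
        row = transform(raw)
        if row is None:
            rejected += 1
        else:
            clean.append(row)
    return clean, rejected

clean, rejected = run_pipeline(RAW_TICKS)
```

The reject counter is the kind of signal that would feed the monitoring and data-quality responsibilities listed above.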

Big Data Specialist

New Delhi, Delhi | Brillio

Posted 11 days ago

Job Description

Role Overview

We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.


Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
  • Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
  • Write efficient, high-performance SQL queries to extract, transform, and load data.
  • Develop reusable data frameworks and utilities in Python.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
  • Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.


Must-Have Skills

  • Strong hands-on experience with Hive and SQL for querying and data transformation.
  • Proficiency in Python for data manipulation and automation.
  • Expertise in Apache Spark (batch and streaming).
  • Experience working with Kafka for streaming data pipelines.


Good-to-Have Skills

  • Experience with workflow orchestration tools (e.g., Airflow).
  • Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
  • Familiarity with CI/CD pipelines and version control (Git).
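The Kafka/Spark Streaming responsibilities above largely come down to windowed aggregations over an event stream. A minimal pure-Python sketch of a tumbling-window average follows; the event fields are hypothetical, and in practice Spark's windowed `groupBy` would compute this at scale over a Kafka source.

```python
from collections import defaultdict

# Hypothetical event stream, standing in for records consumed from a Kafka topic.
EVENTS = [
    {"symbol": "AAPL", "price": 190.0, "ts": 0},
    {"symbol": "AAPL", "price": 191.0, "ts": 30},
    {"symbol": "AAPL", "price": 192.0, "ts": 75},
    {"symbol": "MSFT", "price": 410.0, "ts": 80},
]

def tumbling_avg(events, window_s=60):
    """Average price per (symbol, window-start), one bucket per tumbling window."""
    buckets = defaultdict(list)
    for e in events:
        window_start = (e["ts"] // window_s) * window_s  # align to window boundary
        buckets[(e["symbol"], window_start)].append(e["price"])
    return {k: sum(v) / len(v) for k, v in buckets.items()}

aggs = tumbling_avg(EVENTS)
```

Each key is a (symbol, window-start) pair, so late-arriving symbols simply land in their own bucket rather than skewing another window.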

Sr. Data Scientists - AI/ML - Gen AI - Work location: Across India | Exp: 4-12 years

New Delhi, Delhi | Capgemini Engineering

Posted 4 days ago

Job Description

Data Scientists - AI/ML - Gen AI - Across India | Exp: 4-10 years


Data scientists with around 4-10 years of total experience, and at least 4-10 years of relevant experience in data science, analytics, and AI/ML. Keywords: Python; data science; AI/ML; Gen AI.


Primary Skills :


- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference, and testing.

- Excellent understanding and hands-on experience building deep learning models for text and image analytics (e.g., ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures).

- Proficiency in common data science languages and tools such as R and Python.

- Experience with common data science toolkits such as NumPy, pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, and OpenCV.

- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, and XGBoost.

- Exposure to or knowledge of cloud platforms (Azure/AWS).

- Experience deploying models to production.
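The applied-statistics skills listed above (distributions, inference, testing) can be illustrated with a small stdlib-only sketch: a two-sided z-test on a hypothetical sample of daily returns, using a normal approximation. The sample values are invented for illustration.

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

# Hypothetical daily-return sample; test H0: true mean return is zero.
sample = [0.012, -0.004, 0.009, 0.015, -0.002, 0.008, 0.011, 0.003]

n = len(sample)
m = mean(sample)
se = stdev(sample) / sqrt(n)                   # standard error of the mean
z = m / se                                      # z statistic under H0: mu = 0
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value (normal approx.)
```

With so few observations a t-test would be the more defensible choice; the normal approximation keeps the sketch to the standard library.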


Big Data Engineer - Scala

Narela, Delhi | Idyllic Services

Posted 1 day ago

Job Description

Job Title: Big Data Engineer – Scala

Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

Experience: 7–10 Years (Minimum 3+ years in Scala)

Notice Period: Immediate to 30 Days

Mode of Work: Hybrid


Role Overview

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.


Key Responsibilities

- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.

- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.

- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow.

- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.

- Collaborate with cross-functional teams to design scalable cloud-based data architectures.

- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.

- Build monitoring and alerting systems leveraging Splunk or equivalent tools.

- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.

- Contribute to product development with a focus on scalability, maintainability, and performance.


Mandatory Skills

- Scala – Minimum 3+ years of hands-on experience.

- Strong expertise in Spark (PySpark) and Python.

- Hands-on experience with Apache Kafka.

- Knowledge of NiFi/Airflow for orchestration.

- Strong experience in distributed data systems (5+ years).

- Proficiency in SQL and query optimization.

- Good understanding of cloud architecture.


Preferred Skills

- Exposure to messaging technologies like Apache Kafka or equivalent.

- Experience in designing intuitive, responsive UIs for data analytics visualization.

- Familiarity with Splunk or other monitoring/alerting solutions.

- Hands-on experience with CI/CD tools (Git, Jenkins).

- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
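The SQL and query-optimization skills above can be illustrated with a small stdlib-only SQLite sketch that compares query plans before and after adding an index. The table and data are hypothetical, and production systems would apply the same idea to warehouse or Spark SQL workloads.

```python
import sqlite3

# In-memory table standing in for a much larger trades dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("AAPL", 10), ("MSFT", 5), ("AAPL", 7)] * 100,
)

QUERY = "SELECT SUM(qty) FROM trades WHERE symbol = ?"

def plan(sql):
    """Return SQLite's query plan for sql as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql, ("AAPL",))
    return " ".join(row[3] for row in rows)  # row[3] is the plan detail text

before = plan(QUERY)   # full table scan: no index covers the predicate
conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
after = plan(QUERY)    # the equality predicate can now use the index
```

Inspecting the plan rather than timing the query makes the optimization effect visible even on a tiny dataset.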
