262 Data Scientist jobs in Delhi
Big Data Developer
Posted 8 days ago
Job Description
We are seeking a highly skilled Big Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines that handle high-volume financial data, including stocks, cryptocurrencies, and third-party data sources. You will play a critical role in ensuring data integrity, scalability, and real-time availability across our platforms.
Key Responsibilities:
- Design, develop, and manage end-to-end data pipelines for stocks, crypto, and other financial datasets.
- Integrate third-party APIs and data feeds into internal systems.
- Build and optimize data ingestion, storage, and transformation workflows (batch and real-time).
- Ensure data quality, consistency, and reliability across all pipelines.
- Collaborate with data scientists, analysts, and backend engineers to provide clean, structured, and scalable datasets.
- Monitor, troubleshoot, and optimize pipeline performance.
- Implement ETL/ELT best practices, data governance, and security protocols.
- Contribute to the scalability and automation of our data infrastructure.
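To make the API-integration and real-time ingestion responsibilities above concrete, here is a minimal sketch of a poller that fetches quotes from a third-party endpoint and publishes them to Kafka. The endpoint URL, topic name, and payload fields are hypothetical, and the sketch assumes the requests and kafka-python packages.

import json
import time

import requests
from kafka import KafkaProducer  # assumes the kafka-python package

# Serialize dict payloads to JSON bytes before sending to Kafka.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

API_URL = "https://api.example.com/v1/quotes"  # hypothetical third-party feed

def poll_and_publish(symbols):
    """Fetch the latest quote for each symbol and push it to Kafka."""
    for symbol in symbols:
        resp = requests.get(API_URL, params={"symbol": symbol}, timeout=5)
        resp.raise_for_status()
        tick = resp.json()
        tick["ingested_at"] = time.time()  # lets downstream jobs measure lag
        producer.send("market-ticks", value=tick)  # hypothetical topic name
    producer.flush()

if __name__ == "__main__":
    while True:
        poll_and_publish(["BTC-USD", "AAPL"])
        time.sleep(1)  # fixed polling interval; real feeds may push instead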
Requirements:
- Proven experience as a Big Data Engineer / Data Engineer (preferably in financial or crypto domains).
- Strong expertise in Python, SQL, and distributed data systems.
- Hands-on experience with data pipeline tools (e.g., Apache Spark, Kafka, Airflow, Flink, Prefect).
- Experience with cloud platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift, etc.).
- Knowledge of API integrations and handling real-time streaming data.
- Familiarity with databases (relational and NoSQL) and data modeling.
- Solid understanding of stocks, cryptocurrencies, and financial data structures (preferred).
- Strong problem-solving skills with the ability to handle large-scale data challenges.
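Since the requirements above call out orchestration tools such as Airflow, here is a minimal sketch of a daily batch ETL DAG. The DAG id, task names, and the extract/transform/load callables are hypothetical stand-ins; a real pipeline would replace the print statements with actual ingestion and warehouse writes.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw market data from the source API")  # placeholder step

def transform():
    print("clean and normalize the raw records")  # placeholder step

def load():
    print("write curated records to the warehouse")  # placeholder step

with DAG(
    dag_id="market_data_etl",         # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain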
Big Data Specialist
Posted 11 days ago
Job Description
Role Overview
We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
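As one way the Kafka-plus-Spark-Streaming responsibility above could look in practice, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and maintains per-minute event counts. The broker address and topic name are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("kafka-windowed-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")          # hypothetical topic
    .load()
    # Kafka rows expose key/value as binary; cast value to a string payload.
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Count events per 1-minute event-time window, tolerating 5 minutes of lateness.
counts = (
    events.withWatermark("timestamp", "5 minutes")
    .groupBy(window(col("timestamp"), "1 minute"))
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")                      # sink for local inspection
    .option("truncate", "false")
    .start()
)
query.awaitTermination()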
Must-Have Skills
- Strong hands-on experience with Hive and SQL for querying and data transformation.
- Proficiency in Python for data manipulation and automation.
- Expertise in Apache Spark (batch and streaming).
- Experience working with Kafka for streaming data pipelines.
Good-to-Have Skills
- Experience with workflow orchestration tools (e.g., Airflow).
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
- Familiarity with CI/CD pipelines and version control (Git).
Sr. Data Scientists - AI/ML, Gen AI - Work location: Across India | Exp: 4-12 years
Posted 4 days ago
Job Description
Data Scientists - AI/ML, Gen AI - Across India | Exp: 4-10 years
We are hiring data scientists with around 4-10 years of total experience, including at least 4-10 years of relevant experience in data science, analytics, and AI/ML. Core skills: Python; data science; AI/ML; Gen AI.
Primary Skills :
- Excellent understanding and hands-on experience of data science and machine learning techniques and algorithms for supervised and unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference, and testing.
- Excellent understanding and hands-on experience building deep-learning models for text and image analytics (e.g., ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures).
- Proficient in coding in common data science languages and tools such as R and Python.
- Experience with common data science toolkits, such as NumPy, pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, and OpenCV.
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, and XGBoost.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models to production.
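For illustration of the supervised-learning and scikit-learn items above, here is a minimal sketch of a standard train/evaluate loop on a bundled dataset. The dataset, model choice, and split ratio are illustrative only, not prescribed by the role.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold out 20% of the data for evaluation, stratified by class label.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out set.
print(classification_report(y_test, model.predict(X_test)))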
Big Data Engineer - Scala
Posted 1 day ago
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 Years (minimum 3 years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record of building, scaling, and optimizing big data pipelines, along with hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
Mandatory Skills
- Scala – minimum 3 years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience with distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
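As a small illustration of the SQL and query-optimization expectations above, the sketch below registers a DataFrame as a temporary view, runs an aggregate query, and inspects the physical plan with explain(). The table and column names are hypothetical placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-optimization-sketch").getOrCreate()

# Tiny in-memory stand-in for a trades table.
trades = spark.createDataFrame(
    [("AAPL", 10, 190.5), ("AAPL", 5, 191.0), ("BTC", 1, 64000.0)],
    ["symbol", "qty", "price"],
)
trades.createOrReplaceTempView("trades")

# Aggregate notional value per symbol.
per_symbol = spark.sql(
    """
    SELECT symbol, SUM(qty * price) AS notional
    FROM trades
    GROUP BY symbol
    """
)

per_symbol.show()
# Inspect the physical plan; this is where shuffles, skew, and join
# strategies become visible when tuning real workloads.
per_symbol.explain()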