557 Data Scientist jobs in Noida
Big Data Engineer
Posted today
Job Description
We are looking for passionate B.Tech freshers with strong programming skills in Java who are eager to start their careers in Big Data technologies. The role offers exciting opportunities to work on real-time big data projects, data pipelines, and cloud-based data solutions.
Requirements
Assist in designing, developing, and maintaining big data solutions.
Write efficient code in Java and integrate with big data frameworks.
Support the building of data ingestion, transformation, and processing pipelines.
Work with distributed systems and learn technologies such as Hadoop, Spark, Kafka, Hive, and HBase.
Collaborate with senior engineers on data-related problem-solving and performance optimization.
Participate in debugging, testing, and documentation of big data workflows.
Strong knowledge of Core Java & OOP concepts.
Good understanding of SQL and database concepts.
Familiarity with data structures & algorithms.
Basic knowledge of Big Data frameworks (Hadoop/Spark/Kafka) is an added advantage.
Problem-solving skills and eagerness to learn new technologies.
Education: B.Tech (CSE/IT or related fields).
Batch: recent graduates (e.g., 2024/2025 pass-outs).
Experience: Fresher (0–1 year)
Benefits
Training and mentoring in cutting-edge Big Data tools & technologies.
Exposure to live projects from day one.
A fast-paced, learning-oriented work culture.
Big Data Administrator
Posted today
Job Description
• Experience working on batch processing and tools in the Hadoop technical stack (e.g., MapReduce, YARN, Hive, HDFS, Oozie)
• Experience in Ambari setup and management
• 1–2 years of MapR cluster management/administration
• 2+ years of administration experience working with tools in the stream-processing technical stack (e.g., Kudu, Spark, Kafka, Avro)
• Hadoop administration experience with NoSQL stores (especially HBase)
• Hands-on experience monitoring, reporting on, and troubleshooting Hadoop resource utilization
• Hands-on experience supporting code deployments (Spark, Hive, Ab Initio, etc.) into the Hadoop cluster
• 3+ years as a systems integrator with Linux (SUSE, Ubuntu) systems and shell scripting
• 2+ years of DevOps tool administration (Docker, Ansible, Kubernetes, Mesos)
• MapR and Linux administration certifications highly preferred; Cloudera certification also preferred
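As a loose illustration of the monitoring and reporting duties above, here is a minimal Python sketch that pulls space and file-count figures from the WebHDFS REST API. The NameNode endpoint (namenode:9870) and the queried path are placeholders, and the sketch assumes a cluster with WebHDFS enabled; it is not part of the posting itself.
```python
# Minimal sketch (illustrative only): report HDFS space usage via the WebHDFS REST API.
# The hostname and port below are placeholders (9870 is the Hadoop 3.x NameNode web
# port); adjust for the cluster and ensure WebHDFS is enabled.
import requests

NAMENODE = "http://namenode:9870"  # placeholder endpoint


def hdfs_content_summary(path: str = "/") -> dict:
    """Fetch the ContentSummary (space consumed, file/dir counts, quotas) for a path."""
    url = f"{NAMENODE}/webhdfs/v1{path}"
    resp = requests.get(url, params={"op": "GETCONTENTSUMMARY"}, timeout=10)
    resp.raise_for_status()
    return resp.json()["ContentSummary"]


if __name__ == "__main__":
    summary = hdfs_content_summary("/")
    used_gib = summary["spaceConsumed"] / 1024 ** 3
    print(f"files={summary['fileCount']} dirs={summary['directoryCount']} "
          f"space_consumed={used_gib:.1f} GiB")
```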
Big Data Developer
Posted 8 days ago
Job Description
We are seeking a highly skilled Big Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines that handle high-volume financial data, including stocks, cryptocurrencies, and third-party data sources. You will play a critical role in ensuring data integrity, scalability, and real-time availability across our platforms.
Key Responsibilities:
- Design, develop, and manage end-to-end data pipelines for stocks, crypto, and other financial datasets.
- Integrate third-party APIs and data feeds into internal systems.
- Build and optimize data ingestion, storage, and transformation workflows (batch and real-time).
- Ensure data quality, consistency, and reliability across all pipelines.
- Collaborate with data scientists, analysts, and backend engineers to provide clean, structured, and scalable datasets.
- Monitor, troubleshoot, and optimize pipeline performance.
- Implement ETL/ELT best practices, data governance, and security protocols.
- Contribute to the scalability and automation of our data infrastructure.
Requirements:
- Proven experience as a Big Data Engineer / Data Engineer (preferably in financial or crypto domains).
- Strong expertise in Python, SQL, and distributed data systems.
- Hands-on experience with data pipeline tools (e.g., Apache Spark, Kafka, Airflow, Flink, Prefect).
- Experience with cloud platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift, etc.).
- Knowledge of API integrations and handling real-time streaming data.
- Familiarity with databases (relational and NoSQL) and data modeling.
- Solid understanding of stocks, cryptocurrencies, and financial data structures (preferred).
- Strong problem-solving skills with the ability to handle large-scale data challenges.
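For context on the kind of pipeline this posting describes, below is a minimal PySpark Structured Streaming sketch that reads price ticks from Kafka and writes one-minute averages to Parquet. The broker address, topic name, schema, and output paths are all illustrative placeholders, and running it requires the spark-sql-kafka connector package matching your Spark version.
```python
# Minimal sketch (illustrative only): stream price ticks from Kafka with Spark
# Structured Streaming and persist 1-minute average prices as Parquet.
# Broker, topic, schema, and paths are placeholders; the Kafka source also needs
# the spark-sql-kafka-0-10 package for your Spark version.
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("price-tick-pipeline").getOrCreate()

tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("ts", TimestampType()),
])

ticks = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "price-ticks")                # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), tick_schema).alias("t"))
    .select("t.*")
)

# 1-minute average price per symbol, tolerating 30 seconds of late-arriving data.
avg_prices = (
    ticks.withWatermark("ts", "30 seconds")
    .groupBy(window(col("ts"), "1 minute"), col("symbol"))
    .agg(avg("price").alias("avg_price"))
)

query = (
    avg_prices.writeStream.outputMode("append")
    .format("parquet")
    .option("path", "/data/prices/avg")                 # placeholder output path
    .option("checkpointLocation", "/data/checkpoints/avg")
    .start()
)
query.awaitTermination()
```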
Big Data Specialist
Posted 11 days ago
Job Description
Role Overview
We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
Must-Have Skills
- Strong hands-on experience with Hive and SQL for querying and data transformation.
- Proficiency in Python for data manipulation and automation.
- Expertise in Apache Spark (batch and streaming).
- Experience working with Kafka for streaming data pipelines.
Good-to-Have Skills
- Experience with workflow orchestration tools (e.g., Airflow).
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
- Familiarity with CI/CD pipelines and version control (Git).
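As a rough illustration of the Hive and Spark SQL side of this role, here is a minimal PySpark batch job that aggregates a Hive table and writes the result back as a partitioned table. The database, table, and column names are placeholders, and enableHiveSupport() assumes a configured Hive metastore; this is a sketch, not part of the posting.
```python
# Minimal sketch (illustrative only): batch aggregation over a Hive table with Spark SQL.
# Database/table/column names (events_db.page_events, analytics.daily_active_users,
# event_date, user_id) are placeholders; enableHiveSupport() assumes a Hive metastore.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("daily-active-users")
    .enableHiveSupport()
    .getOrCreate()
)

daily_active = spark.sql("""
    SELECT event_date,
           COUNT(DISTINCT user_id) AS daily_active_users
    FROM events_db.page_events
    WHERE event_date >= date_sub(current_date(), 30)
    GROUP BY event_date
""")

# Persist as a partitioned Hive table for downstream analysts.
(
    daily_active.write.mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_active_users")
)

spark.stop()
```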