681 Senior Data Engineer jobs in Noida
Big Data Engineer
Posted today
Job Description
We are looking for passionate B.Tech freshers with strong programming skills in Java who are eager to start their career in Big Data technologies. The role offers exciting opportunities to work on real-time big data projects, data pipelines, and cloud-based data solutions.
Requirements
Assist in designing, developing, and maintaining big data solutions.
Write efficient code in Java and integrate with big data frameworks.
Support the building of data ingestion, transformation, and processing pipelines.
Work with distributed systems and learn technologies such as Hadoop, Spark, Kafka, Hive, and HBase.
Collaborate with senior engineers on data-related problem-solving and performance optimization.
Participate in debugging, testing, and documenting big data workflows.
Strong knowledge of Core Java and OOP concepts.
Good understanding of SQL and database concepts.
Familiarity with data structures and algorithms.
Basic knowledge of Big Data frameworks (Hadoop/Spark/Kafka) is an added advantage.
Problem-solving skills and eagerness to learn new technologies.
Education: B.Tech (CSE/IT or related fields).
Batch: (specific, e.g., 2024/2025 pass-outs).
Experience: Fresher (0–1 year).
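To give a feel for the ingestion-transformation-processing work this role describes, here is a hypothetical sketch in plain Java (all names and the CSV-like record format are illustrative, not from any specific project), using the standard streams API to mirror the shape of a pipeline stage:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: a toy ingest -> transform -> aggregate pipeline stage.
// Real big data jobs would run the same shape of logic on Spark or Hadoop.
public class PipelineSketch {
    public static Map<String, Long> countByLevel(List<String> rawLines) {
        return rawLines.stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())        // ingestion: drop blank records
                .map(line -> line.split(",")[0])        // transform: extract first field
                .collect(Collectors.groupingBy(         // process: aggregate counts per key
                        level -> level, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> events = List.of("ERROR,disk full", "INFO,started", "ERROR,timeout", "");
        // counts per level, e.g. {ERROR=2, INFO=1} (map iteration order is unspecified)
        System.out.println(countByLevel(events));
    }
}
```

The same filter/map/aggregate structure carries over almost directly to Spark's `Dataset` API, which is one reason stream-style Java fundamentals matter for this role.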
Benefits
Training and mentoring in cutting-edge Big Data tools and technologies.
Exposure to live projects from day one.
A fast-paced, learning-oriented work culture.
Big Data Engineer - Scala
Posted 1 day ago
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 years (minimum 3 years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools such as NiFi or Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
Mandatory Skills
- Scala – minimum 3 years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience with distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies such as Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
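As a rough illustration of the streaming-aggregation work the responsibilities above describe, here is a hypothetical sketch in plain Java (the `Event` record, window size, and all names are illustrative): it buckets timestamped events into fixed one-minute windows and counts per window, the core shape of a Kafka/Spark streaming job, without requiring a running broker.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: fixed-window counting over timestamped events,
// mirroring the shape of a streaming aggregation in Kafka/Spark pipelines.
public class WindowedCount {
    record Event(long epochMillis, String key) {}

    static Map<Long, Long> countPerWindow(List<Event> events, long windowMillis) {
        Map<Long, Long> counts = new TreeMap<>();   // sorted by window start time
        for (Event e : events) {
            // align each event's timestamp to the start of its window
            long windowStart = (e.epochMillis / windowMillis) * windowMillis;
            counts.merge(windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
                new Event(10_000, "a"), new Event(55_000, "b"), new Event(70_000, "c"));
        System.out.println(countPerWindow(events, 60_000)); // {0=2, 60000=1}
    }
}
```

In a real deployment the same keyed-window logic would typically be expressed with Spark Structured Streaming's window functions or Kafka Streams, with event-time watermarks handling late data.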
Senior Cloud Big Data Engineer
Posted today
Job Description
Candidate should have:
• Experience designing, developing, and testing applications using proven or emerging technologies, across a variety of environments.
• Experience using and tuning relational databases (Azure SQL Data Warehouse and SQL DB; MS SQL Server or another RDBMS is a plus).
• Experience with Data Lake implementations and design patterns.
• Experience with Lambda and Kappa architecture implementations.
• Knowledge of and experience with Azure Data Factory as an ETL environment (Informatica is a plus).
• Knowledge of and exposure to cloud or on-premises MPP data warehousing systems (Azure SQL Data Warehouse).
• Knowledge of and experience in .NET Framework 4.6 and above, .NET Core, and .NET Standard.
• Knowledge of and experience in Azure storage services such as Blob Storage, Data Lake Store, Cosmos DB, and Azure SQL.
• Knowledge of and exposure to Big Data technologies such as Hadoop, HDFS, Hive, and Apache Spark/Databricks.
• Knowledge of and experience in Azure DevOps (building CI/CD pipelines) and TFS.
• Knowledge of and experience in serverless Azure compute services such as Azure Functions, Logic Apps, App Service, Service Bus, and WebJobs.
• Demonstrated knowledge of data management concepts and an outstanding command of the SQL standard.
• Experience with C# is required.
• Object-oriented programming proficiency using the .NET technology stack.
• At least 6 years of experience with cloud-based analytics, data management, and visualization technologies.
• Bachelor's degree in Programming/Systems, Computer Science, or equivalent work experience.
QA Engineer-Big Data
Posted today
Job Description
BOLD is an established and fast-growing product company that transforms work lives. Since 2005, BOLD has delivered award-winning career services that have a meaningful and positive impact on job seekers and employers. BOLD's robust product line includes professional resume and cover letter writing services, scientifically validated career tests, and employer tools that help companies hire, onboard, and communicate with their staff.
Big Data is all about making dry figures accessible and useful to the right audience. Our team works on the latest tools for ETL, reporting, and analysis, and provides performance monitoring and insights via dashboards, scorecards, and ad hoc analysis, which help create customer engagement reporting and modelling using our event stream.
Job description:
Role:
As part of the Big Data team, the Hadoop QA engineer will support big data operations, ensuring that every phase and feature of the software solution is tested and that any potential issue is identified and fixed before the product goes live.
Responsibilities:
Required Skills:
Work Experience:
3–5 years
Educational Qualification:
Engineering or Master's degree from a good institute (preferably in Computer Science or a related field)
About BOLD
BOLD is a fast-paced product company founded by two entrepreneurs passionate about helping people achieve their dreams. We stand together as a team, empowering people to reach their professional aspirations. With our headquarters in Puerto Rico and offices in San Francisco and India, we're a global organization on a path to change the career industry. Our vision is to revolutionize the online career world by creating transformational products that help people find jobs and companies hire the best candidates. A career at BOLD promises great challenges, opportunity, and culture, and an environment where you forge your own path ahead. Join us and discover what a great place BOLD is!