Big Data Engineer

Noida, Uttar Pradesh | Training Basket

Posted today

Job Description

We are looking for passionate B.Tech freshers with strong programming skills in Java who are eager to start their career in Big Data technologies. The role offers exciting opportunities to work on real-time big data projects, data pipelines, and cloud-based data solutions.


Responsibilities
  • Assist in designing, developing, and maintaining big data solutions.

  • Write efficient code in Java and integrate with big data frameworks.

  • Support the building of data ingestion, transformation, and processing pipelines.

  • Work with distributed systems and learn technologies such as Hadoop, Spark, Kafka, Hive, and HBase.

  • Collaborate with senior engineers on data-related problem-solving and performance optimization.

  • Participate in debugging, testing, and documentation of big data workflows.

Required Skills:
  • Strong knowledge of Core Java & OOP concepts.

  • Good understanding of SQL and database concepts.

  • Familiarity with data structures & algorithms.

  • Basic knowledge of Big Data frameworks (Hadoop/Spark/Kafka) is an added advantage.

  • Problem-solving skills and eagerness to learn new technologies.

Eligibility Criteria:
  • Education: B.Tech (CSE/IT or related fields).

  • Batch: (specific, e.g., 2024/2025 pass-outs).

  • Experience: Fresher (0–1 year).



Benefits
  • Training and mentoring in cutting-edge Big Data tools & technologies.

  • Exposure to live projects from day one.

  • A fast-paced, learning-oriented work culture.



Big Data Engineer - Scala

Faridabad, Haryana | Idyllic Services

Posted 1 day ago

Job Description

Job Title: Big Data Engineer – Scala

Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

Experience: 7–10 years (at least 3 years in Scala)

Notice Period: Immediate to 30 Days

Mode of Work: Hybrid


Role Overview

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.


Key Responsibilities

- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.

- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.

- Develop and integrate streaming solutions using Apache Kafka and orchestration tools such as NiFi/Airflow (a minimal sketch follows this list).

- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.

- Collaborate with cross-functional teams to design scalable cloud-based data architectures.

- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.

- Build monitoring and alerting systems leveraging Splunk or equivalent tools.

- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.

- Contribute to product development with a focus on scalability, maintainability, and performance.
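
As a rough illustration of the streaming responsibility above, here is a minimal sketch of reading a Kafka topic with Spark Structured Streaming via PySpark (one of the tools listed for this role). The broker address and topic name are placeholders, it assumes the spark-sql-kafka connector package is available on the cluster, and it is not this team's actual pipeline code.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Subscribe to a (hypothetical) "events" topic on a placeholder broker.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "events")
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Tolerate events up to 5 minutes late and count them per 1-minute window.
counts = (
    events.withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Console sink for illustration only; a real pipeline would write to a lake
# table, another Kafka topic, or a warehouse instead.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```

In practice, a job like this would typically be packaged, scheduled, and monitored through the orchestration tools named above (NiFi or Airflow) rather than run by hand.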


Mandatory Skills

- Scala – at least 3 years of hands-on experience.

- Strong expertise in Spark (PySpark) and Python.

- Hands-on experience with Apache Kafka.

- Knowledge of NiFi/Airflow for orchestration.

- Strong experience in distributed data systems (5+ years).

- Proficiency in SQL and query optimization.

- Good understanding of cloud architecture.


Preferred Skills

- Exposure to messaging technologies like Apache Kafka or equivalent.

- Experience in designing intuitive, responsive UIs for data analytics visualization.

- Familiarity with Splunk or other monitoring/alerting solutions.

- Hands-on experience with CI/CD tools (Git, Jenkins).

- Strong grasp of software engineering concepts, data modeling, and optimization techniques.

Senior Cloud Big Data Engineer

Noida, Uttar Pradesh | Anicalls (Pty) Ltd

Posted today

Job Description


Candidate should have:
• Experience designing, developing, and testing applications using proven or emerging technologies, in a variety of technologies and environments.
• Experience in using and tuning relational databases (Azure SQL Data Warehouse and SQL DB; MS SQL Server or another RDBMS is a plus).
• Experience with Data Lake implementations and design patterns.
• Experience with Lambda and Kappa architecture implementations.
• Knowledge of and experience with Azure Data Factory (Informatica is a plus) as an ETL environment.
• Knowledge of and exposure to cloud or on-premises MPP data warehousing systems (Azure SQL Data Warehouse).
• Knowledge of and experience with .NET Framework 4.6 and above, .NET Core, and .NET Standard.
• Knowledge of and experience with Azure storage services such as Blob Storage, Data Lake Store, Cosmos DB, and Azure SQL.
• Knowledge of and exposure to big data technologies such as Hadoop, HDFS, Hive, and Apache Spark/Databricks (a minimal sketch follows this list).
• Knowledge of and experience with Azure DevOps (build CI/CD pipelines) and TFS.
• Knowledge of and experience with serverless Azure compute services such as Azure Functions, Logic Apps, App Service, Service Bus, and WebJobs.
• Demonstrated knowledge of data management concepts as well as an outstanding command of the SQL standard.
• Experience with C# is required.
• Object-oriented programming proficiency using the .NET technology stack.
• At least 6 years of experience with cloud-based analytics, data management, and visualization technologies.
• Bachelor's degree in Programming/Systems or Computer Science, or equivalent work experience.
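
As a rough illustration of the Spark/Databricks and Azure storage items above, here is a minimal sketch of a batch step that reads raw files from Azure Data Lake Storage Gen2 and writes a curated Parquet output. The storage account, containers, paths, and column names are hypothetical placeholders, and the sketch assumes a Spark environment (such as Databricks) already configured with access to the lake; it is not the team's actual code.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-batch-sketch").getOrCreate()

# Hypothetical ADLS Gen2 locations (landed by an upstream process such as Data Factory).
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/events/2024/"
curated_path = "abfss://curated@examplelake.dfs.core.windows.net/events_daily/"

# Read newline-delimited JSON event files from the raw zone.
events = spark.read.json(raw_path)

# Basic cleanup and a daily aggregate by (hypothetical) event_type.
daily = (
    events.withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Persist as Parquet partitioned by date for downstream warehouse loads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(curated_path)
```

A step like this would typically be triggered and monitored by the orchestration and CI/CD tooling listed above (Azure Data Factory, Azure DevOps) rather than run interactively.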

QA Engineer-Big Data

Noida, Uttar Pradesh | BOLD Limited

Posted today

Job Description

BOLD is an established and fast-growing product company that transforms work lives. Since 2005, BOLD has delivered award-winning career services that have a meaningful and positive impact on job seekers and employers. BOLD’s robust product line includes professional resume and cover letter writing services, scientifically validated career tests, and employer tools that help companies hire, onboard, and communicate with their staff.

Big Data is all about making dry figures accessible and useful to the right audience. Our team works with the latest tools for ETL, reporting, and analysis, and provides performance monitoring and insights via dashboards, scorecards, and ad hoc analysis that help create customer engagement reporting and modelling using our event stream.

Job description:

Role:

The Hadoop QA engineer, as part of the Big Data team, will support big data operations and ensure that every phase and feature of the software solution is tested and that any potential issue is identified and fixed before the product goes live.

Responsibilities:

  • Understand and analyze complex business requirements and develop test cases.
  • Design, develop, and execute detailed test cases for functional, system, and regression testing.
  • Create test plans, test requirements, and test cases.
  • Verify and validate data between the source system and the Hadoop environment (a minimal sketch follows this list).
  • Work closely with development and product teams to understand business requirements.
  • Communicate accurately the status and risks of ongoing work and timelines.
  • Report detailed software defects.
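
As a rough illustration of the data verification responsibility above, here is a minimal sketch that reconciles a source-system table against the corresponding Hadoop (Hive) table using PySpark. The JDBC URL, credentials, table names, and key column are hypothetical placeholders, it assumes the appropriate JDBC driver is on the cluster classpath, and it is not BOLD's actual test framework.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("qa-data-validation-sketch")
    .enableHiveSupport()
    .getOrCreate()
)

# Source-system extract over JDBC (placeholder connection details).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-host:5432/app")
    .option("dbtable", "public.orders")
    .option("user", "qa_user")
    .option("password", "***")
    .load()
)

# The same entity as loaded into the Hadoop environment (a Hive table).
target_df = spark.table("warehouse.orders")


def profile(df):
    """Row count plus distinct-key count, as a simple reconciliation signature."""
    row = df.agg(
        F.count("*").alias("row_count"),
        F.countDistinct("order_id").alias("distinct_keys"),
    ).collect()[0]
    return row["row_count"], row["distinct_keys"]


src, tgt = profile(source_df), profile(target_df)

# Report a defect-style summary when the two sides disagree.
if src != tgt:
    print(f"MISMATCH: source(count, keys)={src} target(count, keys)={tgt}")
else:
    print("Source and Hadoop tables reconcile on row count and distinct keys.")
```

In practice such checks would be extended to column-level comparisons and wired into the defect-tracking workflow (e.g., Jira) mentioned below.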

Required Skills:

  • Experience with Quality Assurance and testing tools
  • Knowledge of software testing principles, methods, and processes
  • Experience in testing products developed using Hadoop technology
  • Expertise in writing complex SQL queries
  • Python scripting experience
  • Hands-on knowledge of the Linux environment
  • Experience with defect/project management tools, preferably Jira

Work Experience: 3-5 years

Educational Qualification: Engineering/Master’s degree from a good institute (preferably Computer Science or related)

About BOLD

BOLD is a fast-paced product company founded by two entrepreneurs passionate about helping people achieve their dreams. We stand together as a team empowering people to reach their professional aspirations. With our headquarters in Puerto Rico and offices in San Francisco and India, we’re a global organization on a path to change the career industry. Our vision is to revolutionize the online career world by creating transformational products that help people find jobs and companies hire the best candidates. A career at BOLD promises great challenges, opportunities, a strong culture, and an environment where you forge your own path ahead. Join us and discover what a great place BOLD is!
