22,598 Big Data Engineer jobs in India

Big Data Engineer

Pune, Maharashtra Nice Software Solutions Pvt. Ltd.

Posted 15 days ago

Job Description

Big Data Engineer (PySpark)

Location: Pune/Nagpur (WFO)

Experience: 8 - 12 Years

Employment Type: Full-time


Job Overview

We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high performance and reliability.


Key Responsibilities

  • Design, develop, and maintain data pipelines using PySpark and related Big Data technologies.
  • Work with HDFS, Hive, Sqoop, and other tools in the Hadoop ecosystem.
  • Write efficient HiveQL and SQL queries to handle large-scale datasets.
  • Perform performance tuning and optimization of distributed data systems.
  • Collaborate with cross-functional teams in an Agile environment to deliver high-quality solutions.
  • Manage and schedule workflows using Apache Airflow or Oozie.
  • Troubleshoot and resolve issues in data pipelines to ensure reliability and accuracy.
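The pipeline responsibilities above can be sketched minimally. The snippet below uses plain Python as a self-contained stand-in for a PySpark job (a real pipeline would use pyspark.sql DataFrames on a cluster); all names and data are illustrative, not from the posting.

```python
# Minimal stand-in for the ingest -> transform -> aggregate shape of the
# pipelines described above. A production job would express these stages
# with pyspark.sql; plain Python keeps the sketch self-contained.
from collections import defaultdict

def run_pipeline(raw_rows):
    """raw_rows: iterable of (user_id, amount) tuples, possibly dirty."""
    # Transform: drop malformed rows (missing key, non-numeric or negative amount)
    clean = (r for r in raw_rows
             if r[0] is not None
             and isinstance(r[1], (int, float)) and r[1] >= 0)
    # Aggregate: total amount per key (what a GROUP BY would do in HiveQL)
    totals = defaultdict(float)
    for user_id, amount in clean:
        totals[user_id] += amount
    return dict(totals)

rows = [("u1", 10.0), ("u2", 5.0), (None, 3.0), ("u1", -1), ("u1", 2.5)]
print(run_pipeline(rows))  # {'u1': 12.5, 'u2': 5.0}
```

Troubleshooting a real pipeline then amounts to validating each stage's output against expectations like the ones in this sketch.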


Required Skills

  • Proven experience in Big Data Engineering with a focus on PySpark.
  • Strong knowledge of HDFS, Hive, Sqoop, and related tools.
  • Proficiency in SQL/HiveQL for large datasets.
  • Expertise in performance tuning and optimization of distributed systems.
  • Familiarity with Agile methodology and collaborative team practices.
  • Experience with workflow orchestration tools (Airflow/Oozie).
  • Strong problem-solving, analytical, and communication skills.


Good to Have

  • Knowledge of data modeling and data warehousing concepts.
  • Exposure to DevOps practices and CI/CD pipelines for data engineering.
  • Experience with other Big Data frameworks such as Spark Streaming or Kafka.
This advertiser has chosen not to accept applicants from your region.

Big Data Engineer

Hyderabad, Andhra Pradesh Virtusa

Posted today

Job Description

Big Data Engineer - CREQ Description
  • Position: Big Data Engineer
  • Primary skills: big data concepts, AWS, PySpark
  • Location: HYD
  • Create a trigger-based automation framework for data migration.
  • Identify the roles/access needed for data migration from the federated bucket to the managed bucket, and build APIs for the same.
  • Integrate the CDMS framework with the Lake and Data Bridge APIs.
  • Migrate data from S3 Managed to Hadoop on-prem.
  • Build jobs for daily and bulk loads.
  • Provide test support for AVRO to test lake features.
  • Provide test support for compression types such as LZO and .ENC to test lake features.
  • Ab Initio integration: build a feature to create an operation trigger for the Ab Initio pipeline.
  • Movement to the new datacenter: SQL Server migration.
  • Carlstadt to Ashburn (DR switchover).
  • Develop and maintain data platforms using Python.
  • Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity.
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
  • Implement and manage agents for monitoring, logging, and automation within AWS environments.
  • Handle migration from PySpark to AWS.
  • (Secondary) Must have hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round-robin, Gather, Merge, Interleave, Lookup, etc.
  • Must have experience with SQL database programming, SQL performance tuning, and relational model analysis.
  • Good knowledge of developing UNIX scripts and Oracle SQL/PLSQL.
  • Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with the internal Archival Service Platform for efficient data purging.
  • Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
  • Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
  • Primary Location: Hyderabad, Andhra Pradesh, India
  • Job Type: Experienced
  • Primary Skills: Big Data, Python, Spark
  • Years of Experience: 12
  • Qualification:
  • Education: Any degree or equivalent
  • Experience: 6+ years
  • Travel: No
    This advertiser has chosen not to accept applicants from your region.

    Big Data Engineer

    Noida, Uttar Pradesh Training Basket

    Posted today

    Job Description

    We are looking for passionate B.Tech freshers with strong programming skills in Java who are eager to start their career in Big Data technologies. The role offers exciting opportunities to work on real-time big data projects, data pipelines, and cloud-based data solutions.


    Requirements
    • Assist in designing, developing, and maintaining big data solutions.
    • Write efficient code in Java and integrate with big data frameworks.
    • Support in building data ingestion, transformation, and processing pipelines.
    • Work with distributed systems and learn technologies like Hadoop, Spark, Kafka, Hive, HBase.
    • Collaborate with senior engineers on data-related problem-solving and performance optimization.
    • Participate in debugging, testing, and documentation of big data workflows.

    Required Skills:
    • Strong knowledge of Core Java & OOPs concepts.
    • Good understanding of SQL and database concepts.
    • Familiarity with data structures & algorithms.
    • Basic knowledge of Big Data frameworks (Hadoop/Spark/Kafka) is an added advantage.
    • Problem-solving skills and eagerness to learn new technologies.

    Eligibility Criteria:
    • Education: B.Tech (CSE/IT or related fields).
    • Batch: (specific, e.g., 2024/2025 pass-outs).
    • Experience: Fresher (0–1 year)



    Benefits
    • Training and mentoring in cutting-edge Big Data tools & technologies.
    • Exposure to live projects from day one.
    • A fast-paced, learning-oriented work culture.
    This advertiser has chosen not to accept applicants from your region.

    Big Data Engineer

    Noida, Uttar Pradesh Kiash Solutions LLP

    Posted today

    Job Description

    Basic Qualifications:

    Bachelor's degree or higher in Computer Science, or equivalent degree, and 3-10 years of related work experience.

    In-depth experience with a big data cloud platform, preferably Azure.

    Strong grasp of programming languages (Python, PySpark, or equivalent) and a willingness to learn new ones.

    Experience writing database-heavy services or APIs.

    Experience building and optimizing data pipelines, architectures, and data sets.

    Working knowledge of queueing, stream processing, and highly scalable data stores.

    Experience working with and supporting cross-functional teams.

    Strong understanding of structuring code for testability.
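One common reading of "structuring code for testability" is keeping I/O at the edges and passing dependencies in, so core logic can be exercised without a live database or queue. The sketch below illustrates that idea under this assumption; all names are hypothetical, not from the posting.

```python
# Dependency injection for testability: the core logic takes a data-source
# callable rather than opening a connection itself, so tests can substitute
# an in-memory stub for the real (e.g. database-backed) source.
def load_large_orders(fetch_rows, min_amount):
    """fetch_rows: any callable returning (order_id, amount) tuples.
    In production this might wrap a database cursor; in tests, a list."""
    return [oid for oid, amount in fetch_rows() if amount >= min_amount]

# In a test, a stub stands in for the real data source:
fake_source = lambda: [("o1", 50), ("o2", 5), ("o3", 120)]
print(load_large_orders(fake_source, min_amount=10))  # ['o1', 'o3']
```

The same shape applies to queues and stream consumers: inject the reader, keep the transformation pure.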

    Preferred Qualifications:

    Professional experience implementing and maintaining MLOps pipelines in MLflow or AzureML.

    Professional experience implementing data ingestion pipelines using Data Factory.

    Professional experience with Databricks and coding with notebooks.

    Professional experience processing and manipulating data using SQL and Python code.

    Professional experience with user training, customer support, and coordination with cross-functional teams.


    This advertiser has chosen not to accept applicants from your region.

    Big Data Engineer

    Bengaluru, Karnataka RiskInsight Consulting Pvt Ltd

    Posted today

    Job Description

    Responsibilities
    • Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases.
    • Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis.
    • Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions.
    • Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques.
    • Optimize data processing workflows for maximum performance and efficiency.
    • Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems.
    • Implement best practices for data governance, security, and compliance.
    • Document technical designs, processes, and procedures to support knowledge sharing across teams.

    Requirements

    • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
    • 4+ years of experience as a Big Data Engineer or in a similar role.
    • Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks.
    • Extensive experience with programming languages such as Python, Scala, or Java.
    • Knowledge of data modeling and data warehousing concepts.
    • Familiarity with NoSQL databases like Cassandra or MongoDB.
    • Proficient in SQL for data querying and analysis.
    • Strong analytical and problem-solving skills.
    • Excellent communication and collaboration abilities.
    • Ability to work independently and effectively in a fast-paced environment.
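As a small illustration of the SQL querying skill listed above, the snippet below runs a GROUP BY aggregation using Python's built-in sqlite3 as a stand-in engine (production work in this role would target Hive, Spark SQL, or a NoSQL store instead); table and column names are made up for the example.

```python
# GROUP BY aggregation, the bread and butter of data querying and analysis.
# sqlite3 is used only because it ships with Python; the SQL itself is the point.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("u1", 10), ("u2", 5), ("u1", 7)])
rows = conn.execute(
    "SELECT user, SUM(amount) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('u1', 17.0), ('u2', 5.0)]
```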

    Benefits

    Competitive salary and benefits package.

    Opportunity to work on cutting-edge technologies and solve complex challenges.

    Dynamic and collaborative work environment with opportunities for growth and career advancement.

    Regular training and professional development opportunities.

    This advertiser has chosen not to accept applicants from your region.

    Big Data Engineer

    Chennai, Tamil Nadu Hexaware Technologies

    Posted today

    Job Description

    Description:
  • 4 or more years of experience working directly with enterprise data solutions
  • Hands-on experience working in a public cloud environment and on-prem infrastructure
  • Specialty in columnar databases like Redshift Spectrum and the AWS cloud infrastructure services (Redshift, S3, Lambda)
  • Excellent SQL skills and Python coding are a must
  • Experience with a wide variety of modern data processing technologies, including:
  • Expert in commonly used AWS services (S3, Lambda, Redshift, Glue, Athena)
  • Expert in columnar databases, primarily Redshift
  • Expertise in Python is a must-have
  • Big Data stack (Spark, Spectrum)
  • Data streaming (Kafka)
  • Knowledge of time-series data stores like Apache Pinot or Druid is a plus

    This advertiser has chosen not to accept applicants from your region.

    Big Data Engineer

    West Bengal, West Bengal Avanade

    Posted today

    Job Description

    Benefit from our flexible work options and career growth opportunities

    As a Senior Data Engineer specialized in Microsoft Azure tools, you will support the implementation of projects focused on collecting, aggregating, storing, reconciling, and making data accessible from disparate sources to enable analysis and decision making.

    This role will also play a critical part in the data supply chain, by ensuring stakeholders can access and manipulate data for routine and ad hoc analysis. Additionally, you will support the full lifecycle of data from ingesting through analytics to action.

    What will you do?

    - Translate business requirements into technical solutions
    - Support the planning and implementation of technical solutions
    - Apply strong knowledge of data warehouse concepts and T-SQL relational/non-relational databases for data access and Advanced Analytics
    - Use knowledge of Power BI to deliver better data visualizations to clients

    Some of the best things about working at Avanade:
    - Opportunity to work for Microsoft’s Global Alliance Partner of the Year (14 years in a row), with exceptional development and training (minimum 8 hours per year for training and paid certifications)
    - Real-time access to technical and skilled resources globally
    - Dedicated career advisor to encourage your growth
    - Engaged and helpful coworkers genuinely interested in you.

    This advertiser has chosen not to accept applicants from your region.
     
