13,598 Data Engineer jobs in India

Big Data Engineer

Mumbai, Maharashtra | HirePower Staffing Solution

Posted 5 days ago


Job Description

Position Overview:

We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent.

You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.


Key Responsibilities:

  • Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
  • Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
  • Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
  • Review code and provide feedback to junior engineers to maintain high quality and scalable solutions.
  • Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
  • Lead by example in object-oriented development, particularly using Scala and Java.
  • Translate complex requirements into clear, actionable technical tasks for the team.
  • Contribute to the development of ETL processes for integrating data from various sources.
  • Document technical approaches, best practices, and workflows for knowledge sharing within the team.
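Purely for illustration of the Spark, Hive, and HDFS work described in the responsibilities above, here is a minimal ETL sketch. It is written in PySpark for brevity even though the role emphasizes Scala and Java, and the database, columns, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical daily ETL: read a Hive staging table, clean it, publish curated Parquet.
spark = (
    SparkSession.builder
    .appName("orders-daily-etl")
    .enableHiveSupport()          # assumes a Hive metastore is configured on the cluster
    .getOrCreate()
)

raw = spark.table("staging.orders")   # hypothetical source table

curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Write to HDFS as partitioned Parquet for downstream consumers.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("hdfs:///warehouse/curated/orders"))
```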

Required Skills and Qualifications:

  • 8+ years of professional experience in Big Data development and engineering.
  • Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
  • Solid object-oriented development experience with Scala and Java.
  • Strong SQL skills with experience working with large data sets.
  • Practical experience designing, installing, configuring, and supporting Big Data clusters.
  • Deep understanding of ETL processes and data integration strategies.
  • Proven experience mentoring or supporting junior engineers in a team setting.
  • Strong problem-solving, troubleshooting, and analytical skills.
  • Excellent communication and interpersonal skills.


Preferred Qualifications:

  • Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
  • Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
  • Exposure to Agile or DevOps practices in Big Data project environments.


What We Offer:

  • Opportunity to work on challenging, high-impact Big Data projects.
  • Leadership role in shaping and mentoring the next generation of engineers.
  • Supportive and collaborative team culture.
  • Flexible working environment.
  • Competitive compensation and professional growth opportunities.


Big Data Engineer

Hyderabad, Telangana | ₹500,000 - ₹1,200,000 per year | Artech

Posted today


Job Description

Experience: 4-6 Years (Contract to Hire)

Work Location: Chennai, TN | Bangalore, KA | Hyderabad, TS

Skills Required: Digital: Big Data and Hadoop Ecosystems; Digital: PySpark

Job Description:

"? Need to work as a developer in Bigdata, Hadoop or Data Warehousing Tools and Cloud Computing ?

Work on Hadoop, Hive SQL?s, Spark, Bigdata Eco System Tools?

Experience in working with teams in a complex organization involving multiple reporting lines?

The candidate should have strong functional and technical knowledge to deliver what is required and he/she should be well acquainted with Banking terminologies. ?

The candidate should have strong DevOps and Agile Development Framework knowledge?

Create Scala/Spark jobs for data transformation and aggregation?

Experience with stream-processing systems like Storm, Spark-Streaming, Flink"
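The sketch referenced above: a rough PySpark version of a transformation-and-aggregation job of the kind this role describes (shown in Python rather than Scala for brevity; the database, tables, and columns are invented).

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily-account-totals")
    .enableHiveSupport()          # assumes Hive tables are available
    .getOrCreate()
)

# Hypothetical banking-style aggregation over a Hive table.
txns = spark.sql("SELECT account_id, txn_date, amount FROM bank_dwh.transactions")

daily_totals = (
    txns.groupBy("account_id", "txn_date")
        .agg(F.sum("amount").alias("total_amount"),
             F.count(F.lit(1)).alias("txn_count"))
)

daily_totals.write.mode("overwrite").saveAsTable("bank_dwh.daily_account_totals")
```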


Big Data Engineer

₹1,500,000 - ₹2,500,000 per year | Preludesys

Posted today


Job Description

Hi,

Greetings from Preludesys India Pvt Ltd.

We are hiring for one of our prestigious clients for the position below.

Job Opportunity: Big Data Engineer

Notice Period: Immediate - 30 Days

Key Responsibilities:

  • Design, develop, and maintain data pipelines using the Cloudera Hadoop ecosystem
  • Implement real-time data streaming solutions with Apache Kafka
  • Work with Dataiku and Apache Spark on the Cloudera platform for advanced analytics
  • Develop scalable data solutions using Python, PySpark, and SQL
  • Apply strong data modeling principles to support business intelligence and analytics
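A minimal sketch of the Kafka-to-Spark streaming ingestion described in the responsibilities above. The broker, topic, and storage paths are placeholders, and the job assumes the spark-sql-kafka connector is available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read a Kafka topic as a streaming DataFrame.
events = (
    spark.readStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
         .option("subscribe", "events")                        # placeholder topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast the payload before further processing.
decoded = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land the raw stream as Parquet; the checkpoint makes the job restartable.
query = (
    decoded.writeStream
           .format("parquet")
           .option("path", "/data/landing/events")             # placeholder path
           .option("checkpointLocation", "/checkpoints/events")
           .start()
)
query.awaitTermination()
```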

Mandatory Skills:

  • Hands-on experience with the Cloudera Platform

Nice-to-Have Skills:

  • Proficiency in Data Modeling
  • Experience with Hadoop and Spark

Big Data Engineer

₹1,500,000 - ₹2,500,000 per year | Trustklub Consulting

Posted today


Job Description

Roles and Responsibilities:

  • Design, develop, and maintain large-scale data pipelines using PySpark on AWS cloud platform.
  • Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions that meet business needs.
  • Develop scalable data processing applications using Python programming language and integrate them with existing systems.
  • Troubleshoot complex issues related to big data processing, ETL processes, and data quality.
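As a small illustration of the PySpark-on-AWS pipeline work listed above, a minimal sketch; the bucket names, schema, and partitioning are made up.

```python
from pyspark.sql import SparkSession, functions as F

# On EMR/Glue-style clusters the S3 connector (s3:// or s3a://) is typically preconfigured.
spark = SparkSession.builder.appName("clickstream-pipeline").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/clickstream/2024-01-01/")   # placeholder bucket

cleaned = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write curated, partitioned Parquet back to S3 for analytics consumers.
(cleaned.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/clickstream/"))
```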

Job Requirements:

  • 4-10 years of experience in Big Data engineering with expertise in PySpark on AWS.
  • Strong understanding of Python programming language and its application in big data processing.
  • Experience working with distributed computing frameworks such as Hadoop or Spark.
  • Proficiency in designing scalable architectures for handling massive datasets.

Big Data Engineer

Chennai, Tamil Nadu | ₹1,200,000 - ₹3,600,000 per year | Citi

Posted today


Job Description

Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing.

Responsibilities:


  • Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.
  • Develop and maintain Kafka-based data pipelines, including designing Kafka Streams applications, setting up Kafka clusters, and ensuring efficient data flow.
  • Create and optimize Spark applications in Scala and PySpark to process large datasets and implement data transformations and aggregations.
  • Integrate Kafka with Spark for real-time processing: build systems that ingest real-time data from Kafka and process it with Spark Streaming or Structured Streaming (see the sketch below).
  • Collaborate with data engineers, data scientists, and DevOps to design and implement data solutions.
  • Tune and optimize Spark and Kafka clusters to ensure high performance, scalability, and efficiency of data processing workflows.
  • Write clean, functional, and optimized code, adhering to coding standards and best practices.
  • Troubleshoot and resolve issues related to Kafka and Spark applications.
  • Maintain documentation for Kafka configurations, Spark jobs, and related processes.
  • Stay updated on technology trends: continuously learn and apply new advancements in functional programming, big data, and related technologies.
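The sketch referenced in the Kafka integration item above: a minimal PySpark Structured Streaming job that reads from Kafka and computes windowed totals. The schema, broker, topic, and checkpoint path are hypothetical, and the console sink stands in for a real one.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-spark-realtime").getOrCreate()

schema = StructType([                          # hypothetical payment-event schema
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

payments = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")    # placeholder broker
         .option("subscribe", "payments")                       # placeholder topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("p"))
         .select("p.*")
)

# Tolerate 10 minutes of late data; aggregate per account over 5-minute windows.
totals = (
    payments.withWatermark("event_time", "10 minutes")
            .groupBy(F.window("event_time", "5 minutes"), "account_id")
            .agg(F.sum("amount").alias("total_amount"))
)

(totals.writeStream
       .outputMode("update")
       .format("console")                      # replace with a real sink in production
       .option("checkpointLocation", "/checkpoints/payments")
       .start()
       .awaitTermination())
```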


Proficiency in:

  • Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).
  • Spark (Scala, Python) for data processing and analysis.
  • Kafka for real-time data ingestion and processing.
  • ETL processes and data ingestion tools.
  • Deep hands-on expertise in PySpark, Scala, and Kafka.

Programming Languages:

  • Scala, Python, or Java for developing Spark applications.
  • SQL for data querying and analysis.

Other Skills:

  • Data warehousing concepts.
  • Linux/Unix operating systems.
  • Problem-solving and analytical skills.
  • Version control systems.

Job Family Group: Technology

Job Family: Applications Development

Time Type: Full time

Most Relevant Skills: Please see the requirements listed above.

Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.


Big Data Engineer

Pune, Maharashtra | ₹800,000 - ₹2,400,000 per year | Funic Tech

Posted today


Job Description

Job Opening: Big Data Engineer

Location: Aundh, Pune (Work From Office)

Job Type: Full-Time | General Shift | Monday – Friday

Key Responsibilities

  • Build and manage highly scalable Big Data pipelines for large structured datasets.
  • Collaborate with Product Management & Engineering leadership to design and implement optimal solutions.
  • Participate in design discussions and contribute to selecting, integrating, and maintaining Big Data frameworks.
  • Develop distributed data processing systems using Spark, Akka, or similar frameworks.
  • Review, optimize, and enhance existing data pipelines to resolve bottlenecks.
  • Act as a Senior Individual Contributor by taking ownership of features/modules.
  • Troubleshoot production issues and ensure seamless data flow.
  • Work with US & India engineering teams to build and scale Java/Scala-based data pipelines.
  • Lead technical excellence in the India engineering team and ensure delivery quality.
  • Follow Agile practices and use tools such as JIRA, Git, Gradle/Maven/SBT for development & issue tracking.

Must-Have Skills

  1. Coding Expertise - Strong programming & database skills.
  2. SQL – Hands-on experience.
  3. Scala, PySpark, Cloud (AWS/Azure).
  4. Data Lake, Snowflake.
  5. Proven experience in building products from scratch.
  6. Strong background in Java or Scala, OOP, data structures, algorithms, profiling, and optimization.
  7. Experience with ETL/Data pipeline tools (Apache NiFi, Airflow, etc.).
  8. Hands-on with version control (Git) and build tools (Gradle, Maven, SBT).
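As an illustration of the ETL/data-pipeline tooling named in item 7 above, a minimal Airflow sketch that chains a Spark job and a validation step. The DAG id, schedule, and spark-submit commands are placeholders, and the Airflow 2.x imports are assumed.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative two-step pipeline: run a Spark transformation, then a data-quality check.
with DAG(
    dag_id="daily_big_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # 'schedule' is the Airflow 2.4+ spelling of schedule_interval
    catchup=False,
) as dag:
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/transform.py",   # placeholder job
    )
    validate = BashOperator(
        task_id="validate_output",
        bash_command="python /opt/jobs/validate.py",                        # placeholder check
    )
    transform >> validate
```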

Preferred Experience

  • 7+ years of overall experience, with 5–7 years of relevant Big Data experience.
  • 3+ years in designing & developing scalable Big Data pipelines.
  • Experience with frameworks like Spark, Akka, Storm, Hadoop.
  • Exposure to cloud platforms (AWS/Azure) for data engineering workloads.

Soft Skills

  • Strong communication skills (verbal & written).
  • Ability to work independently and collaboratively with global teams.
  • High sense of urgency with attention to accuracy & timeliness.
  • Capable of handling client interactions and resolving issues under pressure.
  • Proactive learner with strong problem-solving abilities.

Big Data Engineer

Bengaluru, Karnataka | ₹1,500,000 - ₹2,500,000 per year | Innova ESI

Posted today


Job Description

Role: Big Data Engineer

Experience: 7+ Years

Location: Bangalore

Notice: Immediate Joiners Only

Job Description:

Azure Databricks, Azure Data Factory, Azure Function Apps, Apache Spark, Scala, Java, Apache Kafka, event streaming, and Big Data

Optional Skills: Airflow, Python

Roles & Responsibilities

  • Overall 12 years of experience in the IT industry, including 5 years in Big Data.
  • At least 3 years of experience handling Big Data projects on Azure.
  • At least 1 year of experience in event streaming using Kafka.
  • Hands-on with Scala and Java programming, the Spark framework, and performance optimization.
  • Experience in building reusable frameworks with high quality.
  • Worked on CI/CD (Azure DevOps, GitHub); aware of best practices, code quality, and branching strategies.
  • Ability to automate tasks and deploy production-standard code with unit testing, continuous integration, versioning, etc. (see the sketch after this list).
  • Load transformed data into storage and reporting structures in destinations including data warehouses, high-speed indexes, real-time reporting systems, and analytics applications.
  • Peer reviews, unit testing, and deployment to production.
  • Strong problem-solving and logical reasoning ability.
  • Excellent understanding of all aspects of the Software Development Lifecycle.
  • Enthusiastic to learn new skills and apply them on the project.
  • Ability to work independently at client locations in a client-facing role.
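The sketch referenced in the list above: a small, unit-testable PySpark transformation with a pytest-style test against a local SparkSession. The role centers on Scala and Java; Python (listed as optional) is used here only for brevity, and the schema is hypothetical.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F
from pyspark.sql.window import Window


def deduplicate_events(df: DataFrame) -> DataFrame:
    """Keep one row per event_id, preferring the latest ingestion timestamp."""
    w = Window.partitionBy("event_id").orderBy(F.col("ingested_at").desc())
    return (df.withColumn("rn", F.row_number().over(w))
              .filter("rn = 1")
              .drop("rn"))


# pytest-style unit test against a local SparkSession (hypothetical data).
def test_deduplicate_events():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    df = spark.createDataFrame(
        [("e1", "2024-01-01 10:00:00"),
         ("e1", "2024-01-01 11:00:00"),
         ("e2", "2024-01-01 09:00:00")],
        ["event_id", "ingested_at"],
    )
    assert deduplicate_events(df).count() == 2
```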


Big Data Engineer

Hyderabad, Telangana | ₹1,500,000 - ₹2,500,000 per year | Techno Facts Solutions

Posted today


Job Description

Experience in developing and delivering scalable big data pipelines using Apache Spark and Databricks on AWS.

Position Requirements:

Must Have:

  • Build and maintain scalable data pipelines using Databricks and Apache Spark.
  • Develop and optimize ETL/ELT processes for structured and unstructured data.
  • Knowledge of Lakehouse architecture for efficient data storage, processing, and analytics.
  • Orchestrating ETL/ELT pipelines: design and manage data workflows using Databricks Workflows and the Jobs API.
  • Worked with AWS data services (S3, Lambda, CloudWatch) for seamless integration.
  • Performance optimization: optimize queries using pushdown capabilities and indexing strategies.
  • Worked on data governance with Unity Catalog, security policies, and access controls.
  • Monitor, troubleshoot, and improve Databricks jobs and clusters.
  • Exposure to end-to-end implementation of migration projects to AWS Cloud.
  • AWS and Python expertise with hands-on cloud development.
  • Orchestration: Airflow.
  • Code repositories: Git, GitHub.
  • Strong in writing SQL.
  • Cloud data migration: deep understanding of processes.
  • Strong analytical, problem-solving, and communication skills.

Good to have Knowledge / Skills:

  • Experience in Teradata, DataStage, SSIS.
  • Knowledge of Databricks Delta Live Tables.
  • Knowledge of Delta Lake.
  • Streaming: Kafka, Spark Streaming.
  • CI/CD: Jenkins.
  • IaC & Automation: Terraform for Databricks deployment.
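As an illustration of the Databricks Workflows / Jobs API orchestration noted in the must-have list above, a small sketch that triggers a run of an existing job over the REST API. The workspace URL, token handling, and job id are placeholders, and the Jobs API 2.1 run-now endpoint is assumed.

```python
import os

import requests

# Placeholders: a real workspace URL, a token from a secrets store, and an existing job id.
DATABRICKS_HOST = "https://example.cloud.databricks.com"
TOKEN = os.environ["DATABRICKS_TOKEN"]
JOB_ID = 123


def trigger_job_run(job_id: int) -> int:
    """Start a run of a pre-defined Databricks job and return its run_id."""
    resp = requests.post(
        f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",     # Jobs API 2.1 run-now endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": job_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]


if __name__ == "__main__":
    print("Started run:", trigger_job_run(JOB_ID))
```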

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA

Certification: Amazon Web Services (AWS) certifications (AWS Certified Data Engineer recommended); Databricks Certified Associate Developer for Apache Spark.

Drop resume to Mail id-sailaja.-


Big Data Engineer

Gurugram, Haryana | ₹1,500,000 - ₹2,500,000 per year | MyCareernet

Posted today


Job Description

Company: Indian / Global Digital Organization

Key Skills: PySpark, AWS, Python, Scala, ETL

Roles and Responsibilities:

  • Develop and deploy ETL and data warehousing solutions using Python libraries and Linux bash scripts on AWS EC2, with data stored in Redshift.
  • Collaborate with product and analytics teams to scope business needs, design metrics, and build reports/dashboards.
  • Automate and optimize existing data sets and ETL pipelines for efficiency and reliability.
  • Work with multi-terabyte data sets and write complex SQL queries to support analytics.
  • Design and implement ETL solutions integrating multiple data sources using Pentaho.
  • Utilize Linux/Unix scripting for data processing tasks.
  • Leverage AWS services (Redshift, S3, EC2) for storage, processing, and pipeline automation.
  • Follow software engineering best practices for coding standards, code reviews, source control, testing, and operations.
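A minimal Python sketch of the S3-to-Redshift loading pattern described in the responsibilities above; the cluster endpoint, credentials, table, bucket, and IAM role are placeholders.

```python
import psycopg2

# Placeholder connection details; in practice these come from a secrets manager.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

# Redshift COPY pulls the files directly from S3 using the attached IAM role.
copy_sql = """
    COPY analytics.page_views
    FROM 's3://example-bucket/page_views/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
conn.close()
```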

Skills Required:

Must-Have:

  • Hands-on experience with PySpark for big data processing
  • Strong knowledge of AWS services (Redshift, S3, EC2)
  • Proficiency in Python for data processing and automation
  • Strong SQL skills for working with RDBMS and multi-terabyte data sets

Nice-to-Have:

  • Experience with Scala for distributed data processing
  • Knowledge of ETL tools such as Pentaho
  • Familiarity with Linux/Unix scripting for data operations
  • Exposure to data modeling, pipelines, and visualization

Education:
Bachelor's degree in Computer Science, Information Technology, or a related field


Big Data Engineer

Hyderabad, Telangana | ₹1,200,000 - ₹3,600,000 per year | IDESLABS PRIVATE LIMITED

Posted today


Job Description

We are looking for a skilled Big Data Engineer with 6 to 22 years of experience. The ideal candidate will have expertise in PySpark, Azure Databricks, and workflows. This position is available as a contract role across Pan India.

Roles and Responsibilities

  • Design and develop scalable big data systems using PySpark and Azure Databricks.
  • Implement and manage workflows for efficient data processing.
  • Collaborate with cross-functional teams to integrate data from various sources.
  • Develop and maintain large-scale data pipelines and architectures.
  • Optimize system performance and troubleshoot issues.
  • Ensure data quality and integrity through data validation and testing procedures.
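As a small illustration of the data-validation responsibility above, a PySpark sketch of basic data-quality gates; the table name, key columns, and checks are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.table("curated.customer_orders")   # hypothetical Databricks/Hive table

# Basic data-quality gates: non-empty load and no nulls in key columns.
row_count = df.count()
null_keys = df.filter(F.col("order_id").isNull() | F.col("customer_id").isNull()).count()

if row_count == 0:
    raise ValueError("Validation failed: no rows loaded")
if null_keys > 0:
    raise ValueError(f"Validation failed: {null_keys} rows with null key columns")

print(f"Validation passed: {row_count} rows, no null keys")
```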

Job Requirements

  • Strong experience in big data engineering with PySpark and Azure Databricks.
  • Proficiency in managing and working with large datasets.
  • Experience with workflows and automation tools.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.
  • Ability to work in a fast-paced environment and meet deadlines.
 
