40,292 Data Engineer Opportunities jobs in India

Big Data Engineer

Hyderabad, Telangana ₹500000 - ₹1200000 Y Artech

Posted today


Job Description

Experience: 4-6 Years (Contract to Hire)

Work Location: Chennai, TN | Bangalore, KA | Hyderabad, TS

Skills Required: Big Data and Hadoop ecosystems; PySpark

Job Description:

  • Work as a developer in Big Data, Hadoop, or data warehousing tools and cloud computing.
  • Work on Hadoop, Hive SQL, Spark, and Big Data ecosystem tools.
  • Experience working with teams in a complex organization involving multiple reporting lines.
  • Strong functional and technical knowledge to deliver what is required; the candidate should be well acquainted with banking terminology.
  • Strong DevOps and Agile development framework knowledge.
  • Create Scala/Spark jobs for data transformation and aggregation.
  • Experience with stream-processing systems like Storm, Spark Streaming, and Flink.

This advertiser has chosen not to accept applicants from your region.

Big Data Engineer

Chennai, Tamil Nadu ₹1200000 - ₹3600000 Y Citi

Posted today


Job Description

Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing.

Responsibilities:


Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.

Develop and maintain Kafka-based data pipelines: This includes designing Kafka Streams, setting up Kafka Clusters, and ensuring efficient data flow.

Create and optimize Spark applications using Scala and PySpark: Leverage these languages to process large datasets and implement data transformations and aggregations.

Integrate Kafka with Spark for real-time processing: Build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming.

Collaborate with data teams: This includes data engineers, data scientists, and DevOps, to design and implement data solutions.

Tune and optimize Spark and Kafka clusters: Ensuring high performance, scalability, and efficiency of data processing workflows.

Write clean, functional, and optimized code: Adhering to coding standards and best practices.

Troubleshoot and resolve issues: Identifying and addressing any problems related to Kafka and Spark applications.

Maintain documentation: Creating and maintaining documentation for Kafka configurations, Spark jobs, and other processes.

Stay updated on technology trends: Continuously learning and applying new advancements in functional programming, big data, and related technologies.
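The Kafka-to-Spark integration above typically relies on Structured Streaming's windowed aggregations (`groupBy(window(...), key).count()` in PySpark). As a language-agnostic illustration of the underlying pattern, here is a minimal pure-Python sketch of a tumbling-window count; the event names are hypothetical, and Spark handles this natively at scale.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences per key -- the same aggregation shape that a
    windowed groupBy/count produces in Spark Structured Streaming."""
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event to the start of its tumbling window.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical stream of (epoch-second, event-type) pairs, e.g. consumed from Kafka.
events = [(0, "login"), (5, "login"), (12, "click"), (13, "login")]
print(tumbling_window_counts(events, 10))
# {(0, 'login'): 2, (10, 'click'): 1, (10, 'login'): 1}
```

In a real pipeline, Spark additionally handles late data via watermarks and incremental state, which this sketch omits.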


Proficiency in:

Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).

Spark (Scala, Python) for data processing and analysis.

Kafka for real-time data ingestion and processing.

ETL processes and data ingestion tools.

Deep hands-on expertise in PySpark, Scala, and Kafka.

Programming Languages:

Scala, Python, or Java for developing Spark applications.

SQL for data querying and analysis.

Other Skills:

Data warehousing concepts.

Linux/Unix operating systems.

Problem-solving and analytical skills.

Version control systems

-

Job Family Group:

Technology

-

Job Family:

Applications Development

-

Time Type:

Full time

-

Most Relevant Skills

Please see the requirements listed above.

-

Other Relevant Skills

For complementary skills, please see above and/or contact the recruiter.

-

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi .

View Citi's EEO Policy Statement and the Know Your Rights poster.


Big Data Engineer

₹4800000 - ₹6000000 Y Artech

Posted today


Job Description

Job Title: Data Engineer (Python, PySpark, PL/SQL)

Work Location: Chennai, Tamil Nadu

Experience Required: 4 to 6 Years

Job Description:

We are seeking a skilled and motivated Data Engineer with 4-6 years of experience to join our team in Chennai. The ideal candidate will have hands-on experience with Python, PySpark, and PL/SQL, along with a strong understanding of Big Data technologies and cloud-based data platforms, especially Google Cloud Platform (GCP).

Key Responsibilities:

  • Design, develop, and optimize data pipelines using Python and PySpark.
  • Write efficient SQL and PL/SQL queries for data extraction, transformation, and analysis.
  • Work with GCP services, particularly BigQuery, for data warehousing and processing tasks.
  • Collaborate with data analysts, engineers, and other stakeholders to understand data requirements and deliver scalable solutions.
  • Ensure data quality, consistency, and governance across the pipeline.
  • Troubleshoot data issues and optimize performance of data workflows.
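The SQL extraction-and-aggregation work described above follows the same shape across engines. This sqlite3 sketch (table and column names are hypothetical; BigQuery and PL/SQL differ in syntax details) illustrates the pattern of indexing the filter/group column and aggregating per group.

```python
import sqlite3

# In-memory database standing in for a warehouse table (names hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "south", 120.0), (2, "south", 80.0), (3, "north", 200.0)],
)
# An index on the grouping/filter column speeds extraction queries; BigQuery
# achieves a similar effect with partitioning and clustering instead of indexes.
conn.execute("CREATE INDEX idx_orders_region ON orders(region)")

# Extraction + aggregation: total order amount per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 200.0), ('south', 200.0)]
```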

Essential Skills:

  • Proficiency in Python and PySpark for data engineering and ETL processes.
  • Strong expertise in PL/SQL and SQL query optimization.
  • Hands-on experience with Google Cloud Platform (GCP), especially BigQuery.
  • Knowledge of Big Data concepts and tools.
  • Ability to work in a collaborative and fast-paced environment.

Preferred Skills:

  • Experience with data orchestration tools such as Apache Airflow.
  • Familiarity with CI/CD practices and version control systems like Git.
  • Understanding of data modeling and data warehousing principles.

Big Data Engineer

₹600000 - ₹1800000 Y Brillio

Posted today


Job Description

Job Title: Big Data Engineer

Role Overview

We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
  • Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
  • Write efficient, high-performance SQL queries to extract, transform, and load data.
  • Develop reusable data frameworks and utilities in Python.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
  • Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.

Must-Have Skills

  • Strong hands-on experience with Hive and SQL for querying and data transformation.
  • Proficiency in Python for data manipulation and automation.
  • Expertise in Apache Spark (batch and streaming).
  • Experience working with Kafka for streaming data pipelines.

Good-to-Have Skills

  • Experience with workflow orchestration tools (Airflow, Oozie, etc.).
  • Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
  • Familiarity with CI/CD pipelines and version control (Git).

Big Data Engineer

Hyderabad, Telangana ₹1500000 - ₹2500000 Y Techno Facts Solutions

Posted today


Job Description

Experience in developing and delivering scalable big data pipelines using Apache Spark and Databricks on AWS.

Position Requirements:

Must Have:

  • Build and maintain scalable data pipelines using Databricks and Apache Spark.
  • Develop and optimize ETL/ELT processes for structured and unstructured data.
  • Knowledge of Lakehouse architecture for efficient data storage, processing, and analytics.
  • Orchestrate ETL/ELT pipelines: design and manage data workflows using Databricks Workflows and the Jobs API.
  • Experience with AWS data services (S3, Lambda, CloudWatch) for seamless integration.
  • Performance optimization: optimize queries using pushdown capabilities and indexing strategies.
  • Experience with data governance using Unity Catalog, security policies, and access controls.
  • Monitor, troubleshoot, and improve Databricks jobs and clusters.
  • Exposure to end-to-end implementation of migration projects to AWS Cloud.
  • AWS and Python expertise with hands-on cloud development.
  • Orchestration: Airflow.
  • Code repositories: Git, GitHub.
  • Strong SQL skills.
  • Cloud data migration: deep understanding of the process.
  • Strong analytical, problem-solving, and communication skills.

Good to Have Knowledge / Skills:

  • Experience in Teradata, DataStage, SSIS.
  • Knowledge of Databricks Delta Live Tables.
  • Knowledge of Delta Lake.
  • Streaming: Kafka, Spark Streaming.
  • CI/CD: Jenkins.
  • IaC & automation: Terraform for Databricks deployment.

Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA

Certifications: Amazon Web Services (AWS) certifications (AWS Certified Data Engineer recommended); Databricks Certified Associate Developer for Apache Spark.

Drop resume to Mail id-sailaja.-


Big Data Engineer

₹1500000 - ₹2500000 Y Ltimindtree

Posted today


Job Description

Job Description

  • Experience in the Scala programming language
  • Experience in Big Data technologies, including Spark (Scala) and Kafka
  • Good understanding of organizational strategy, architecture patterns (microservices, event-driven), and technology choices, with the ability to coach the team in executing in alignment with these guidelines
  • Able to apply organizational technology patterns effectively in projects and make recommendations on alternate options
  • Hands-on experience working with large volumes of data, including different patterns of data ingestion, processing (batch and real-time), movement, storage, and access, both internal and external to the BU, with the ability to make independent decisions within the scope of a project
  • Good understanding of data structures and algorithms
  • Able to test, debug, and fix issues within established SLAs
  • Able to design software that is easily testable and observable
  • Understands how the team's goals fit a business need
  • Able to identify business problems at the project level and provide solutions
  • Understands data access patterns, streaming technology, data validation, data performance, and cost optimization
  • Strong SQL skills
  • ETL: Talend preferred; any other ETL tool acceptable
  • Experience with Linux OS at the user level
  • Python or R programming skills (good to have, not mandatory)
  • Mandatory certification (any one):
  • Cloudera CCA Spark and Hadoop Developer (CCA175)
  • Databricks Certified Developer for Apache Spark 2.x
  • Hortonworks HDP Certified Apache Spark Developer

Big Data Engineer

Bengaluru, Karnataka ₹1200000 - ₹2400000 Y Deqode

Posted today


Job Description

Profile: Big Data Engineer (System Design)

Experience: 5+ years

Location: Bangalore

Work Mode: Hybrid

About The Role

We're looking for an experienced Big Data Engineer with system design expertise to architect and build scalable data pipelines and optimize big data solutions.

Key Responsibilities

  • Design, develop, and maintain data pipelines and ETL processes using Python, Hive, and Spark
  • Architect scalable big data solutions with strong system design principles
  • Build and optimize workflows using Apache Airflow
  • Implement data modeling, integration, and warehousing solutions
  • Collaborate with cross-functional teams to deliver data solutions

Must-Have Skills

  • 5+ years as a Data Engineer with Python, Hive, and Spark
  • Strong hands-on experience with Java
  • Advanced SQL and Hadoop experience
  • Expertise in Apache Airflow
  • Strong understanding of data modeling, integration, and warehousing
  • Experience with relational databases (PostgreSQL, MySQL)
  • System design knowledge
  • Excellent problem-solving and communication skills

Good to Have

  • Docker and containerization experience
  • Knowledge of Apache Beam, Apache Flink, or similar frameworks
  • Cloud platform experience.

Skills: Apache Hive, Apache Spark, Python, SQL, Hadoop, and systems design


Big Data Engineer

Hyderabad, Telangana ₹800000 - ₹2400000 Y RiskInsight Consulting Pvt Ltd

Posted today


Job Description

Responsibilities
  • Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases.
  • Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis.
  • Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions.
  • Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques.
  • Optimize data processing workflows for maximum performance and efficiency.
  • Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems.
  • Implement best practices for data governance, security, and compliance.
  • Document technical designs, processes, and procedures to support knowledge sharing across teams.
Requirements
  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 4+ years of experience as a Big Data Engineer or in a similar role.
  • Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks.
  • Extensive experience with programming languages such as Python, Scala, or Java.
  • Knowledge of data modeling and data warehousing concepts.
  • Familiarity with NoSQL databases like Cassandra or MongoDB.
  • Proficient in SQL for data querying and analysis.
  • Strong analytical and problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Ability to work independently and effectively in a fast-paced environment.
Benefits

Competitive salary and benefits package.

Opportunity to work on cutting-edge technologies and solve complex challenges.

Dynamic and collaborative work environment with opportunities for growth and career advancement.

Regular training and professional development opportunities.


Big Data Engineer

Bengaluru, Karnataka ₹2000000 - ₹2500000 Y Techno Comp

Posted today


Job Description

JOB RESPONSIBILITY

· Experience with Big Data technologies will be a plus (Hadoop, Spark, Kafka, HBase, etc.)

· Write SQL queries to validate the dashboard output

· Working experience with database environment - understanding relational database structure and hands-on SQL knowledge to extract/manipulate data for variance testing.

· Performing code reviews and pair programming

· Supporting and enhancing current applications

· Design, develop, test, and implement the application investigate and resolve complex issues while supporting existing applications.

QUALIFICATION

· B.Tech /B.E /MCA

EXPERIENCE

· 6+ years' experience in AWS services: RDS, AWS Lambda, AWS Glue, Apache Spark, Kafka, Spark Streaming, Scala, Hive, etc.

· 6+ years' experience with SQL and NoSQL databases such as MySQL, Postgres, Elasticsearch

· 6+ years' experience with Spark programming paradigms (batch and stream processing)

· 6+ years' experience in Java and Scala; familiarity with a scripting language like Python as well as Unix/Linux shells

· 6+ years' experience with strong analytical skills and advanced SQL knowledge, indexing, and query optimization techniques

SKILLS AND COMPETENCIES

· Profound understanding of Big Data core concepts and technologies: Apache Spark, Kafka, Spark Streaming, Scala, Hive, AWS, etc.

· Solid experience with and understanding of core AWS services such as IAM, CloudFormation, EC2, S3, EMR, Glue, Lambda, Athena, and Redshift.

· Experience in system analysis, design, development, and implementation of data ingestion pipeline in AWS.

· Programming experience with Python/Scala, Shell scripting.

· Experience with DevOps and Continuous Integration/Delivery (CI/CD) concepts and tools such as Bitbucket and Bamboo.

· Good understanding of business and operational processes.

· Capable of Problem / issue resolution, capable of thinking out of the box.

Job Type: Full-time

Benefits:

  • Provident Fund
  • Work from home

Work Location: In person


Big Data Engineer

Pune, Maharashtra ₹1500000 - ₹2500000 Y Talent Sketchers

Posted today


Job Description

Designation: Big Data Engineer

Experience: 4+ Years

Work Mode: Remote Opportunity

Notice Period: Immediate Joiners/ Serving Notice Period

Job Description:

Role: Big Data Engineer

This Data Engineer will be engaged in data-science-related research and in software application development and engineering duties related to our enterprise-grade Wi-Fi technology, providing unprecedented visibility into the user experience. The Data Engineer will collaborate with other engineers and product managers to enhance Wi-Fi AIOps models. This position requires experience dealing with the huge amounts of data generated by communication protocols. The Data Engineer will use his/her knowledge of wireless communication networks, machine learning, and software engineering to develop and implement scalable algorithms that process large amounts of streaming data to classify AIOps Insights.

Job Duties:

  • 4+ years of experience in software development with the big data technologies listed below
  • Design and implement machine learning solutions that process terabytes of streaming data to develop AIOps products for Wi-Fi networks, detect problems, and classify Insights
  • Knowledge of and experience with big data stream-processing platforms such as Apache Storm and Apache Flink, and NoSQL storage such as Cassandra, Elasticsearch, and Redis
  • Analytical and programming skills, e.g., Hadoop, Java, and Python
  • Good knowledge of and experience with cloud-based CI/CD tools, and working with cloud DevOps teams to collect stats and create monitors for our data processing pipelines
  • Good understanding of Python web services and ability to develop good-quality code
  • Experience with containerizing applications, e.g., with Docker

Sincerely,

Sonia TS

 
