1,218 Big Data jobs in India
Big Data Developer - Java, Big data, Spring
Posted 2 days ago
Job Description
Primary Responsibilities:
- Analyze and investigate issues, providing explanations and interpretations within the area of expertise
- Participate in the scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects, and product stakeholders to understand the scope and design of each deliverable
- Participate in product support activities as needed by the team
- Understand the product architecture and features being built, and propose product improvement ideas and POCs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
- Undergraduate degree or equivalent experience
- Proven experience with the Big Data tech stack
- Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
- Proficient with the Big Data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase)
- Proficient with Unix/Linux ecosystems and shell scripting
- Proven Java, Kafka, Spark, Big Data, and Azure skills
- Proven analytical, problem-solving, and communication skills
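The Spark and Hadoop skills listed above all build on the same map/reduce aggregation pattern. As a rough, framework-free sketch of that pattern (plain Python, no Spark dependency; the word-count task is purely illustrative, not part of the posting):

```python
from collections import Counter
from functools import reduce

def map_phase(lines):
    # Map step: emit per-line word counts, analogous to Spark's flatMap/map
    return [Counter(line.split()) for line in lines]

def reduce_phase(partials):
    # Reduce step: merge partial counts, analogous to reduceByKey across partitions
    return reduce(lambda a, b: a + b, partials, Counter())

lines = ["big data big", "data pipelines"]
counts = reduce_phase(map_phase(lines))
```

In Spark the same shape runs distributed across partitions; the local version is only meant to show the aggregation structure.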
Big Data Engineer

Posted 1 day ago
Job Description
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 3 to 5 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Additional Job Description
We are looking for a Big Data Engineer to work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
- Selecting and integrating any Big Data tools and frameworks required to provide the requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures
Skills and Qualifications
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some exposure to machine learning
- Proficiency with Hadoop v2, MapReduce, HDFS, Pyspark, Spark
- Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks
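The Lambda Architecture mentioned above combines a periodically recomputed batch view with an incremental speed layer, merged at query time. A minimal illustrative sketch (the view contents and key names are hypothetical, not from any specific system):

```python
# Lambda Architecture sketch: a batch view (recomputed periodically) is
# merged with a speed-layer view (recent, incremental) at query time.
batch_view = {"clicks:page_a": 1000, "clicks:page_b": 250}   # from the batch layer
speed_view = {"clicks:page_a": 12, "clicks:page_c": 3}       # events not yet batch-processed

def query(key):
    # Serving layer: the speed layer covers data the batch layer
    # has not caught up with yet.
    return batch_view.get(key, 0) + speed_view.get(key, 0)
```

The usual trade-off the posting alludes to: the batch layer gives accuracy and reprocessability, the speed layer gives freshness, at the cost of maintaining two code paths.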
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.
Big Data Developer
Posted 1 day ago
Job Description
Experience: 4 - 7 years
Location: Bangalore
Job Description:
- Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
- Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
- Experience with code deployment and associated processes
- Proven ability to write complex SQL queries to extract business-critical insights
- Hands-on experience in streaming data processing
- Familiarity with machine learning concepts is an added advantage
- Experience with NoSQL databases
- Good understanding of Test-Driven Development (TDD) methodologies
- Demonstrated flexibility and eagerness to learn new technologies
Roles & Responsibilities
- Design and implement solutions for problems arising out of large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support
- Design, build & maintain efficient, reusable & reliable code
- Test implementation, troubleshoot & correct problems
- Capable of working both as an individual contributor and within a team
- Ensure high quality software development with complete documentation and traceability
- Fulfil organizational responsibilities (sharing knowledge & experience with other teams/ groups)
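The unit-testing and TDD expectations above pair naturally with keeping row-level transformation logic in plain functions that can be tested without a SparkSession, then applying them via a UDF or DataFrame operation. A hedged sketch (the schema and the paise-to-rupees conversion rule are invented for illustration):

```python
def normalize_amount(row):
    """Convert amounts in minor units (paise) to rupees; drop invalid rows.

    Pure function: no Spark dependency, so it is trivially unit-testable.
    """
    if row.get("amount_paise") is None or row["amount_paise"] < 0:
        return None  # data-quality rule: reject missing/negative amounts
    return {**row, "amount_inr": row["amount_paise"] / 100.0}

# In TDD style, tests assert directly on the pure function:
good = normalize_amount({"id": 1, "amount_paise": 2550})
bad = normalize_amount({"id": 2, "amount_paise": -5})
```

In the actual pipeline the same function could be wrapped with `pyspark.sql.functions.udf` or applied in a `mapPartitions` call; the design choice is keeping business logic out of the Spark glue code.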
Big Data Developer
Posted 2 days ago
Job Description
Experience working with the Spark framework; good understanding of core concepts, optimizations, and best practices
Good hands-on experience writing code in PySpark; should understand design principles and OOP
Good experience writing complex queries to derive business-critical insights
Hands-on experience with stream data processing
Understanding of the Data Lake vs. Data Warehouse concepts
Knowledge of machine learning would be an added advantage
Experience with NoSQL technologies – MongoDB, DynamoDB
Good understanding of test-driven development
Flexibility to learn new technologies
Roles & Responsibilities:
Design and implement solutions for problems arising out of large-scale data processing
Attend/drive various architectural, design and status calls with multiple stakeholders
Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support
Design, build & maintain efficient, reusable & reliable code
Test implementation, troubleshoot & correct problems
Capable of working both as an individual contributor and within a team
Ensure high quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/ groups)
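The stream data processing experience this posting asks for usually centers on windowed aggregation. A minimal sketch of tumbling-window counting in plain Python (no streaming engine; the event data and window size are illustrative):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    # Assign each (timestamp, key) event to a fixed-size window and count.
    # This is the core idea behind windowed aggregations in stream
    # processors, shown here without any engine.
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "login"), (30, "login"), (65, "login"), (70, "click")]
result = tumbling_window_counts(events)
```

Real engines add what this sketch omits: event-time vs. processing-time handling, late-data watermarks, and incremental state rather than a full pass.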
Big Data Engineer
Posted 2 days ago
Job Description
Big Data Engineer (PySpark)
Location: Pune/Nagpur (WFO)
Experience: 8 - 12 Years
Employment Type: Full-time
Job Overview
We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high performance and reliability.
Key Responsibilities
- Design, develop, and maintain data pipelines using PySpark and related Big Data technologies.
- Work with HDFS, Hive, Sqoop, and other tools in the Hadoop ecosystem.
- Write efficient HiveQL and SQL queries to handle large-scale datasets.
- Perform performance tuning and optimization of distributed data systems.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality solutions.
- Manage and schedule workflows using Apache Airflow or Oozie.
- Troubleshoot and resolve issues in data pipelines to ensure reliability and accuracy.
Required Skills
- Proven experience in Big Data Engineering with a focus on PySpark.
- Strong knowledge of HDFS, Hive, Sqoop, and related tools.
- Proficiency in SQL/HiveQL for large datasets.
- Expertise in performance tuning and optimization of distributed systems.
- Familiarity with Agile methodology and collaborative team practices.
- Experience with workflow orchestration tools (Airflow/Oozie).
- Strong problem-solving, analytical, and communication skills.
Good to Have
- Knowledge of data modeling and data warehousing concepts.
- Exposure to DevOps practices and CI/CD pipelines for data engineering.
- Experience with other Big Data frameworks such as Spark Streaming or Kafka.
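Orchestrators like Airflow and Oozie fundamentally run tasks in dependency order. A sketch of that ordering using the Python standard library's graphlib (the task names form a hypothetical pipeline, not one from the posting):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, like upstream
# dependencies in an Airflow DAG.
dag = {
    "ingest_hdfs": set(),
    "hive_transform": {"ingest_hdfs"},
    "quality_checks": {"hive_transform"},
    "publish": {"quality_checks"},
}

# static_order() yields a valid execution order: every task appears
# after all of its dependencies.
run_order = list(TopologicalSorter(dag).static_order())
```

Airflow adds scheduling, retries, and backfills on top of this core ordering; the sketch only shows the dependency-resolution part.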
Big Data Developer
Posted 2 days ago
Job Description
Job Description for Big Data or Cloud Engineer
Position Summary:
We are looking for candidates with hands-on experience in Big Data or Cloud technologies.
Must-Have Technical Skills
- 2-4 years of experience
- Expertise and hands-on experience in Python – Must Have
- Expert knowledge of SparkSQL/Spark DataFrames – Must Have
- Good knowledge of SQL – Good to Have
- Good knowledge of shell scripting – Good to Have
- Good knowledge of one of the workflow engines such as Oozie or Autosys – Good to Have
- Good knowledge of Agile development – Good to Have
- Good knowledge of Cloud – Good to Have
- Passionate about exploring new technologies – Good to Have
- Automation approach – Good to Have
Roles & Responsibilities
Selected candidate will work on Data Warehouse modernization projects and will be responsible for the following activities.
- Develop programs/scripts in Python/Java + SparkSQL/Spark Dataframe or Python/Java + Cloud native SQL like RedshiftSQL/SnowSQL etc.
- Validation of scripts
- Performance tuning
- Data ingestion from source to target platform
- Job orchestration
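For the script-validation activity above, one low-cost approach (an assumption here, not something the posting prescribes) is to smoke-test query logic against a small in-memory fixture before running it on the target platform. Below, the stdlib's sqlite3 stands in for SparkSQL/RedshiftSQL, and the table and columns are invented for illustration:

```python
import sqlite3

# Build a tiny in-memory fixture to exercise the query logic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "south", 100.0), (2, "south", 50.0), (3, "north", 75.0)])

# The aggregation under test; on the real platform the same logic would
# run as SparkSQL or Redshift SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
```

SQL dialects differ, so this only catches logic errors, not platform-specific syntax; it is a cheap first gate, not a replacement for testing on the target engine.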
Big Data Developer
Posted 2 days ago
Job Description
Location : Indore, Pune, Bangalore, Gurugram, Noida
Notice period : Can join Immediately or Currently serving ( 30- 45 days )
- 3-6 years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and SparkSQL)
- Hands-on experience using Cloud Platform-provided Big Data technologies (i.e. Glue, Lambda, Redshift, S3, etc.)
- Good hands-on experience with Python
- Good understanding of SQL and data warehouse tools such as Redshift
- Strong analytical, problem-solving, data analysis, and research skills
- Demonstrable ability to think outside the box and not be dependent on readily available tools
- Excellent communication, presentation, and interpersonal skills are a must
- Orchestration experience with Step Functions/MWAA and any job scheduler
Roles & Responsibilities
- Develop efficient ETL pipelines through Spark or Glue
- Able to implement business use cases using Python and PySpark
- Able to write ELT/ETL jobs on AWS (Crawler, Glue Job)
- Participate in peer code reviews to ensure our applications comply with best practices
- Gather requirements to define AWS services and accordingly implement different security features
- Provide estimates for development tasks
- Can perform integration testing of developed infrastructure.
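The ETL pipeline work described above follows an extract-transform-load shape. A stdlib-only sketch of that shape (the field names and the data-quality rule are illustrative; a real job would use PySpark or the Glue APIs):

```python
import csv
import io
import json

# Hypothetical source data; in a Glue job this would come from S3/a Crawler.
raw = "user_id,amount\n1,100\n2,-5\n3,40\n"

def extract(text):
    # Extract: parse CSV rows into dicts
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: cast types and drop rows failing a data-quality rule
    return [{"user_id": int(r["user_id"]), "amount": int(r["amount"])}
            for r in rows if int(r["amount"]) >= 0]

def load(rows):
    # Load: serialize to JSON Lines (stand-in for writing to the target store)
    return "\n".join(json.dumps(r) for r in rows)

output = load(transform(extract(raw)))
```

Keeping the three stages as separate functions is what makes the integration testing mentioned above tractable: each stage can be exercised on a fixture independently before the pipeline runs end to end.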