987 Big Data jobs in India
Big Data
Posted today
Job Description
**Big Data Technologies: Apache Hive, Apache Spark**
**PUNE**
**4 to 6 years**
Big Data Developer - Java, Big Data, Spring
Posted 14 days ago
Job Description
Primary Responsibilities:
- Analyze and investigate issues
- Provide explanations and interpretations within the area of expertise
- Participate in the scrum process and deliver stories/features according to the schedule
- Collaborate with the team, architects, and product stakeholders to understand the scope and design of each deliverable
- Participate in product support activities as needed by the team
- Understand the product architecture and the features being built, and come up with product improvement ideas and POCs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to,
Big Data Developer
Posted today
Job Description
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 0-2 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Key Skills:**
+ Strong programming skills in **PySpark** in a Big Data environment.
+ Familiarity with big data processing tools and techniques.
+ Experience with the Hadoop ecosystem, including **Hive**, **HDFS**, Sqoop, **Spark**, Impala, and Scala.
+ Well versed in shell scripting and the Autosys scheduler.
+ Good understanding of distributed systems.
+ Familiarity with data warehouse concepts.
+ Experience with streaming data platforms.
+ Excellent analytical and problem-solving skills.
+ Experience writing complex **SQL** queries (a minimal sketch follows this list).
+ Knowledge of data modeling and data design is essential.
+ Independent and resourceful in dealing with risks/issues and resolving them in a timely manner.
+ Excellent communication and articulation skills.
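As a rough illustration of the PySpark and SQL skills listed above, here is a minimal sketch. It assumes a Spark 3.x cluster with Hive support enabled; the database, table, and column names are hypothetical.

```python
from pyspark.sql import SparkSession

# Hypothetical job: summarize a Hive-managed table with a window-function query.
spark = (
    SparkSession.builder
    .appName("daily_txn_summary")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a Hive table from the warehouse (HDFS-backed in a typical Hadoop setup).
spark.table("finance_db.transactions").createOrReplaceTempView("transactions")

# A more complex SQL query: rank each account's transactions per day.
daily_top = spark.sql("""
    SELECT account_id,
           txn_date,
           amount,
           ROW_NUMBER() OVER (PARTITION BY account_id, txn_date
                              ORDER BY amount DESC) AS rank_in_day
    FROM transactions
""").where("rank_in_day <= 3")

# Persist the result for downstream jobs (e.g., a step scheduled through Autosys).
daily_top.write.mode("overwrite").saveAsTable("finance_db.daily_top_transactions")

spark.stop()
```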
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.
Big Data Developer
Posted today
Job Description
Years of experience: 4 - 7 years
Location: Bangalore, Gurgaon
Job Description:
- Experience working with the Spark framework, with a good understanding of its core concepts, optimizations, and best practices
- Good hands-on experience writing code in PySpark; should understand design principles and OOP
- Good experience writing complex queries to derive business-critical insights
- Hands-on experience with stream data processing
- Understanding of the data lake vs. data warehouse concepts
- Knowledge of machine learning would be an added advantage
- Experience with NoSQL technologies – MongoDB, DynamoDB
- Good understanding of test-driven development (see the sketch after this list)
- Flexible to learn new technologies
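As a rough sketch of the design-principle and test-driven-development expectations above, the example below keeps a PySpark transformation as a small, pure function so it can be unit-tested against a local SparkSession; all names are hypothetical.

```python
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_revenue(orders: DataFrame) -> DataFrame:
    """Reusable, side-effect-free transformation: revenue = quantity * unit_price."""
    return orders.withColumn("revenue", F.col("quantity") * F.col("unit_price"))


def test_add_revenue():
    # pytest-style test on a local SparkSession, so the logic is verified without a cluster.
    spark = SparkSession.builder.master("local[2]").appName("tdd-demo").getOrCreate()
    df = spark.createDataFrame([(2, 10.0), (3, 5.0)], ["quantity", "unit_price"])
    assert {r["revenue"] for r in add_revenue(df).collect()} == {20.0, 15.0}
    spark.stop()
```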
Roles & Responsibilities:
- Design and implement solutions for problems arising out of large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Take end-to-end ownership of all assigned tasks, including development, testing, deployment, and support
- Design, build & maintain efficient, reusable & reliable code
- Test implementation, troubleshoot & correct problems
- Capable of working both as an individual contributor and within a team
- Ensure high quality software development with complete documentation and traceability
- Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)
Big Data Specialist
Posted today
Job Description
Role Overview
We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
- Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
- Write efficient, high-performance SQL queries to extract, transform, and load data.
- Develop reusable data frameworks and utilities in Python.
- Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
- Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.
Must-Have Skills
- Strong hands-on experience with Hive and SQL for querying and data transformation.
- Proficiency in Python for data manipulation and automation.
- Expertise in Apache Spark (batch and streaming).
- Experience working with Kafka for streaming data pipelines (see the sketch after this list).
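As a rough illustration of the Kafka and Spark Streaming items above, here is a minimal Structured Streaming sketch. It assumes Spark 3.x submitted with the spark-sql-kafka package available; the broker address, topic name, and paths are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream_ingest").getOrCreate()

# Subscribe to a Kafka topic; the broker and topic names are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the value to a string and keep the event timestamp.
events = raw.select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))

# Land the stream as Parquet micro-batches; the checkpoint makes the job restartable.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/landing/clickstream")
    .option("checkpointLocation", "/checkpoints/clickstream")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```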
Good-to-Have Skills
- Experience with workflow orchestration tools (e.g., Airflow).
- Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
- Familiarity with CI/CD pipelines and version control (Git).
Big Data Developer
Posted today
Job Description
Job Title: Big Data Developer
Work Location: Hyderabad, Telangana (TG), India
Experience Range: 6–8 Years
Skill Area: Digital – Big Data and Hadoop Ecosystems
Job Description:
We are looking for an experienced and results-driven Big Data Developer to join our dynamic team in Hyderabad. The ideal candidate will have a strong background in designing, developing, and maintaining scalable big data systems and data processing frameworks.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient big data architectures and data pipelines.
- Implement and optimize data processing frameworks using technologies such as Hadoop, Apache Spark, and Hive.
- Develop and support robust ETL (Extract, Transform, Load) processes to ensure the availability and accuracy of data for downstream applications (see the sketch after this list).
- Collaborate with data engineers, data scientists, and business stakeholders to understand requirements and deliver data-driven solutions.
- Ensure performance tuning, optimization, and troubleshooting of big data applications and workflows.
- Maintain data integrity and ensure adherence to data governance and security standards.
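As a rough sketch of the ETL and performance-tuning responsibilities above, the example below reads from Hive, broadcasts a small dimension table to avoid a shuffle join, and writes a partitioned result. The table names and tuning values are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("orders_daily_etl")
    .enableHiveSupport()
    .config("spark.sql.shuffle.partitions", "200")  # one common tuning knob
    .getOrCreate()
)

orders = spark.table("sales_db.orders")  # large fact table
stores = spark.table("sales_db.stores")  # small dimension table

# Transform: broadcast the small dimension so the join avoids an expensive shuffle.
enriched = (
    orders.join(broadcast(stores), "store_id")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: partition by date so downstream Hive/Spark queries can prune partitions.
enriched.write.mode("overwrite").partitionBy("order_date").saveAsTable(
    "sales_db.daily_revenue_by_region"
)

spark.stop()
```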
Essential Skills:
- Strong hands-on experience with Hadoop, Apache Spark, and Hive.
- Proficiency in designing and implementing large-scale data processing systems.
- Experience in working with large datasets and distributed data processing frameworks.
- Expertise in writing efficient SQL and working with data warehousing concepts.
Desirable Skills:
- Experience in building and maintaining ETL pipelines.
- Familiarity with data modeling, data quality, and data validation practices.
- Exposure to cloud platforms like AWS, Azure, or GCP is a plus.
- Strong analytical and problem-solving skills with attention to detail.
Educational Qualification:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Technology, or a related field.
Big Data Developer
Posted 5 days ago
Job Description
Job Title: Big Data Developer (Java/Python)
Location: Chennai, Bangalore, Gurugram
Experience Required: 5+ years (5 to 15 yrs)
Joining: Immediate or Early Joiners Preferred
Employment Type: Full-time
Job Summary:
We are looking for a passionate and experienced Big Data Developer with expertise in either Java or Python to join our dynamic team. The ideal candidate will have a strong background in designing and implementing large-scale data processing systems and a solid understanding of modern data technologies. Candidates who are available to join immediately or at short notice will be given preference.
Key Responsibilities:
- Design, develop, and maintain scalable Big Data solutions using Hadoop ecosystem, Spark, and other distributed frameworks.
- Build and optimize data pipelines for batch and real-time data processing.
- Collaborate with data scientists, analysts, and other developers to integrate data-driven solutions into production.
- Write efficient, testable, and reusable code using Java or Python.
- Work closely with DevOps teams to deploy and monitor applications on cloud/on-prem infrastructure.
- Ensure data integrity, security, and performance tuning of large-scale data systems.
Technical Skills:
- Strong programming skills in Java or Python (knowledge of both is a plus).
- Hands-on experience with Big Data technologies such as Hadoop, Hive, HDFS, Spark, and Kafka (see the sketch after this list).
- Familiarity with data modeling, ETL pipelines, and data warehousing concepts.
- Good understanding of SQL and NoSQL databases.
- Experience with cloud platforms such as AWS, GCP, or Azure is a plus.
- Knowledge of CI/CD tools and containerization (Docker/Kubernetes) is desirable.
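As a rough sketch of the reusable, deployable code these requirements point at, the example below parameterizes a PySpark batch job so one artifact can be promoted through CI/CD and launched by spark-submit from a container or scheduler; the argument names and paths are hypothetical.

```python
import argparse

from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def parse_args():
    # Hypothetical parameters so the same job runs unchanged across environments.
    parser = argparse.ArgumentParser(description="Generic batch ingest job")
    parser.add_argument("--source-path", required=True)
    parser.add_argument("--target-table", required=True)
    parser.add_argument("--run-date", required=True)
    return parser.parse_args()


def main():
    args = parse_args()
    spark = (
        SparkSession.builder
        .appName(f"ingest_{args.target_table}")
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read raw Parquet files, stamp each row with the run date, and append to Hive.
    df = spark.read.parquet(args.source_path).withColumn("run_date", F.lit(args.run_date))
    df.write.mode("append").saveAsTable(args.target_table)

    spark.stop()


if __name__ == "__main__":
    main()
```

A typical launch would look like `spark-submit ingest_job.py --source-path /raw/orders --target-table sales_db.orders_raw --run-date 2024-01-01`.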
Required Qualifications:
- Bachelor's or Master’s degree in Computer Science, Engineering, or a related field.
- Minimum 5 years of experience in Big Data development with strong coding background in Java or Python.
- Strong problem-solving skills and ability to work independently or in a team.
- Excellent communication and collaboration skills.
Nice to Have:
- Experience with data lakes, lakehouses, or real-time analytics.
- Exposure to tools like Airflow, NiFi, or similar workflow orchestration tools.
Why Join Us?
- Opportunity to work with cutting-edge Big Data technologies.
- Collaborative and innovative work environment.
- Competitive compensation and benefits.
- Immediate onboarding for early joiners.