1,377 Big W jobs in India
Big Data
Posted today
Job Description
We are looking for an experienced Big Data Engineer to join our team in India. The ideal candidate will have a strong background in big data technologies and will be responsible for designing and implementing data processing systems to handle large volumes of data.
Responsibilities
- Design and implement scalable data processing systems using big data technologies.
- Analyze and interpret large datasets to derive actionable insights.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Maintain and optimize existing data pipelines and workflows.
- Ensure data quality and integrity throughout the data lifecycle.
- Develop data models and algorithms to support business objectives.
Requirements
- Proficiency in big data technologies such as Hadoop, Spark, and Kafka.
- Strong programming skills in languages such as Java, Python, or Scala.
- Experience with SQL and NoSQL databases like MongoDB, Cassandra, or Hive.
- Knowledge of data warehousing concepts and ETL processes.
- Familiarity with data visualization tools like Tableau or Power BI.
- Understanding of machine learning concepts and frameworks.
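The distributed-processing model behind frameworks like Hadoop and Spark boils down to a map step, a shuffle that groups intermediate results by key, and a reduce step. A minimal pure-Python sketch of that pattern (a conceptual illustration only, not tied to any framework named above):

```python
from collections import defaultdict
from itertools import chain

def map_phase(lines):
    # Map: emit (word, 1) pairs for every word in every input line.
    return chain.from_iterable(
        ((w.lower(), 1) for w in line.split()) for line in lines
    )

def shuffle(pairs):
    # Shuffle: group values by key, as the framework would between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data big insights", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["big"])   # 2
print(counts["data"])  # 2
```

In a real cluster each phase runs partitioned across many machines; the shape of the computation is the same.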
Education
Bachelor Of Technology (B.Tech/B.E)
Skills Required
Hadoop, Spark, Kafka, SQL, NoSQL, Python, Data Warehousing, ETL, Machine Learning, Data Visualization
Big Data
Posted today
Job Description
Teamware Solutions is seeking a talented and passionate Big Data Engineer with 2-4 years of experience to join our dynamic team. This role is crucial for designing, developing, implementing, and troubleshooting scalable Big Data solutions that process, store, and manage large volumes of diverse data. The successful candidate will contribute significantly to ensuring smooth data operations and enabling data-driven insights to meet various business objectives for our clients.
Roles and Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust and scalable ETL (Extract, Transform, Load) pipelines for ingesting, transforming, and loading large datasets from various sources into Big Data platforms.
- Big Data Ecosystem Utilization: Work with and optimize components of the Hadoop ecosystem (e.g., HDFS, Hive) and Apache Spark for distributed data processing and analysis.
- Data Processing & Transformation: Develop efficient code using programming languages (e.g., Python, Scala, Java) to perform data manipulation, cleansing, and transformation for analytical purposes.
- Database Management: Interact with and manage various databases, including relational (SQL) and NoSQL databases (e.g., Cassandra, MongoDB, HBase), for data storage and retrieval.
- Troubleshooting & Optimization: Identify, troubleshoot, and resolve performance bottlenecks and issues within Big Data pipelines and infrastructure.
- Data Quality & Governance: Assist in implementing data quality checks and contribute to data governance practices to ensure data accuracy, consistency, and reliability.
- Collaboration: Collaborate effectively with Data Scientists, Data Analysts, and other engineering teams to understand data requirements and deliver reliable data solutions.
- Monitoring & Support: Monitor Big Data jobs and processes, providing operational support and maintenance for developed solutions.
- Continuous Learning: Stay updated with the latest trends and advancements in Big Data technologies and contribute to the adoption of new tools and best practices.
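The Extract, Transform, Load pattern in the first responsibility above can be sketched as three composable stages in plain Python. The record fields and the de-duplication/filter rules here are illustrative assumptions, not requirements from the posting:

```python
def extract(raw_rows):
    # Extract: parse raw CSV-like strings into records.
    for row in raw_rows:
        user_id, amount = row.split(",")
        yield {"user_id": user_id.strip(), "amount": float(amount)}

def transform(records):
    # Transform: drop duplicate user_ids and non-positive amounts
    # (assumed data-quality rules, for illustration only).
    seen_ids = set()
    for rec in records:
        if rec["user_id"] in seen_ids or rec["amount"] <= 0:
            continue
        seen_ids.add(rec["user_id"])
        yield rec

def load(records, sink):
    # Load: append cleaned records to the target store (a list here,
    # a warehouse table in practice).
    for rec in records:
        sink.append(rec)

sink = []
raw = ["u1, 10.0", "u2, -3.5", "u1, 10.0", "u3, 7.25"]
load(transform(extract(raw)), sink)
print(len(sink))  # 2 clean records: u1 and u3
```

Because each stage is a generator, records stream through one at a time, which is the same lazy-evaluation shape that Spark pipelines use at scale.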
Preferred Candidate Profile:
- Experience: 2 to 4 years of hands-on experience in Big Data engineering or a related data-centric role.
- Big Data Technologies: Practical experience with key Big Data technologies such as Apache Hadoop, Apache Spark, and data warehousing concepts (e.g., Hive).
- Programming Skills: Proficiency in at least one programming language relevant to Big Data (e.g., Python, Scala, or Java).
- Database Skills: Strong command of SQL for data querying and manipulation. Familiarity with NoSQL databases is a plus.
- Cloud Exposure (Plus): Exposure to Big Data services on cloud platforms (e.g., AWS EMR, Azure Data Lake Analytics, Google Cloud Dataproc) is advantageous.
- Problem-Solving: Excellent analytical and problem-solving skills with a keen eye for detail in complex data environments.
- Communication: Good verbal and written communication skills to effectively collaborate with team members and stakeholders.
- Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related quantitative field.
Skills Required
Database Management, Troubleshooting, Apache Hadoop, Apache Spark, Programming Language
Big Data Developer - Java, Big data, Spring
Posted 4 days ago
Job Description
Primary Responsibilities:
- Analyze and investigate issues
- Provide explanations and interpretations within the area of expertise
- Participate in scrum process and deliver stories/features according to the schedule
- Collaborate with team, architects and product stakeholders to understand the scope and design of a deliverable
- Participate in product support activities as needed by the team.
- Understand product architecture, features being built and come up with product improvement ideas and POCs
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Qualifications -
Required Qualifications:
- Undergraduate degree or equivalent experience
- Proven experience using the Big Data tech stack
- Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
- Proficient with the Big Data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase)
- Proficient with Unix/Linux ecosystems and shell scripting
- Proven Java, Kafka, Spark, Big Data, and Azure skills, with strong analytical and problem-solving ability
- Solid communication skills
Big Data Engineer

Posted 2 days ago
Job Description
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 3 to 5 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Additional Job Description
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures
Skills and Qualifications
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some exposure to machine learning
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, and Spark
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks
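Several of the skills above (Storm, Spark Streaming, Lambda Architecture) center on windowed aggregation over an event stream. A toy tumbling-window counter shows the core idea; the class name and units are invented for illustration:

```python
class TumblingWindowCounter:
    """Count events per fixed-size (tumbling) window of event time.

    A toy model of the windowed aggregations that stream processors
    such as Storm or Spark Streaming provide. window_size is in the
    same units as the event timestamps.
    """

    def __init__(self, window_size):
        self.window_size = window_size
        self.counts = {}

    def add(self, timestamp):
        # Assign the event to the window starting at the nearest
        # multiple of window_size at or below its timestamp.
        window_start = (timestamp // self.window_size) * self.window_size
        self.counts[window_start] = self.counts.get(window_start, 0) + 1

counter = TumblingWindowCounter(window_size=10)
for ts in [1, 3, 9, 12, 18, 25]:
    counter.add(ts)
print(counter.counts)  # {0: 3, 10: 2, 20: 1}
```

Real systems add complications this sketch ignores, notably late-arriving events and distributed state, which is much of what the Lambda Architecture mentioned above is designed to handle.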
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.
Big Data Developer
Posted 2 days ago
Job Description
Experience: 4 - 7 years
Location: Bangalore
Job Description:
- Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
- Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
- Experience with code deployment and associated processes
- Proven ability to write complex SQL queries to extract business-critical insights
- Hands-on experience in streaming data processing
- Familiarity with machine learning concepts is an added advantage
- Experience with NoSQL databases
- Good understanding of Test-Driven Development (TDD) methodologies
- Demonstrated flexibility and eagerness to learn new technologies
Roles & Responsibilities
- Design and implement solutions for problems arising out of large-scale data processing
- Attend/drive various architectural, design and status calls with multiple stakeholders
- Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support
- Design, build & maintain efficient, reusable & reliable code
- Test implementation, troubleshoot & correct problems
- Capable of working both as an individual contributor and as part of a team
- Ensure high quality software development with complete documentation and traceability
- Fulfil organizational responsibilities (sharing knowledge & experience with other teams/ groups)
Big Data Developer
Posted 3 days ago
Job Description
Job Description:
Experience working with the Spark framework, with a good understanding of core concepts, optimizations, and best practices
Good hands-on experience writing PySpark code; should understand design principles and OOP
Good experience writing complex queries to derive business-critical insights
Hands-on experience with stream data processing
Understanding of Data Lake vs. Data Warehouse concepts
Knowledge of machine learning would be an added advantage
Experience with NoSQL technologies – MongoDB, DynamoDB
Good understanding of test-driven development
Flexibility to learn new technologies
Roles & Responsibilities:
Design and implement solutions for problems arising out of large-scale data processing
Attend/drive various architectural, design and status calls with multiple stakeholders
Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support
Design, build & maintain efficient, reusable & reliable code
Test implementation, troubleshoot & correct problems
Capable of working both as an individual contributor and as part of a team
Ensure high quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge & experience with other teams/ groups)
Big Data Engineer
Posted 3 days ago
Job Description
Big Data Engineer (PySpark)
Location: Pune/Nagpur (WFO)
Experience: 8 - 12 Years
Employment Type: Full-time
Job Overview
We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high performance and reliability.
Key Responsibilities
- Design, develop, and maintain data pipelines using PySpark and related Big Data technologies.
- Work with HDFS, Hive, Sqoop, and other tools in the Hadoop ecosystem.
- Write efficient HiveQL and SQL queries to handle large-scale datasets.
- Perform performance tuning and optimization of distributed data systems.
- Collaborate with cross-functional teams in an Agile environment to deliver high-quality solutions.
- Manage and schedule workflows using Apache Airflow or Oozie.
- Troubleshoot and resolve issues in data pipelines to ensure reliability and accuracy.
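Workflow schedulers such as Airflow and Oozie run tasks in dependency order, which at its core is a topological sort of the task graph. A sketch with the standard library's graphlib (the task names are invented, and this models only the ordering, not scheduling or retries):

```python
from graphlib import TopologicalSorter

# A hypothetical pipeline: ingest first, then two transforms that
# could run in parallel, then a load step that depends on both.
# Each key maps a task to the set of tasks it depends on.
dag = {
    "ingest": set(),
    "clean": {"ingest"},
    "aggregate": {"ingest"},
    "load": {"clean", "aggregate"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # 'ingest' comes first, 'load' comes last
```

An Airflow DAG definition expresses exactly this structure, with operators as nodes and `>>` declaring the edges; the scheduler then dispatches tasks whose predecessors have all succeeded.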
Required Skills
- Proven experience in Big Data Engineering with a focus on PySpark.
- Strong knowledge of HDFS, Hive, Sqoop, and related tools.
- Proficiency in SQL/HiveQL for large datasets.
- Expertise in performance tuning and optimization of distributed systems.
- Familiarity with Agile methodology and collaborative team practices.
- Experience with workflow orchestration tools (Airflow/Oozie).
- Strong problem-solving, analytical, and communication skills.
Good to Have
- Knowledge of data modeling and data warehousing concepts.
- Exposure to DevOps practices and CI/CD pipelines for data engineering.
- Experience with other Big Data frameworks such as Spark Streaming or Kafka.