22,578 Big Data Hadoop jobs in India

Big Data (Hadoop)

Pune, Maharashtra Princenton software services pvt ltd

Posted today


Job Description

We are looking for a Senior Software Test Engineer to join the Account Level Management (ALM) team in our Pune office, which is focused on building ALM services with data warehouse skills. The Mastercard Account Level Management platform empowers real-time card-level decisioning. As consumers progress along their life stages as cardholders, with increasing disposable income and more refined preferences, ALM provides services to issuers so they can effectively offer more relevant benefits and rewards at each stage, to drive loyalty and spend.

**Job Type**: Contractual / Temporary
Contract length: 24 months

Work Location: In person

Big Data Hadoop Engineer

Hyderabad, Telangana Anicalls (Pty) Ltd

Posted today


Job Description

Candidate should be able to:
Work closely with our QA Team to ensure data integrity and overall system quality
Work closely with Technology Leadership, Product Managers, and Reporting Team for understanding the functional and system requirements
Write Shell/Python scripts for job scheduling and data wrangling
Write Sqoop jobs to import/export data to and from Hadoop (see the sketch after this list)
Enhance existing Spark and Java applications, and provide support
Generate data reports using HiveQL, Spark SQL or PySpark
Develop greenfield data applications using Spark, Java, Python, JDBC/ODBC, and other Hadoop/Big Data technologies
Design and develop Cloudera HDFS-based solutions using Spark (with Java, Python, and Spark SQL interfaces), HiveQL, Flume, Talend, IBM MQ, and Kafka
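For illustration only, here is a minimal sketch of the kind of Sqoop import and Spark SQL/PySpark report generation the list above describes. The JDBC URL, table names, credentials file, and paths are hypothetical placeholders, not taken from the posting.

```python
import subprocess
from pyspark.sql import SparkSession

# Hypothetical Sqoop import: pull an RDBMS table into HDFS as Parquet
# (connection string, credentials file, and paths are placeholders).
sqoop_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@//dbhost:1521/ORCL",
    "--username", "etl_user", "--password-file", "/user/etl/.pwd",
    "--table", "TRANSACTIONS",
    "--target-dir", "/data/raw/transactions",
    "--as-parquetfile",
    "--num-mappers", "4",
]
subprocess.run(sqoop_cmd, check=True)

# Generate a simple aggregate report over the imported data with Spark SQL.
spark = SparkSession.builder.appName("txn_report").enableHiveSupport().getOrCreate()
spark.read.parquet("/data/raw/transactions").createOrReplaceTempView("transactions")

report = spark.sql("""
    SELECT merchant_id, COUNT(*) AS txn_count, SUM(amount) AS total_amount
    FROM transactions
    GROUP BY merchant_id
    ORDER BY total_amount DESC
""")
report.write.mode("overwrite").option("header", True).csv("/data/reports/merchant_summary")
```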
Candidate should have:
Ability to identify problems, and effectively communicate solutions to peers and management
Strong debugging skills to troubleshoot production issues
Experience in working with real-time data feeds
Understanding of Data architecture, replication, and administration
Familiarity with Linux OS
Exposure to RDBMS: Microsoft SQL Server, Oracle, DB2
Comfortable working in a team environment
AWS Cloud Analytics experience
Must have 5+ years of experience in using Hadoop/BigData technologies like Spark, Spark SQL, Hive, Flume, Parquet, and Avro file formats, Sqoop, etc.
Must have 5+ years of experience in developing applications using Java, JUnit, Maven, and their ecosystem
Must have 2+ years of experience in developing Shell/Python scripts
BS/BA degree in Computer Science, Information Systems or related field

Big data/Hadoop Administration

Bengaluru, Karnataka Anicalls (Pty) Ltd

Posted today


Job Description

• As an individual contributor, design modules and apply creative problem-solving using appropriate tools and technologies.
• As an individual contributor, code and test modules.
• Interact and collaborate directly with software developers, product managers, and business analysts to ensure proper development and quality of service applications and products.
• Ability to do development in an Agile environment.
• Work closely with Leads and Architects to understand the requirements and translate that into code.
• Mentor junior engineers if required.

Senior Big Data Hadoop Engineer

Hyderabad, Telangana Anicalls (Pty) Ltd

Posted today


Job Description

• As an individual contributor, design modules and apply creative problem-solving using appropriate tools and technologies.
• As an individual contributor, code and test modules.
• Interact and collaborate directly with software developers, product managers, and business analysts to ensure proper development and quality of service applications and products.
• Ability to do development in an Agile environment.
• Work closely with Leads and Architects to understand the requirements and translate that into code.
• Mentor junior engineers if required.

Big Data (Hadoop) Support Engineer

Hyderabad, Telangana RiskInsight Consulting Pvt Ltd

Posted today


Job Description

Responsibilities
  • Provide technical support and troubleshooting for Big Data applications and systems built on the Hadoop ecosystem.
  • Monitor system performance, analyze logs, and identify potential issues before they impact services.
  • Collaborate with engineering teams to deploy and configure Hadoop clusters and related components.
  • Assist in maintenance and upgrades of Hadoop environments to ensure optimum performance and security.
  • Develop and maintain documentation for processes, procedures, and system configurations.
  • Implement data backup and recovery procedures to ensure data integrity and availability.
  • Participate in on-call rotations to provide after-hours support as needed.
  • Stay up to date with Hadoop technologies and support methodologies.
  • Assist in the training and onboarding of new team members and users on Hadoop best practices.

Requirements

  • Bachelor's degree in Computer Science, Information Technology, or a related field.
  • 3+ years of experience in Big Data support or system administration, specifically with the Hadoop ecosystem.
  • Strong understanding of Hadoop components (HDFS, MapReduce, Hive, Pig, etc.).
  • Experience with system monitoring and diagnostics tools.
  • Proficient in Linux/Unix commands and scripting languages (Bash, Python).
  • Basic understanding of database technologies and data warehousing concepts.
  • Strong problem-solving skills and ability to work under pressure.
  • Excellent communication and interpersonal skills.
  • Ability to work independently as well as collaboratively in a team environment.
  • Willingness to learn new technologies and enhance skills.

Skills: Hadoop, Spark/Scala, HDFS, SQL, Unix Scripting, Data Backup, System Monitoring (an illustrative sketch follows)
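Purely as an illustrative sketch of the system-monitoring and data-backup items above (paths are placeholders and cluster specifics will differ), a small Python wrapper around the standard HDFS CLI might look like this:

```python
import subprocess
import sys
from datetime import date

def run(cmd):
    """Run an HDFS CLI command, returning stdout and failing loudly on error."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        sys.exit(f"command failed: {' '.join(cmd)}\n{result.stderr}")
    return result.stdout

# Basic health check: dfsadmin -report summarises capacity and live datanodes
# (exact output format varies by Hadoop version, so only the head is printed here).
report = run(["hdfs", "dfsadmin", "-report"])
print("\n".join(report.splitlines()[:5]))

# Simple backup step: copy a critical directory to a dated backup location.
src = "/data/warehouse/critical_table"          # hypothetical source path
dst = f"/backup/critical_table/{date.today():%Y%m%d}"
run(["hdfs", "dfs", "-mkdir", "-p", dst])
run(["hdfs", "dfs", "-cp", src, dst])
print(f"backed up {src} to {dst}")
```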

Benefits

Competitive salary and benefits package.

Opportunity to work on cutting-edge technologies and solve complex challenges.

Dynamic and collaborative work environment with opportunities for growth and career advancement.

Regular training and professional development opportunities.


Big Data (Hadoop/Spark/Scala)

Hyderabad, Telangana Tata Consultancy Services

Posted today


Job Description

Experience Range: 5 Years to 8 Years.

Location
- Hyderabad.

**Job Description**:

- Ingest data from disparate sources (structured, unstructured, and semi-structured) and develop ETL jobs using the above skills (a sketch follows this list).
- Do impact analysis and come up with estimates.
- Take responsibility for end-to-end deliverables.
- Create the project plan and work on the implementation strategy.
- Need to have a comprehensive understanding of ETL concepts and cross-environment data transfers.
- Need to handle customer communications and management reporting.
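As a hedged illustration of the multi-source ingestion and ETL work described above (file paths, formats, and column names are invented), one such PySpark job might look like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("multi_source_etl").enableHiveSupport().getOrCreate()

# Structured source: a CSV extract with a header row.
orders = spark.read.option("header", True).csv("/landing/orders/*.csv")

# Semi-structured source: a JSON event feed.
events = spark.read.json("/landing/events/*.json")

# Light transformation: normalise types and join the two feeds on a shared key.
orders = orders.withColumn("order_ts", F.to_timestamp("order_ts"))
combined = orders.join(events, on="order_id", how="left")

# Load into a Hive-managed staging table, partitioned by load date.
(combined
    .withColumn("load_date", F.current_date())
    .write.mode("append")
    .partitionBy("load_date")
    .saveAsTable("staging.orders_enriched"))
```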

Qualifications: Bachelor of Engineering

Data Engineer (ETL, Big Data, Hadoop, Spark, GCP)

Bengaluru, Karnataka Confidential

Posted today


Job Description

  • The senior engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals.
  • Awareness of the bank's important engineering principles is expected. Root-cause-analysis skills are developed by addressing enhancements and fixes to products; build reliability and resiliency into solutions through early testing, peer reviews, and automation of the delivery life cycle.
  • The successful candidate should be able to work independently on medium to large projects with strict deadlines.
  • The successful candidate should be able to work in a cross-application, mixed-technology environment and must demonstrate a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team.
  • The position is part of the build-out of the Compliance Tech internal development team in India.
  • The overall team will primarily deliver improvements to Compliance Tech capabilities that are major components of the regulatory portfolio, addressing various common regulatory commitments and commitments to mandated monitors.

Your key responsibilities

  • Analyze data sets and design and code stable, scalable data ingestion workflows, integrating them into existing workflows (see the sketch after this list).
  • Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
  • Work as a senior developer building analytics algorithms on top of ingested data.
  • Work as a senior developer on various data-sourcing efforts in Hadoop and GCP.
  • Own unit testing, UAT, deployment, end-user sign-off, and production go-live.
  • Ensure new code is tested at both the unit and system levels; design, develop, and peer-review new code and functionality.
  • Operate as a member of an agile Scrum team.
  • Apply root-cause analysis to identify bugs and the causes of failures.
  • Support the production support and release management teams in their tasks.
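A minimal, hypothetical sketch of the kind of ingestion workflow referred to above, written as an Airflow DAG (Cloud Composer is managed Airflow on GCP). The DAG id, schedule, file paths, and job script are assumptions, not taken from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily ingestion pipeline: land raw files in HDFS, then run a Spark transform.
with DAG(
    dag_id="daily_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    land_raw = BashOperator(
        task_id="land_raw_files",
        bash_command="hdfs dfs -put -f /incoming/extract_{{ ds_nodash }}.csv /data/raw/",
    )

    spark_transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/transform_daily.py {{ ds }}",
    )

    land_raw >> spark_transform
```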

Your skills and experience

  • 10+ years of coding experience in reputed organizations
  • Hands-on experience with Bitbucket and CI/CD pipelines
  • Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive
  • Basic understanding of on-prem and GCP data security
  • Hands-on development experience on large ETL/big data systems; GCP experience is a big plus
  • Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
  • Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
  • Basic understanding of data quality dimensions such as consistency, completeness, accuracy, and lineage.
  • Hands-on business and systems knowledge gained in a regulatory delivery environment.
  • Desirable: banking experience with regulatory and cross-product knowledge.
  • Passionate about test driven development.
  • Prior experience with release management tasks and responsibilities.
  • Data visualization experience is good to have.

Skills Required
Data Visualization, Python

Big Data Engineer (Hadoop)

Chennai, Tamil Nadu Confidential

Posted today


Job Description

Key Responsibilities:
  • Design, develop, and optimize large-scale data processing workflows using Hadoop components such as HDFS, MapReduce, Hive, Pig, and HBase (a sketch follows this list).
  • Build and maintain ETL pipelines to ingest and transform data from various sources into Hadoop clusters.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions.
  • Ensure data quality, integrity, and security within big data environments.
  • Monitor and troubleshoot Hadoop cluster performance and resolve issues proactively.
  • Implement best practices for data storage, processing, and retrieval in Hadoop ecosystems.
  • Develop automation scripts for data pipeline orchestration and workflow management (using tools like Apache Oozie or Airflow).
  • Participate in capacity planning, cluster management, and Hadoop upgrades.
  • Document data architecture, workflows, and operational procedures.
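For illustration only (database, table, and column names are hypothetical), the Hive data-modeling and storage-optimization responsibilities above often come down to choices like partitioned, columnar tables and partition-pruned queries, for example via Spark SQL:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hive_modeling_sketch").enableHiveSupport().getOrCreate()

# A partitioned, Parquet-backed Hive table: partitioning by event_date lets queries
# that filter on the partition column skip irrelevant data (partition pruning).
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.page_views (
        user_id    BIGINT,
        page_url   STRING,
        duration_s DOUBLE
    )
    PARTITIONED BY (event_date DATE)
    STORED AS PARQUET
""")

# A query that benefits from the layout: only the requested day's partition is scanned.
spark.sql("""
    SELECT page_url, COUNT(*) AS views
    FROM analytics.page_views
    WHERE event_date = DATE'2024-06-01'
    GROUP BY page_url
    ORDER BY views DESC
    LIMIT 20
""").show()
```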
Qualifications and Requirements:
  • Bachelor's degree in Computer Science, Information Technology, Engineering, or related field.
  • 3+ years of experience as a Big Data Engineer or Hadoop Developer.
  • Hands-on experience with core Hadoop ecosystem components: HDFS, MapReduce, Hive, Pig, HBase, Sqoop, Flume.
  • Strong proficiency in Java, Scala, or Python for big data processing.
  • Experience with data modeling and query optimization in Hive or HBase.
  • Familiarity with data ingestion techniques and tools (Sqoop, Flume, Kafka).
  • Understanding of cluster management and resource schedulers like YARN.
  • Knowledge of Linux/Unix environments and shell scripting.
  • Experience with version control (Git) and CI/CD pipelines.
  • Strong analytical and problem-solving skills.

Skills Required
Java, Scala, Python, Linux, Git

Officer- Data Engineer (Big Data, Hadoop, Hive, Python, Spark) - C11 - PUNE

Pune, Maharashtra Citicorp Services India Private Limited

Posted today


Job Description

At Citi we’re not just building technology, we’re building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the ways of doing things. As one of the world’s most global banks, we’re changing how the world does business.

Shape your Career with Citi

We’re currently looking for a high-caliber professional to join our team as 25893328 Officer, Data Engineer - C11 - Hybrid, based in Pune, India. Being part of our team means that we’ll provide you with the resources to meet your unique needs, empower you to make healthy decisions, and manage your financial well-being to help plan for your future. For instance:

  • We provide programs and services for your physical and mental well-being including access to telehealth options, health advocates, confidential counseling and more. Coverage varies by country.

  • We empower our employees to manage their financial well-being and help them plan for the future.

  • The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team.

    Responsibilities

  • Developing and supporting scalable, extensible, and highly available data solutions
  • Deliver on critical business priorities while ensuring alignment with the wider architectural vision
  • Identify and help address potential risks in the data supply chain
  • Follow and contribute to technical standards
  • Design and develop analytical data models
  Required Qualifications & Work Experience

  • First Class Degree in Engineering/Technology (4-year graduate course)
  • 4 to 8 years’ experience implementing data-intensive solutions using agile methodologies
  • Experience of relational databases and using SQL for data querying, transformation and manipulation
  • Experience of modelling data for analytical consumers
  • Ability to automate and streamline the build, test and deployment of data pipelines
  • Experience in cloud native technologies and patterns
  • A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
  • Excellent communication and problem-solving skills
  Technical Skills (Must Have)

  • ETL: Hands-on experience building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
  • Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive, or Snowflake for data storage and processing
  • Data Warehousing & Database Management: Understanding of data warehousing concepts, relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
  • Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
  • Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
  • DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
    ---

    Job Family Group:

    Technology

    ---

    Job Family:

    Digital Software Engineering

    ---

    Time Type:

    Full time

    ---

    Most Relevant Skills

    Please see the requirements listed above.

    ---

    Other Relevant Skills

    For complementary skills, please see above and/or contact the recruiter.

    ---


    Big Data Lead (Hadoop, Spark)

    Chennai, Tamil Nadu Citigroup

    Posted 3 days ago


    Job Description

    This role manages the Olympus core Hadoop platform. Work with EAP and various application teams to ensure all cluster-level changes are communicated properly, tested, and released in a timely manner.
    Build automation for manual tasks such as reconciliation, and frameworks for handling new technologies such as Apache Iceberg and Ozone. Work with application teams to ensure adoption of the Olympus frameworks and to archive cold data / raw messages into Cloud Object Storage (a sketch follows below).
    This role is crucial to meeting client commitments related to the consent order control framework deliverables. The individual in this position will be responsible for several key tasks that are essential to our organization's success, including management of the Olympus core Hadoop platform and the automation and creation of frameworks for handling big data.
    Key deliverables include automation, enhancements to the cold data/raw message processing framework, and enhancements to the batch/streaming/housekeeping frameworks to handle Apache Iceberg, etc.
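As a hedged sketch of the cold-data archiving mentioned above (the HDFS path, bucket, and retention cut-off are invented, and the object-store connector is assumed to be configured on the cluster), the Spark side of such a job could look roughly like this:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cold_data_archive").getOrCreate()

# Read raw messages from HDFS and keep only records older than the retention cut-off.
raw = spark.read.parquet("hdfs:///data/raw_messages")
cold = raw.where(F.col("event_date") < F.lit("2023-01-01"))

# Archive the cold slice to cloud object storage as Parquet, partitioned by event date.
(cold.write
     .mode("append")
     .partitionBy("event_date")
     .parquet("s3a://archive-bucket/raw_messages/"))  # s3a:// assumes the S3A connector; a gs:// bucket would be analogous
```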
    **Responsibilities:**
    + Partner with multiple management teams to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements
    + Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
    + Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
    + Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
    + Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
    + Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
    + Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
    + Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
    **Qualifications:**
    + 12-15 years of relevant experience in Apps Development or systems analysis role
    + Extensive experience in system analysis and in the programming of software applications
    + Experience in managing and implementing successful projects
    + Subject Matter Expert (SME) in at least one area of Applications Development
    + Ability to adjust priorities quickly as circumstances dictate
    + Demonstrated leadership and project management skills
    + Consistently demonstrates clear and concise written and verbal communication
    **Education:**
    + Bachelor's degree/University degree or equivalent experience
    + Master's degree preferred
    This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
    ---
    **Job Family Group:**
    Technology
    ---
    **Job Family:**
    Applications Development
    ---
    **Time Type:**
    Full time
    ---
    **Most Relevant Skills**
    Please see the requirements listed above.
    ---
    **Other Relevant Skills**
    For complementary skills, please see above and/or contact the recruiter.
    ---
    _Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
    _If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
    _View Citi's EEO Policy Statement and the Know Your Rights poster._
    Citi is an equal opportunity and affirmative action employer.
    Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.