1,218 Big Data jobs in India

Data Engineer II, Data & AI, Customer Engagement Technology

Hyderabad, Andhra Pradesh myGwork

Job Description

This job is with Amazon, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

DESCRIPTION:

As a Data Engineer on the Data and AI team, you will design and implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems, which handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance.

You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks.

If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities:

  • Design and implement ETL/ELT frameworks that handle large-scale data operations, building reusable components for data ingestion, transformation, and orchestration while ensuring data quality and reliability.
  • Establish and maintain robust data governance standards by implementing comprehensive security controls, access management frameworks, and privacy-compliant architectures that safeguard sensitive information.
  • Drive the implementation of data solutions, both real-time and batch, optimizing them for analytical workloads and AI/ML applications.
  • Lead technical design reviews and provide mentorship on data engineering best practices, identifying opportunities for architectural improvements and guiding the implementation of enhanced solutions.
  • Build data quality frameworks with robust monitoring systems and validation processes to ensure data accuracy and reliability throughout the data lifecycle.
  • Drive continuous improvement initiatives by evaluating and implementing new technologies and methodologies that enhance data infrastructure capabilities and operational efficiency.

A day in the life

The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including developing and optimizing ETL/ELT processes, implementing data governance controls, and reviewing code for data processing systems. You'll work closely with software engineers, scientists, and product managers, participating in technical design discussions and sharing your expertise in data architecture and engineering best practices. Your responsibilities extend to communicating with non-technical stakeholders, explaining data-related projects and their business impact.

You'll also mentor junior engineers and contribute to maintaining comprehensive technical documentation. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying current with the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work.

By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved complex technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you're passionate about this role and want to make an impact on a global scale, please apply!

About the team

The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

BASIC QUALIFICATIONS:

  • 3+ years of data engineering experience
  • Bachelor's degree in Computer Science, Engineering, or a related technical discipline

PREFERRED QUALIFICATIONS:

  • Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
  • Proficiency in designing and implementing logical data models that drive physical designs
  • Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
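The data quality frameworks described above pair per-record validation rules with the aggregate metrics a monitoring system alarms on. As a rough illustration of that idea only (not Amazon's implementation; the record shape and rules are hypothetical), a minimal sketch in plain Python:

```python
from dataclasses import dataclass

# Hypothetical record shape for illustration; the listing does not specify schemas.
@dataclass
class Order:
    order_id: str
    amount: float
    currency: str

def validate(record: Order) -> list:
    """Return the list of data-quality violations for one record."""
    errors = []
    if not record.order_id:
        errors.append("missing order_id")
    if record.amount < 0:
        errors.append("negative amount")
    if record.currency not in {"USD", "EUR", "INR"}:
        errors.append("unknown currency: " + record.currency)
    return errors

def quality_report(records: list) -> dict:
    """Aggregate violations into the kind of metrics a monitoring system alarms on."""
    failures = [(r.order_id, errs) for r in records if (errs := validate(r))]
    return {
        "total": len(records),
        "failed": len(failures),
        "failure_rate": len(failures) / len(records) if records else 0.0,
        "violations": failures,
    }
```

A pipeline would emit `failure_rate` to its monitoring system and route the `violations` list to a quarantine table for inspection.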
This advertiser has chosen not to accept applicants from your region.

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.

However, we have similar jobs available for you below.

Big Data Developer - Java, Big data, Spring

Hyderabad, Andhra Pradesh Optum

Posted 2 days ago

Job Description

Primary Responsibilities:

  • Analyze and investigate issues
  • Provide explanations and interpretations within your area of expertise
  • Participate in the scrum process and deliver stories/features according to the schedule
  • Collaborate with the team, architects, and product stakeholders to understand the scope and design of a deliverable
  • Participate in product support activities as needed by the team
  • Understand the product architecture and features being built, and propose product improvement ideas and POCs
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so


Qualifications

Required Qualifications:

  • Undergraduate degree or equivalent experience
  • Proven experience using the Big Data tech stack
  • Sound knowledge of Java and the Spring framework, with good exposure to Spring Batch, Spring Data, Spring Web Services, and Python
  • Proficient with the Big Data ecosystem (Sqoop, Spark, Hadoop, Hive, HBase)
  • Proficient with Unix/Linux ecosystems and shell scripting
  • Proven Java, Kafka, Spark, Big Data, and Azure skills, along with analytical and problem-solving skills
  • Proven solid analytical and communication skills
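Spring Batch, listed above, is built around chunk-oriented processing: items are read one at a time, processed, and written out a chunk at a time. A language-neutral sketch of that pattern (written here in plain Python for brevity; the function names are illustrative, not Spring Batch's API):

```python
def chunked_batch(reader, processor, writer, chunk_size=3):
    """Chunk-oriented processing in the style of Spring Batch: read items
    one at a time, process each, and write results a chunk at a time."""
    chunk = []
    for item in reader:
        chunk.append(processor(item))
        if len(chunk) == chunk_size:
            writer(chunk)        # commit one chunk's worth of work
            chunk = []
    if chunk:                    # flush the final partial chunk
        writer(chunk)

written = []
chunked_batch(iter(range(7)), lambda x: x * 2, written.append, chunk_size=3)
print(written)   # [[0, 2, 4], [6, 8, 10], [12]]
```

Committing per chunk rather than per item is what lets batch frameworks balance throughput against restartability.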


Big Data Engineer

Pune, Maharashtra Citigroup

Posted 1 day ago

Job Description

The Applications Development Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.
**Responsibilities:**
+ Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas to identify and define necessary system enhancements
+ Identify and analyze issues, make recommendations, and implement solutions
+ Utilize knowledge of business processes, system processes, and industry standards to solve complex issues
+ Analyze information and make evaluative judgements to recommend solutions and improvements
+ Conduct testing and debugging, utilize script tools, and write basic code for design specifications
+ Assess applicability of similar experiences and evaluate options under circumstances not covered by procedures
+ Develop working knowledge of Citi's information systems, procedures, standards, client server application development, network operations, database administration, systems administration, data center operations, and PC-based applications
+ Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
**Qualifications:**
+ 3 to 5 years of relevant experience
+ Experience in programming/debugging used in business applications
+ Working knowledge of industry practice and standards
+ Comprehensive knowledge of specific business area for application development
+ Working knowledge of program languages
+ Consistently demonstrates clear and concise written and verbal communication
**Education:**
+ Bachelor's degree/University degree or equivalent experience
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Additional Job Description
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.
Responsibilities
- Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities
- Implementing data wrangling, scraping, and cleaning using Java or Python
- Strong experience with data structures
Skills and Qualifications
- Proficient understanding of distributed computing principles
- Proficient in Java or Python, with some exposure to machine learning
- Proficiency with Hadoop v2, MapReduce, HDFS, PySpark, Spark
- Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
- Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala
- Experience with Spark
- Experience with integration of data from multiple data sources
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of various ETL techniques and frameworks, such as Flume
- Experience with various messaging systems, such as Kafka or RabbitMQ
- Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
- Good understanding of Lambda Architecture, along with its advantages and drawbacks
- Experience with Cloudera/MapR/Hortonworks
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
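The Lambda Architecture mentioned in the skills above combines a batch layer (full recomputation over the master dataset) with a speed layer (incremental updates over recent events), merged at query time by a serving layer. A toy sketch of that merge, with plain Counters standing in for Hadoop/Spark and Storm/Spark Streaming, and invented click data:

```python
from collections import Counter

# Toy event log: (user, clicks). In a real Lambda Architecture the batch layer
# runs over the full master dataset and the speed layer over a live stream.
batch_events = [("alice", 3), ("bob", 2), ("alice", 1)]   # already reprocessed in bulk
recent_events = [("bob", 4), ("carol", 1)]                # arrived since the last batch run

def build_view(events):
    """Both layers compute the same view; only their inputs and cadence differ."""
    view = Counter()
    for user, clicks in events:
        view[user] += clicks
    return view

def query(user):
    """Serving layer: merge the batch view and the speed view at query time."""
    return build_view(batch_events)[user] + build_view(recent_events)[user]

print(query("bob"))   # 6: 2 from the batch view + 4 from the speed view
```

The advantage is that batch results stay authoritative and recomputable while queries still see fresh data; the drawback the skills list alludes to is maintaining the same logic in two layers.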
---
**Job Family Group:**
Technology
---
**Job Family:**
Applications Development
---
**Time Type:**
Full time
---
**Most Relevant Skills**
Please see the requirements listed above.
---
**Other Relevant Skills**
For complementary skills, please see above and/or contact the recruiter.
---
_Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
_If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
_View Citi's EEO Policy Statement and the Know Your Rights poster._
Citi is an equal opportunity and affirmative action employer.
Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.

Big Data Developer

Bengaluru, Karnataka Impetus

Posted 1 day ago

Job Description

Experience: 4 - 7 years


Location: Bangalore


Job Description:


  • Strong experience working with the Apache Spark framework, including a solid grasp of core concepts, performance optimizations, and industry best practices
  • Proficient in PySpark with hands-on coding experience; familiarity with unit testing, object-oriented programming (OOP) principles, and software design patterns
  • Experience with code deployment and associated processes
  • Proven ability to write complex SQL queries to extract business-critical insights
  • Hands-on experience in streaming data processing
  • Familiarity with machine learning concepts is an added advantage
  • Experience with NoSQL databases
  • Good understanding of Test-Driven Development (TDD) methodologies
  • Demonstrated flexibility and eagerness to learn new technologies


Roles & Responsibilities


  • Design and implement solutions for problems arising out of large-scale data processing
  • Attend/drive various architectural, design and status calls with multiple stakeholders
  • Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support
  • Design, build & maintain efficient, reusable & reliable code
  • Test implementation, troubleshoot & correct problems
  • Capable of working both as an individual contributor and within a team
  • Ensure high quality software development with complete documentation and traceability
  • Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)
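Streaming data processing, listed in the requirements above, often reduces to windowed aggregation over timestamped events. A minimal, framework-free sketch of tumbling (fixed, non-overlapping) windows — the event data is made up, and real jobs would use Spark Structured Streaming rather than a dict:

```python
def tumbling_windows(events, window_size):
    """Group (timestamp, value) events into fixed non-overlapping windows
    and sum the values per window — the core idea behind micro-batch
    windowed aggregation in stream processors."""
    windows = {}
    for ts, value in events:
        start = (ts // window_size) * window_size   # window this event falls into
        windows[start] = windows.get(start, 0) + value
    return dict(sorted(windows.items()))

events = [(1, 10), (4, 5), (12, 7), (14, 3), (27, 1)]   # (second, reading)
print(tumbling_windows(events, 10))   # {0: 15, 10: 10, 20: 1}
```

Sliding windows and event-time watermarks build on the same bucketing step.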

Big Data Developer

Bengaluru, Karnataka Impetus

Posted 2 days ago

Job Description

Job Description:


Experience working with the Spark framework; good understanding of core concepts, optimizations, and best practices

Good hands-on experience writing code in PySpark; should understand design principles and OOP

Good experience writing complex queries to derive business-critical insights

Hands-on experience with stream data processing

Understanding of the Data Lake vs. Data Warehouse concept

Knowledge of machine learning would be an added advantage

Experience with NoSQL technologies – MongoDB, DynamoDB

Good understanding of test-driven development

Flexibility to learn new technologies


Roles & Responsibilities:

Design and implement solutions for problems arising out of large-scale data processing

Attend/drive various architectural, design and status calls with multiple stakeholders

Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support

Design, build & maintain efficient, reusable & reliable code

Test implementation, troubleshoot & correct problems

Capable of working both as an individual contributor and within a team

Ensure high quality software development with complete documentation and traceability

Fulfil organizational responsibilities (sharing knowledge & experience with other teams/groups)


Big Data Engineer

Pune, Maharashtra Nice Software Solutions Pvt. Ltd.

Posted 2 days ago

Job Description

Big Data Engineer (PySpark)

Location: Pune/Nagpur (WFO)

Experience: 8 - 12 Years

Employment Type: Full-time


Job Overview

We are looking for an experienced Big Data Engineer with strong expertise in PySpark and Big Data ecosystems. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines while ensuring high performance and reliability.


Key Responsibilities

  • Design, develop, and maintain data pipelines using PySpark and related Big Data technologies.
  • Work with HDFS, Hive, Sqoop, and other tools in the Hadoop ecosystem.
  • Write efficient HiveQL and SQL queries to handle large-scale datasets.
  • Perform performance tuning and optimization of distributed data systems.
  • Collaborate with cross-functional teams in an Agile environment to deliver high-quality solutions.
  • Manage and schedule workflows using Apache Airflow or Oozie.
  • Troubleshoot and resolve issues in data pipelines to ensure reliability and accuracy.

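Workflow managers like Airflow and Oozie, named in the responsibilities above, resolve task execution order from declared dependencies — in essence a topological sort of the pipeline DAG. A small sketch using Python's standard-library `graphlib` (the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on,
# mirroring how Airflow derives execution order from task dependencies.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "load_hive": {"clean"},
    "aggregate": {"load_hive"},
    "report": {"aggregate"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)   # ['extract', 'clean', 'load_hive', 'aggregate', 'report']
```

A real scheduler adds retries, backfills, and parallel execution of independent branches on top of exactly this ordering step.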

Required Skills

  • Proven experience in Big Data Engineering with a focus on PySpark.
  • Strong knowledge of HDFS, Hive, Sqoop, and related tools.
  • Proficiency in SQL/HiveQL for large datasets.
  • Expertise in performance tuning and optimization of distributed systems.
  • Familiarity with Agile methodology and collaborative team practices.
  • Experience with workflow orchestration tools (Airflow/Oozie).
  • Strong problem-solving, analytical, and communication skills.


Good to Have

  • Knowledge of data modeling and data warehousing concepts.
  • Exposure to DevOps practices and CI/CD pipelines for data engineering.
  • Experience with other Big Data frameworks such as Spark Streaming or Kafka.

Big Data Developer

Indore, Madhya Pradesh Impetus

Posted 2 days ago

Job Description

Job Description for Big Data or Cloud Engineer


Position Summary:

We are looking for candidates with hands-on experience in Big Data or Cloud technologies.


Must-have technical skills


  • 2-4 years of experience
  • Expertise and hands-on experience in Python – Must Have
  • Expert knowledge of SparkSQL/Spark DataFrame – Must Have
  • Good knowledge of SQL – Good to Have
  • Good knowledge of shell scripting – Good to Have
  • Good knowledge of a workflow engine such as Oozie or Autosys – Good to Have
  • Good knowledge of Agile development – Good to Have
  • Good knowledge of the cloud – Good to Have
  • Passionate about exploring new technologies – Good to Have
  • Automation-minded approach – Good to Have


Roles & Responsibilities


The selected candidate will work on Data Warehouse modernization projects and will be responsible for the following activities:

  • Develop programs/scripts in Python/Java + SparkSQL/Spark DataFrame, or Python/Java + cloud-native SQL such as RedshiftSQL or SnowSQL
  • Validation of scripts
  • Performance tuning
  • Data ingestion from source to target platform
  • Job orchestration
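Scripts of the kind described above typically generate analytical SQL that runs much the same way on SparkSQL, RedshiftSQL, or SnowSQL. A self-contained sketch using in-memory SQLite as a stand-in engine (the table and data are invented for illustration):

```python
import sqlite3

# In-memory SQLite stands in for a warehouse engine; the GROUP BY aggregation
# is the kind of query such modernization scripts generate and validate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("north", 50.0), ("south", 75.0)],
)

rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
    """
).fetchall()
print(rows)   # [('north', 150.0), ('south', 75.0)]
```

Validating a migrated script often amounts to running the same query on source and target and diffing result sets like this one.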

Big Data Developer

Pune, Maharashtra Impetus

Posted 2 days ago

Job Description

Location: Indore, Pune, Bangalore, Gurugram, Noida


Notice period: immediate joiners, or currently serving notice (30-45 days)


  • 3-6 years of good hands-on exposure to Big Data technologies – PySpark (DataFrame and SparkSQL)
  • Hands-on experience using Cloud Platform-provided Big Data technologies (e.g., Glue, Lambda, Redshift, S3)
  • Good hands-on experience with Python
  • Good understanding of SQL and data warehouse tools such as Redshift
  • Strong analytical, problem-solving, data analysis, and research skills
  • Demonstrable ability to think outside the box and not be dependent on readily available tools
  • Excellent communication, presentation, and interpersonal skills are a must
  • Orchestration with Step Functions/MWAA, and experience with any job scheduler


Roles & Responsibilities


  • Develop efficient ETL pipelines through Spark or Glue
  • Implement business use cases using Python and PySpark
  • Write ELT/ETL jobs on AWS (Crawler, Glue Job)
  • Participate in peer code reviews to ensure our applications comply with best practices
  • Gather requirements to define AWS services and implement different security features accordingly
  • Provide estimates for development tasks
  • Perform integration testing of the developed infrastructure
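An ETL job of the sort described above reduces to an extract→transform→load skeleton. A minimal stdlib-only sketch (the data, field names, and sinks are invented; a real Glue or Spark job would read from and write to S3/Redshift rather than strings):

```python
import csv
import io
import json

# Invented raw input; one row is deliberately malformed to show the reject path.
raw = "id,price\n1,10.5\n2,bad\n3,4.0\n"

def extract(text):
    """Extract: parse the raw source into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: cast types, routing malformed rows away from the load step."""
    out = []
    for row in rows:
        try:
            out.append({"id": int(row["id"]), "price": float(row["price"])})
        except ValueError:
            continue          # a real job would write rejects to a quarantine sink
    return out

def load(rows):
    """Load: serialize the clean records for the target system."""
    return json.dumps(rows)

result = load(transform(extract(raw)))
print(result)   # [{"id": 1, "price": 10.5}, {"id": 3, "price": 4.0}]
```

Keeping the three stages as separate functions is also what makes the pipeline unit-testable stage by stage, which matters for the peer-review and testing duties listed above.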