1,238 Big Data jobs in India

Big Data

Andhra Pradesh Virtusa

Posted today


Job Description

Data Engineer
Must have 9+ years of experience in the skills listed below.
Must Have: Big Data concepts, Python (core Python; able to write code), SQL, Shell Scripting, AWS S3 (a brief S3 sketch follows below)
Good to Have: Event-driven architecture/AWS SQS, Microservices, API Development, Kafka, Kubernetes, Argo, Amazon Redshift, Amazon Aurora
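Since the must-have list centres on core Python and AWS S3, here is a minimal, hypothetical sketch of the kind of scripting the role implies: reading a CSV object from S3 with boto3. The bucket and key names are illustrative assumptions, not details from the posting.

```python
# Hypothetical example: read a CSV object from S3 and count its rows.
# Bucket/key names are illustrative; credentials come from the standard
# AWS credential chain (environment, config file, or instance role).
import csv
import io

import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-data-bucket", Key="ingest/2024/orders.csv")
body = obj["Body"].read().decode("utf-8")

rows = list(csv.DictReader(io.StringIO(body)))
print(f"Loaded {len(rows)} rows from S3")
```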

**About Virtusa**

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 27,000 people that cares about your growth and seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Big Data Developer

Indore, Madhya Pradesh Impetus

Posted today


Job Description

Job Description for Big Data or Cloud Engineer


Position Summary:

We are looking for candidates with hands-on experience in Big Data or Cloud technologies.


Must-Have Technical Skills


  • 2-4 years of experience
  • Expertise and hands-on experience in Python – Must Have
  • Expert knowledge of SparkSQL/Spark DataFrame – Must Have
  • Good knowledge of SQL – Good to Have
  • Good knowledge of shell scripting – Good to Have
  • Good knowledge of one of the workflow engines such as Oozie or Autosys – Good to Have
  • Good knowledge of Agile development – Good to Have
  • Good knowledge of Cloud – Good to Have
  • Passionate about exploring new technologies – Good to Have
  • Automation-oriented approach – Good to Have


Roles & Responsibilities


The selected candidate will work on data warehouse modernization projects and will be responsible for the following activities.

  • Develop programs/scripts in Python/Java with SparkSQL/Spark DataFrame, or in Python/Java with cloud-native SQL such as Redshift SQL or SnowSQL (see the sketch after this list)
  • Validation of scripts
  • Performance tuning
  • Data ingestion from source to target platform
  • Job orchestration
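
As a rough illustration of the first bullet, here is a minimal PySpark sketch that ingests a source table, derives an aggregate with SparkSQL, and lands the result on a target path. All paths, table names, and columns are assumptions for the example, not details from the posting.

```python
# Hypothetical sketch: PySpark DataFrame + SparkSQL in one small job.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dw-modernization-example").getOrCreate()

# Ingest from a source path (illustrative) into a DataFrame.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
orders.createOrReplaceTempView("orders")

# Derive an aggregate with SparkSQL.
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
""")

# Land the result on the target platform (illustrative path).
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
```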

Big Data Developer

Nagpur, Maharashtra HCLTech

Posted today


Job Description

Position: Big Data (Senior Developer / Lead)

Experience: 4 - 9 years

Location: Nagpur


Responsibilities:

  • Preferred skillset: Spark, Scala, Linux-based Hadoop ecosystem (HDFS, Impala, Hive, HBase, etc.), SQL, Jenkins.
  • Experience in Big Data technologies and real-time data processing platforms (Spark Streaming with Kafka); see the sketch after this list.
  • Experience with Cloudera would be an advantage.
  • Hands-on experience with Unix commands.
  • Strong foundation in computer science fundamentals: data structures, algorithms, and coding.
  • Experienced in performance optimization techniques.
  • Consistently demonstrates clear and concise written and verbal communication.
  • Ability to multi-task and provide weekend support for production releases.
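
The real-time item above (Spark Streaming with Kafka) can be sketched roughly as follows, shown here with Spark Structured Streaming, the current streaming API. The broker address, topic, and sink paths are illustrative assumptions.

```python
# Hypothetical sketch: consuming a Kafka topic with Spark Structured Streaming.
# Broker address, topic name, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-example").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before processing.
parsed = events.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/streams/events/")              # illustrative sink
    .option("checkpointLocation", "/data/checkpoints/events/")
    .start()
)
query.awaitTermination()
```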

Big Data Developer

Pune, Maharashtra Impetus

Posted today


Job Description

Location: Indore, Pune, Bangalore, Gurugram, Noida


Notice period: can join immediately or currently serving (30-45 days)


  • 3-6 years of solid hands-on exposure to Big Data technologies – PySpark (DataFrame and SparkSQL)
  • Hands-on experience with Cloud Platform-provided Big Data technologies (e.g., Glue, Lambda, Redshift, S3)
  • Good hands-on experience with Python
  • Good understanding of SQL and data warehouse tools such as Redshift
  • Strong analytical, problem-solving, data analysis and research skills
  • Demonstrable ability to think outside the box and not be dependent on readily available tools
  • Excellent communication, presentation and interpersonal skills are a must
  • Orchestration with Step Functions/MWAA, plus experience with any job scheduler


Roles & Responsibilities


  • Develop efficient ETL pipelines with Spark or Glue (see the sketch after this list)
  • Implement business use cases using Python and PySpark.
  • Write ELT/ETL jobs on AWS (Crawler, Glue Job).
  • Participate in peer code reviews to ensure our applications comply with best practices.
  • Gather requirements to define AWS services and implement the corresponding security features.
  • Provide estimates for development tasks.
  • Perform integration testing of the developed infrastructure.
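
As a rough sketch of the Glue bullets above, here is a minimal, hypothetical AWS Glue ETL job that reads a crawler-catalogued table, filters invalid rows, and writes Parquet to a curated zone. The database, table, and bucket names are assumptions.

```python
# Hypothetical sketch of an AWS Glue ETL job (PySpark-based).
# Database/table/path names are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a table that a Glue crawler has already catalogued.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Drop obviously invalid rows, then write to the curated zone.
cleaned = source.filter(f=lambda row: row["amount"] is not None)
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```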

Big Data Developer

Bengaluru, Karnataka Impetus

Posted today


Job Description

Job Description:


Experience working with the Spark framework, with a good understanding of core concepts, optimizations, and best practices

Good hands-on experience writing code in PySpark; should understand design principles and OOP

Good experience writing complex queries to derive business-critical insights

Hands-on experience with stream data processing

Understanding of Data Lake vs. Data Warehouse concepts

Knowledge of machine learning would be an added advantage

Experience with NoSQL technologies – MongoDB, DynamoDB

Good understanding of test-driven development (a brief testing sketch follows the responsibilities below)

Flexible to learn new technologies


Roles & Responsibilities:

Design and implement solutions for problems arising out of large-scale data processing

Attend/drive various architectural, design and status calls with multiple stakeholders

Ensure end-to-end ownership of all tasks being aligned including development, testing, deployment and support

Design, build & maintain efficient, reusable & reliable code

Test implementation, troubleshoot & correct problems

Capable of working both as an individual contributor and within a team

Ensure high-quality software development with complete documentation and traceability

Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)
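
For the test-driven development item in the skills list, here is a minimal, hypothetical pytest sketch for a small PySpark transformation; the function under test and its columns are invented for the example.

```python
# Hypothetical TDD sketch: a small PySpark transformation and its pytest test.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql.functions import col


def filter_active(df):
    """Transformation under test: keep rows where status == 'active'."""
    return df.filter(col("status") == "active")


@pytest.fixture(scope="module")
def spark():
    session = (
        SparkSession.builder.master("local[1]").appName("tdd-example").getOrCreate()
    )
    yield session
    session.stop()


def test_filter_active_keeps_only_active_rows(spark):
    df = spark.createDataFrame(
        [("a", "active"), ("b", "inactive")], ["id", "status"]
    )
    result = filter_active(df)
    assert [r["id"] for r in result.collect()] == ["a"]
```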


Big Data Developer

Bengaluru, Karnataka Impetus

Posted today


Job Description


**LOOKING FOR IMMEDIATE JOINERS ONLY**

Qualification

Degree – Graduate/Postgraduate in CSE or a related field


Job Description for Big Data or Cloud Engineer


Position Summary:

We are looking for candidates with hands-on experience in Big Data or Cloud technologies.



Must-Have Technical Skills

  • 3-6 years of experience
  • Expertise and hands-on experience in Python – Must Have
  • Expert knowledge of SparkSQL/Spark DataFrame – Must Have
  • Good knowledge of SQL – Good to Have
  • Good knowledge of shell scripting – Good to Have
  • Good knowledge of one of the workflow engines such as Oozie or Autosys – Good to Have
  • Good knowledge of Agile development – Good to Have
  • Good knowledge of Cloud – Good to Have
  • Passionate about exploring new technologies – Good to Have
  • Automation-oriented approach – Good to Have


Roles & Responsibilities


The selected candidate will work on data warehouse modernization projects and will be responsible for the following activities.

  • Develop programs/scripts in Python/Java with SparkSQL/Spark DataFrame, or in Python/Java with cloud-native SQL such as Redshift SQL or SnowSQL
  • Validation of scripts (see the sketch after this list)
  • Performance tuning
  • Data ingestion from source to target platform
  • Job orchestration
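
As a rough illustration of the validation bullet, here is a minimal, hypothetical check that compares row counts between a legacy source and a modernized target after migration; the paths are assumptions.

```python
# Hypothetical validation sketch: compare source vs. target row counts
# after a data warehouse migration. Paths are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migration-validation").getOrCreate()

source_count = spark.read.parquet("s3://example-bucket/legacy/orders/").count()
target_count = spark.read.parquet("s3://example-bucket/modernized/orders/").count()

if source_count != target_count:
    raise ValueError(
        f"Row count mismatch: source={source_count}, target={target_count}"
    )
print(f"Validation passed: {source_count} rows in both source and target")
```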

Big Data Developer

Noida, Uttar Pradesh Impetus

Posted today


Job Description

Immediate Joiners Preferred


Location: Indore, Noida, Gurugram, Hyderabad, Bangalore, Pune


  • 2-4 years of experience
  • Expertise and hands-on experience in Python – Must Have
  • Expert knowledge of SparkSQL – Must Have
  • Good knowledge of at least one cloud (AWS/Azure/GCP) – Must Have
  • Good knowledge of SQL – Good to Have
  • Good knowledge of shell scripting – Good to Have
  • Good knowledge of one of the workflow engines such as Oozie or Autosys – Good to Have
  • Good knowledge of Agile development – Good to Have
  • Passionate about exploring new technologies – Good to Have
  • Automation-oriented approach – Good to Have
  • Good communication skills – Good to Have


Roles & Responsibilities



The selected candidate will work on data warehouse modernization projects and will be responsible for the following activities.

  • Develop programs/scripts in Python/Java with SparkSQL/Spark DataFrame, or in Python with cloud-native SQL such as Redshift SQL or SnowSQL
  • Validation of scripts
  • Performance tuning
  • Data ingestion from source to target platform
  • Job orchestration (see the sketch after this list)
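
The job orchestration bullet can be illustrated with a minimal, hypothetical Apache Airflow DAG (the listing names Oozie/Autosys; Airflow is used here only as a widely known stand-in, assuming Airflow 2.4+). The DAG id, schedule, and script paths are assumptions.

```python
# Hypothetical orchestration sketch: a minimal Airflow DAG that runs two
# Spark jobs in sequence, daily. All ids, paths, and commands are
# illustrative assumptions (Airflow 2.4+ `schedule` argument).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dw_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest",
        bash_command="spark-submit /opt/jobs/ingest_orders.py",
    )
    validate = BashOperator(
        task_id="validate",
        bash_command="spark-submit /opt/jobs/validate_orders.py",
    )
    # Run validation only after ingestion completes.
    ingest >> validate
```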


If you are currently serving your notice period and are readily available within 30-35 days,

Please share your resume at

Please mention your CTC, expected CTC, notice period (mention LWD), and location.


Big Data Developer

Affine

Posted today


Job Description

Experience: 5 to 9 years


Must have Skills:

  • Kotlin/Scala/Java
  • Spark
  • SQL
  • Spark Streaming
  • Any cloud (AWS preferable)
  • Kafka /Kinesis/Any streaming services
  • Object-Oriented Programming
  • Hive, ETL/ELT design experience
  • CI/CD experience (ETL pipeline deployment)
  • Data Modeling experience


Good to Have Skills:

  • Git/similar version control tool
  • Knowledge of CI/CD and microservices


Role Objective:


The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.


Roles & Responsibilities:

  • Sound knowledge of Spark architecture, distributed computing, and Spark streaming.
  • Proficient in Spark – including RDD and DataFrame core functions, troubleshooting, and performance tuning (a brief tuning sketch follows this list).
  • Good understanding of object-oriented concepts and hands-on experience in Kotlin/Scala/Java with excellent programming logic and technique.
  • Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
  • Good experience in SQL.
  • Manage a team of Associates and Senior Associates, ensuring utilization is maintained across the project.
  • Able to mentor new members during onboarding to the project.
  • Understand client requirements and be able to design, develop from scratch, and deliver.
  • AWS cloud experience would be preferable.
  • Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on the cloud (AWS preferred).
  • Lead client calls to flag delays, blockers, and escalations, and collate requirements.
  • Manage project timelines and client expectations, and meet deadlines.
  • Should have played project and team management roles.
  • Facilitate meetings within the team on a regular basis.
  • Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
  • Optimization, maintenance, and support of pipelines.
  • Strong analytical and logical skills.
  • Ability to comfortably tackle new challenges and learn.
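
For the performance tuning item above, here is a minimal sketch of two common Spark optimizations: a broadcast join and caching. The listing asks for Kotlin/Scala/Java; PySpark is used here only for consistency with the other sketches on this page, and all paths and columns are assumptions.

```python
# Hypothetical performance-tuning sketch: broadcast a small dimension
# table to avoid a shuffle, and cache a reused DataFrame. Paths and
# columns are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-example").getOrCreate()

facts = spark.read.parquet("s3://example-bucket/facts/clicks/")
dims = spark.read.parquet("s3://example-bucket/dims/pages/")  # small table

# Broadcast join: ship the small table to every executor instead of
# shuffling the large one.
joined = facts.join(broadcast(dims), on="page_id")

# Cache when the same intermediate result feeds several downstream actions.
joined.cache()
print(joined.count())
print(joined.filter("country = 'IN'").count())
```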

Big Data Developer

Hyderabad, Andhra Pradesh MetLife

Posted today


Job Description

Position Summary

MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future-ready. The center is integral to Global Technology and Operations, with a focus on protecting and building MetLife IP, promoting reusability, and driving experimentation and innovation. The Data & Analytics team in India mirrors the global D&A team, with an objective to drive business value through trusted data, scaled capabilities, and actionable insights.


Role Value Proposition

MetLife Global Capability Center (MGCC) is looking for a Senior Cloud Data Engineer who will be responsible for building ETL/ELT, data warehousing, and reusable components using Azure, Databricks, and Spark. He/she will collaborate with business systems analysts, technical leads, project managers, and business/operations teams in building data enablement solutions across different LOBs and use cases.

Job Responsibilities

  • Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL) processes
  • Develop metadata- and configuration-based reusable frameworks to reduce development effort (a brief config-driven sketch follows this list)
  • Develop quality code with performance optimizations in place right at the development stage.
  • Collaborate with the global team in driving project delivery and recommend development and performance improvements.
  • Extensive experience with various database types and the knowledge to leverage the right one for the need.
  • Strong understanding of data tools and the ability to leverage them to understand the data and generate insights
  • Hands-on experience in building/designing at-scale data lakes, data warehouses, and data stores for analytics consumption, on-premises and in the cloud (real-time as well as batch use cases)
  • Ability to interact with business analysts and functional analysts in gathering requirements and implementing ETL solutions.
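
As a rough illustration of the metadata/configuration-driven framework bullet, here is a minimal, hypothetical PySpark loop that drives ingestion from a config list instead of hard-coded jobs; in practice the metadata might live in a control table or JSON file, and all names and paths here are assumptions.

```python
# Hypothetical sketch of a metadata/config-driven ingestion step: the
# table list and paths come from a config structure rather than being
# hard-coded, so one job serves many feeds. All names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config-driven-ingest").getOrCreate()

# In practice this metadata might live in a control table or JSON file.
FEEDS = [
    {"name": "policies", "source": "/mnt/raw/policies/", "target": "/mnt/curated/policies/"},
    {"name": "claims",   "source": "/mnt/raw/claims/",   "target": "/mnt/curated/claims/"},
]

for feed in FEEDS:
    df = spark.read.parquet(feed["source"])
    df.dropDuplicates().write.mode("overwrite").parquet(feed["target"])
    print(f"Ingested feed '{feed['name']}': {df.count()} rows")
```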


Education, Technical Skills & Other Critical Requirements

Education

Bachelor’s degree in Computer Science, Engineering, or a related discipline

Experience

8 to 10 years of working experience on Azure Cloud using Databricks or Synapse

Technical Skills

  1. Experience in transforming data using Python, Spark, or Scala
  2. Technical depth in Cloud Architecture Framework, Lakehouse architecture, and OneLake solutions.
  3. Experience in implementing data ingestion and curation processes on Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python), and Databricks.
  4. Experience in writing cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos DB SQL API, including loading and consumption optimizations.
  5. Scripting experience, primarily in shell/bash/PowerShell, would be desirable.
  6. Experience in writing SQL and performing data analysis for data anomaly detection and data quality assurance.

Big Data Engineer

Hyderabad, Andhra Pradesh RandomTrees

Posted today


Job Description

Job Title: Big Data Engineer

Experience: 5–9 Years

Location: Hyderabad-Hybrid

Employment Type: Full-Time

Job Summary:

We are seeking a skilled Big Data Engineer with 5–9 years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Big Data, Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience working with any cloud platform (AWS, Azure, or GCP) is preferred.

Required Skills:

  • 5–9 years of hands-on experience as a Big Data Engineer.
  • Strong proficiency in Apache Spark (PySpark or Scala).
  • Solid understanding and experience with SQL and database optimization.
  • Experience with data lake or data warehouse environments and architecture patterns.
  • Good understanding of data modeling, performance tuning, and partitioning strategies (a brief partitioning sketch follows below).
  • Experience working with large-scale distributed systems and batch/stream data processing.

Preferred Qualifications:

  • Experience with cloud platforms such as AWS, Azure, or GCP.

Education:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
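
For the partitioning strategies item above, here is a minimal, hypothetical sketch of a date-partitioned Parquet write, the pattern that lets downstream queries prune files; paths and columns are assumptions.

```python
# Hypothetical partitioning sketch: write a large dataset partitioned by
# date so downstream queries can prune files. Paths and columns are
# illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-example").getOrCreate()

events = spark.read.parquet("s3://example-bucket/raw/events/")

# Partitioning on a low-cardinality column such as event_date keeps the
# file count manageable while enabling partition pruning.
(
    events.repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/events/")
)
```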