417 Spark jobs in Delhi

Senior Backend Developer W/Spark

New Delhi, Delhi LiveRamp

Posted 1 day ago


Job Description

About Us

LiveRamp is the data collaboration platform of choice for the world’s most innovative companies. A groundbreaking leader in consumer privacy, data ethics, and foundational identity, LiveRamp is setting the new standard for building a connected customer view with unmatched clarity and context while protecting precious brand and consumer trust. LiveRamp offers complete flexibility to collaborate wherever data lives to support the widest range of data collaboration use cases—within organizations, between brands, and across its premier global network of top-quality partners.


Hundreds of global innovators, from iconic consumer brands and tech giants to banks, retailers, and healthcare leaders, turn to LiveRamp to build enduring brand and business value by deepening customer engagement and loyalty, activating new partnerships, and maximizing the value of their first-party data while staying on the forefront of rapidly evolving compliance and privacy requirements.


About the Role

LiveRamp is looking for a strong Backend Engineer with deep expertise in Big Data technologies to build and scale high-performance distributed systems.


You will:

  • Build and maintain large-scale, distributed backend systems.
  • Design and optimize Big Data ecosystems including Spark, Hadoop/MR, and Kafka.
  • Leverage cloud-based platforms (GCP, AWS, Azure) for development and deployment.
  • Implement observability practices including distributed tracing, SLOs, and SLIs.
  • Write maintainable, extensible, scalable, and high-performance backend code.
  • Collaborate with a global, cross-functional team to deliver projects end-to-end.
  • Ensure reliability, scalability, and performance of backend infrastructure.


Your team:

You will be part of the White Box Monitoring Development Team consisting of: 1 Team Lead Manager (TLM), 2 Backend Engineers (Level 5), and 1 Data Analyst. This team is responsible for building robust backend systems for monitoring, data processing, and event-driven architectures.


About you:

  • 6+ years of experience in software engineering (backend).
  • 3+ years of experience with cloud platforms (GCP, AWS, Azure).
  • 2+ years of hands-on experience managing/optimizing Big Data ecosystems (Spark, Hadoop/MR, Kafka).
  • Proficiency in compiled languages: Java, Scala, or Go.
  • 1+ year of experience in observability practices (distributed tracing, SLIs, SLOs, SLAs).
  • Bachelor’s or Master’s in Computer Science, Information Systems, or related field.
  • Strong knowledge of Object-Oriented Design (OOD) and Object-Oriented Analysis (OOA).
  • Proven track record of delivering large-scale, cross-functional projects.
  • Strong communication and collaboration skills, especially with remote teams.
  • Passion for building reliable, scalable distributed systems.


Preferred Skills:

  • Experience with real-time distributed databases (e.g., SingleStore).
  • Familiarity with GCP products such as Bigtable, BigQuery, Dataproc, and Pub/Sub.
  • Knowledge of event systems design and implementation.
  • Experience with infrastructure/deployment tools like Terraform, Kubernetes, Helm, Gradle.
  • Experience designing and implementing RESTful APIs at scale.
  • Strong technical knowledge of monitoring and reliability practices.


Benefits:

  • People: Work with talented, collaborative, and friendly people who love what they do.
  • Work/Life Harmony: Flexible paid time off, paid holidays, options for working from home, and paid parental leave.


More about us:

LiveRampers are empowered to live our values of committing to shared goals and operational excellence. Connecting LiveRampers to new ideas and to one another is one of our guiding principles, one that informs how we hire, train, and grow our global teams across nine countries and four continents. By continually building inclusive, high-belonging teams, LiveRampers can deliver exceptional work, champion innovative ideas, and be their best selves. Learn more about Diversity, Inclusion, & Belonging (DIB) at LiveRamp.



Spark / Scala Data Engineer

Delhi, Delhi Tata Consultancy Services

Posted today


Job Description

Role - Spark / Scala Data Engineer

Experience - 8 to 10 yrs

Location - Bangalore/Chennai/Hyderabad/Delhi/Pune


Must Have:

  • Solid hands-on experience with Big Data Hadoop: Hive and Spark/Scala.
  • Advanced SQL knowledge; able to test changes and issues properly, replicating code functionality in SQL.
  • Experience with code repositories such as Git and Maven.
  • DevOps knowledge (Jenkins, scripts) and tools for deploying software into environments; use of Jira.

Good to have:

  • Analyst skills: able to translate technical requirements for non-technical partners and deliver clear solutions; able to create test case scenarios.
  • Solid Control-M experience: able to create jobs and modify parameters.
  • Documentation: experience carrying out data and process analysis to create specification documents.
  • Finance knowledge: experience in a Financial Services/Banking organization with an understanding of Retail, Business, and Corporate Banking.
  • AWS knowledge.
  • Unix/Linux.



Big Data Engineer

Delhi, Delhi ₹150000 - ₹200000 yearly Qcentrio

Posted today


Job Description

Work Location: Pan India

Experience: 6+ Years

Notice Period: Immediate to 30 days

Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud

JD and required Skills & Responsibilities:

  • Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support.

  • Solve complex business problems by utilizing a disciplined development methodology.

  • Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.

  • Analyze the source and target system data and map the transformations that meet the requirements.

  • Interact with the client and onsite coordinators during different phases of a project.

  • Design and implement product features in collaboration with business and Technology stakeholders.

  • Anticipate, identify, and solve issues concerning data management to improve data quality.

  • Clean, prepare, and optimize data at scale for ingestion and consumption.

  • Support the implementation of new data management projects and re-structure the current data architecture.

  • Implement automated workflows and routines using workflow scheduling tools.

  • Understand and use continuous integration, test-driven development, and production deployment frameworks.

  • Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards.

  • Analyze and profile data for the purpose of designing scalable solutions.

  • Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues.

Required Skills:

  • 5+ years of relevant experience developing Data and analytic solutions.

  • Experience building data lake solutions leveraging one or more of the following: AWS EMR, S3, Hive, and PySpark.

  • Experience with relational SQL.

  • Experience with scripting languages such as Python.

  • Experience with source control tools such as GitHub and related dev process.

  • Experience with workflow scheduling tools such as Airflow.

  • In-depth knowledge of AWS Cloud (S3, EMR, Databricks)

  • Has a passion for data solutions.

  • Has a strong problem-solving and analytical mindset.

  • Working experience in the design, development, and testing of data pipelines.

  • Experience working with Agile Teams.

  • Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.

  • Able to quickly pick up new programming languages, technologies, and frameworks.

  • Bachelor's degree in Computer Science.


Big Data Engineer

Delhi, Delhi ₹90000 - ₹120000 yearly Qcentrio

Posted today


Job Description

We are seeking an experienced and driven Data Engineer with 5+ years of hands-on experience in building scalable data infrastructure and systems. You will play a key role in designing and developing robust, high-performance ETL pipelines and managing large-scale datasets to support critical business functions. This role requires deep technical expertise, strong problem-solving skills, and the ability to thrive in a fast-paced, evolving environment.

Key Responsibilities:

  • Design, develop, and maintain scalable and reliable ETL/ELT pipelines for processing large volumes of data (terabytes and beyond).

  • Model and structure data for performance, scalability, and usability.

  • Work with cloud infrastructure (preferably Azure) to build and optimize data workflows.

  • Leverage distributed computing frameworks like Apache Spark and Hadoop for large-scale data processing.

  • Build and manage data lake/lakehouse architectures in alignment with best practices.

  • Optimize ETL performance and manage cost-effective data operations.

  • Collaborate closely with cross-functional teams including data science, analytics, and software engineering.

  • Ensure data quality, integrity, and security across all stages of the data lifecycle.

Required Skills & Qualifications:

  • 7 to 10 years of relevant experience in big data engineering.

  • Advanced proficiency in Python.

  • Strong skills in SQL for complex data manipulation and analysis.

  • Hands-on experience with Apache Spark, Hadoop, or similar distributed systems.

  • Proven track record of handling large-scale datasets (TBs) in production environments.

  • Cloud development experience with Azure (preferred), AWS, or GCP.

  • Solid understanding of data lake and data lakehouse architectures.

  • Expertise in ETL performance tuning and cost optimization techniques.

  • Knowledge of data structures, algorithms, and modern software engineering practices.

Soft Skills:

  • Strong communication skills with the ability to explain complex technical concepts clearly and concisely.

  • Self-starter who learns quickly and takes ownership.

  • High attention to detail with a strong sense of data quality and reliability.

  • Comfortable working in an agile, fast-changing environment with incomplete requirements.

Preferred Qualifications:

  • Experience with tools like Apache Airflow, Azure Data Factory, or similar.

  • Familiarity with CI/CD and DevOps in the context of data engineering.

  • Knowledge of data governance, cataloging, and access control principles.

Skills: Python, SQL, AWS, Azure, Hadoop


Senior Big Data Engineer

Delhi, Delhi Veltris

Posted today


Job Description

Veltris is a Digital Product Engineering Services partner committed to driving technology-enabled transformation across enterprises, businesses, and industries. We specialize in delivering next-generation solutions for sectors including healthcare, technology, communications, manufacturing, and finance.

With a focus on innovation and acceleration, Veltris empowers clients to build, modernize, and scale intelligent products that deliver connected, AI-powered experiences. Our experience-centric approach, agile methodologies, and exceptional talent enable us to streamline product development, maximize platform ROI, and drive meaningful business outcomes across both digital and physical ecosystems.

In a strategic move to strengthen our healthcare offerings and expand industry capabilities, Veltris has acquired BPK Technologies. This acquisition enhances our domain expertise, broadens our go-to-market strategy, and positions us to deliver even greater value to enterprise and mid-market clients in healthcare and beyond.

Position: Senior Big Data Engineer

Must have Big Data analytics platform experience.

• Key stacks: Spark, Druid, Drill, ClickHouse.

• 8+ years of experience in Python/Java, CI/CD, infrastructure & cloud, and Terraform, plus depth in:

  • Big Data pipelines: Spark, Kafka, Glue, EMR, Hudi, Schema Registry, Data Lineage.

  • Graph DBs: Neo4j, Neptune, JanusGraph, Dgraph.

Preferred Qualifications:

• Master’s degree (M.Tech/MS) or Ph.D. in Computer Science, Information Technology, Data Science, Artificial Intelligence, Machine Learning, Software Engineering, or a related technical field.

• Candidates with an equivalent combination of education and relevant industry experience will also be considered.

Disclaimer:

The information provided herein is for general informational purposes only and reflects the current strategic direction and service offerings of Veltris. While we strive for accuracy, Veltris makes no representations or warranties regarding the completeness, reliability, or suitability of the information for any specific purpose. Any statements related to business growth, acquisitions, or future plans, including the acquisition of BPK Technologies, are subject to change without notice and do not constitute a binding commitment. Veltris reserves the right to modify its strategies, services, or business relationships at its sole discretion. For the most up-to-date and detailed information, please contact Veltris directly.

GCP Big Data Engineer

Delhi, Delhi Talentmatics

Posted 11 days ago


Job Description

We are seeking an experienced GCP Big Data Engineer with 8–10 years of expertise in designing, developing, and optimizing large-scale data processing solutions. The ideal candidate will bring strong leadership capabilities, technical depth, and a proven track record of delivering end-to-end big data solutions in cloud environments.

Key Responsibilities:

  • Lead and mentor teams in designing scalable and efficient ETL pipelines on Google Cloud Platform (GCP).
  • Drive best practices for data modeling, data integration, and data quality management.
  • Collaborate with stakeholders to define data engineering strategies aligned with business goals.
  • Ensure high performance, scalability, and reliability in data systems using SQL and PySpark.

Must-Have Skills:

  • GCP expertise in data engineering services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
  • Strong programming skills in SQL and PySpark.
  • Hands-on experience in ETL pipeline design, development, and optimization.
  • Strong problem-solving and leadership skills with experience guiding data engineering teams.

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
  • Relevant certifications in GCP Data Engineering preferred.
 
