276 Spark jobs in Delhi

Java Developer (with Spark SQL)

Narela, Delhi (Affine)

Posted 23 days ago

Job Description

Experience: 4–9 years

Work Location: India (Remote); Bengaluru preferred.

Work Timings: 1:00pm to 10:00pm IST


We are seeking experienced Java Developers with strong Spark SQL skills to join a fast-paced project for a global travel technology client. The role focuses on building API integrations to connect with external data vendors and creating high-performance Spark jobs to process and land raw data into target systems.

You will work closely with distributed teams, including US-based stakeholders, and must be able to deliver quality output in a short timeframe.


Key Responsibilities:


  • Design, develop, and optimize Java-based backend services (Spring Boot / Microservices) for API integrations.
  • Develop and maintain Spark SQL queries and data processing pipelines for large-scale data ingestion.
  • Build Spark batch and streaming jobs to land raw data from multiple vendor APIs into data lakes or warehouses.
  • Implement robust error handling, logging, and monitoring for data pipelines.
  • Collaborate with cross-functional teams across geographies to define integration requirements and deliverables.
  • Troubleshoot and optimize Spark SQL for performance and cost efficiency.
  • Participate in Agile ceremonies, daily standups, and client discussions.


Required Skills:


  • 4 to 8 years of relevant experience.
  • Core Java (Java 8 or above) with proven API development experience.
  • Apache Spark (Core, SQL, DataFrame APIs) for large-scale data processing.
  • Spark SQL – strong ability to write and optimize queries for complex joins, aggregations, and transformations.
  • Experience with API integration (RESTful APIs, authentication, payload handling, and rate limiting).
  • Hands-on with data ingestion frameworks and ETL concepts.
  • Experience with MySQL or other RDBMS for relational data management.
  • Proficiency in Git for version control.
  • Strong debugging, performance tuning, and problem-solving skills.
  • Ability to work with minimal supervision in a short-term, delivery-focused engagement.
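The API-integration requirements above call out rate limiting alongside authentication and payload handling. As a minimal sketch of how a vendor-API client might cope with throttling, here is a retry wrapper with exponential backoff; `RateLimitError` and the `fetch` callable are hypothetical stand-ins for whatever HTTP client and 429 handling the actual project uses.

```python
import time

class RateLimitError(Exception):
    """Raised when the vendor API signals throttling, e.g. HTTP 429 (hypothetical)."""

def call_with_backoff(fetch, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Call `fetch` (a zero-arg function wrapping one vendor API request),
    retrying with exponential backoff when it raises RateLimitError."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Wait 1s, 2s, 4s, ... before retrying.
            sleep(base_delay * (2 ** attempt))
```

In a real integration the `fetch` callable would issue the HTTP request and raise `RateLimitError` on a throttled response; injecting `sleep` keeps the wrapper testable.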

Cosmos and Spark Development Lead

Delhi, Delhi (Tata Consultancy Services)

Posted 22 days ago

Job Description

Cosmos (Primary) + Spark Development Lead


Experience: 10–12 years


Location: Bangalore, Hyderabad, or PAN India


  • At least 10+ years of experience in Cosmos DB data modeling and the Spark SDK for Cosmos DB
  • Experience with Cosmos DB partitioning and indexing
  • Experience in RU (Request Unit) optimization and performance tuning
  • Experience with PySpark
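The partitioning bullet above is largely about choosing a partition key that spreads load evenly. A common Cosmos DB modeling tactic is a synthetic key combining several fields; the sketch below illustrates the idea in plain Python, using md5 only as a stand-in for the service's internal hash (the function and bucket count are illustrative, not Cosmos DB's real mechanics).

```python
import hashlib
from collections import Counter

def synthetic_partition_key(tenant_id: str, event_date: str) -> str:
    # Combining tenant and date avoids a "hot" partition when one
    # tenant dominates write traffic.
    return f"{tenant_id}:{event_date}"

def logical_partition(key: str, buckets: int = 8) -> int:
    # md5 stands in for the service's internal hash-partitioning;
    # illustration only.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % buckets

# 4 tenants x 10 days of events -> keys should land across many buckets.
keys = [synthetic_partition_key(f"tenant{t}", f"2024-01-{d:02d}")
        for t in range(4) for d in range(1, 11)]
spread = Counter(logical_partition(k) for k in keys)
```

A key like `tenant_id` alone would funnel one busy tenant's writes into a single logical partition; the composite key trades point-read simplicity for even distribution.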

Data Processing Agency

Delhi, Delhi (Satyam Drugs)

Posted 1 day ago

Job Description

We are looking for a reliable agency that can provide a team of 20 Virtual Assistants to help with a large-scale website data review and processing project. The ideal agency should have a team ready to start immediately, with experience in data entry, web research, and bulk processing tasks.

Project Details:

- Task: Reviewing and processing data from a website
- Team Size Needed: 20 Virtual Assistants
- Workload: High-volume tasks requiring speed and accuracy
- Estimated Hours: Flexible, but each VA should be available for at least 20-30 hours per week
- Tools: Google Sheets, website logins (credentials provided), and web-based tools
- Training: Brief training will be provided before starting

Requirements for the Agency:
- Ability to quickly deploy a team of 20 VAs
- Experience handling large-scale data processing or similar tasks
- Strong quality control processes to ensure accuracy
- A project manager or team lead to oversee work and ensure deadlines are met
- Proven track record with similar high-volume projects

Pay: ₹9,291.20 - ₹20,000.00 per month

Schedule:

- Monday to Friday
- Morning shift
- Evening shift
- Night shift

Supplemental Pay:

- Performance bonus

Application Question(s):

- Are you an agency?
- Can you provide 20 data entry executives?

Work Location: In person

Big Data Engineer - Scala

Narela, Delhi (Idyllic Services)

Posted 1 day ago

Job Description

Job Title: Big Data Engineer – Scala

Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

Experience: 7–10 Years (Minimum 3+ years in Scala)

Notice Period: Immediate to 30 Days

Mode of Work: Hybrid


Role Overview

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience with distributed data systems and cloud-based solutions.


Key Responsibilities

- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.

- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.

- Develop and integrate streaming solutions using Apache Kafka and orchestration tools such as NiFi and Airflow.

- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.

- Collaborate with cross-functional teams to design scalable cloud-based data architectures.

- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.

- Build monitoring and alerting systems leveraging Splunk or equivalent tools.

- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.

- Contribute to product development with a focus on scalability, maintainability, and performance.
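The streaming responsibilities above boil down to a recurring shape: consume events in micro-batches, transform them, and maintain running aggregate state. As a minimal stdlib sketch of that shape (standing in for a Spark Structured Streaming job fed by Kafka; all names here are illustrative), consider:

```python
from collections import defaultdict
from itertools import islice

def micro_batches(events, batch_size):
    """Yield fixed-size batches from an event stream, mimicking a
    streaming trigger interval."""
    it = iter(events)
    while batch := list(islice(it, batch_size)):
        yield batch

def run_pipeline(events, batch_size=3):
    """Sum event amounts per key one micro-batch at a time, the way a
    stateful streaming aggregation carries state across triggers."""
    state = defaultdict(float)
    for batch in micro_batches(events, batch_size):
        for key, amount in batch:
            state[key] += amount
    return dict(state)
```

In the real system the event source would be a Kafka topic and the state store would be Spark's checkpointed state; the batching/aggregation logic is what interviews for such roles typically probe.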


Mandatory Skills

- Scala – minimum 3+ years of hands-on experience.

- Strong expertise in Spark (PySpark) and Python.

- Hands-on experience with Apache Kafka.

- Knowledge of NiFi / Airflow for orchestration.

- Strong experience with distributed data systems (5+ years).

- Proficiency in SQL and query optimization.

- Good understanding of cloud architecture.


Preferred Skills

- Exposure to messaging technologies like Apache Kafka or equivalent.

- Experience in designing intuitive, responsive UIs for data analytics visualization.

- Familiarity with Splunk or other monitoring/alerting solutions.

- Hands-on experience with CI/CD tools (Git, Jenkins).

- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
