Java Developer (with Spark SQL)
Posted 23 days ago
Job Description
Experience: 4–9 years
Work Location: India (Remote); Bengaluru preferred.
Work Timings: 1:00 PM to 10:00 PM IST
We are seeking experienced Java Developers with strong Spark SQL skills to join a fast-paced project for a global travel technology client. The role focuses on building API integrations to connect with external data vendors and creating high-performance Spark jobs to process and land raw data into target systems.
You will work closely with distributed teams, including US-based stakeholders, and must be able to deliver quality output in a short timeframe.
Key Responsibilities:
- Design, develop, and optimize Java-based backend services (Spring Boot / Microservices) for API integrations.
- Develop and maintain Spark SQL queries and data processing pipelines for large-scale data ingestion.
- Build Spark batch and streaming jobs to land raw data from multiple vendor APIs into data lakes or warehouses.
- Implement robust error handling, logging, and monitoring for data pipelines.
- Collaborate with cross-functional teams across geographies to define integration requirements and deliverables.
- Troubleshoot and optimize Spark SQL for performance and cost efficiency.
- Participate in Agile ceremonies, daily standups, and client discussions.
Required Skills:
- 4 to 8 years of relevant experience.
- Core Java (Java 8 or above) with proven API development experience.
- Apache Spark (Core, SQL, DataFrame APIs) for large-scale data processing.
- Spark SQL – strong ability to write and optimize queries for complex joins, aggregations, and transformations.
- Experience with API integration (RESTful APIs, authentication, payload handling, and rate limiting).
- Hands-on with data ingestion frameworks and ETL concepts.
- Experience with MySQL or other RDBMS for relational data management.
- Proficiency in Git for version control.
- Strong debugging, performance tuning, and problem-solving skills.
- Ability to work with minimal supervision in a short-term, delivery-focused engagement.
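The Spark SQL requirement above (complex joins, aggregations, and transformations) can be sketched with a small example. In a Spark job the query below would be submitted via `spark.sql(...)`; since it is plain ANSI SQL, it is validated here against Python's built-in `sqlite3` so the sketch is self-contained. The table and column names (`bookings`, `vendors`, `amount`) are invented for illustration, not from any actual client schema.

```python
import sqlite3

# Hypothetical vendor-ingestion schema; in Spark the same ANSI SQL
# join/aggregation would be submitted via spark.sql(query).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bookings (booking_id INTEGER, vendor_id INTEGER, amount REAL);
    CREATE TABLE vendors  (vendor_id INTEGER, vendor_name TEXT);
    INSERT INTO vendors VALUES (1, 'AirVendor'), (2, 'HotelVendor');
    INSERT INTO bookings VALUES (10, 1, 120.0), (11, 1, 80.0), (12, 2, 350.0);
""")

# Aggregate booking revenue per vendor -- the kind of join + GROUP BY
# an ingestion pipeline lands into a reporting table.
query = """
    SELECT v.vendor_name, COUNT(*) AS n_bookings, SUM(b.amount) AS revenue
    FROM bookings b
    JOIN vendors v ON v.vendor_id = b.vendor_id
    GROUP BY v.vendor_name
    ORDER BY revenue DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('HotelVendor', 1, 350.0), ('AirVendor', 2, 200.0)]
```

In a real pipeline the optimization work named in the posting (join strategy, partitioning, avoiding shuffles) happens around exactly this kind of query.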
Cosmos and Spark Development Lead
Posted 22 days ago
Job Description
Cosmos (Primary) + Spark Development Lead
Experience: 10–12 years
Location: Bangalore, Hyderabad, or PAN India
- At least 10 years of experience in Cosmos DB data modeling and the Spark SDK/connector for Cosmos DB
- Experience with Cosmos DB partitioning and indexing
- Experience with RU optimization and performance tuning
- Experience with PySpark
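The partitioning and RU-optimization bullets above go together: a common Cosmos DB modeling technique is a synthetic partition key that spreads a write-heavy logical partition across N buckets, so point reads stay cheap while hot partitions are avoided. Below is a plain-Python sketch of that idea, with no Cosmos SDK involved; the field names (`tenant_id`, `order_id`) and the bucket count are invented for illustration.

```python
import hashlib

# Sketch of a synthetic partition key, a common Cosmos DB pattern for
# spreading a hot logical partition (e.g. one busy tenant) across N
# buckets. Plain Python only -- no Cosmos SDK; the field names
# (tenant_id, order_id) are hypothetical.
N_BUCKETS = 10

def synthetic_partition_key(tenant_id: str, order_id: str) -> str:
    # Stable hash (not Python's randomized hash()) so the same order
    # always maps to the same bucket across processes.
    digest = hashlib.sha256(order_id.encode()).hexdigest()
    bucket = int(digest, 16) % N_BUCKETS
    return f"{tenant_id}-{bucket}"

# A read for a known order recomputes the same key (a point read, the
# cheapest operation in RU terms); a query for the whole tenant fans
# out across at most N_BUCKETS partitions.
keys = {synthetic_partition_key("tenant42", f"order{i}") for i in range(1000)}
print(len(keys))
```

The trade-off this sketches: writes for one tenant spread over N_BUCKETS physical partitions, while tenant-wide queries pay a bounded fan-out.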
Data Processing Agency
Posted 1 day ago
Job Description
Project Details:
- Task: Reviewing and processing data from a website
- Team Size Needed: 20 Virtual Assistants
- Workload: High-volume tasks requiring speed and accuracy
- Estimated Hours: Flexible; each VA should be available for 20–30 hours per week
- Tools: Google Sheets, website logins (credentials provided), and web-based tools
- Training: Brief training will be provided before starting
Requirements for the Agency:
Ability to quickly deploy a team of 20 VAs
Experience handling large-scale data processing or similar tasks
Strong quality control processes to ensure accuracy
Project manager or team lead to oversee work and ensure deadlines are met
Proven track record with similar high-volume projects
Pay: ₹9,291.20 - ₹20,000.00 per month
Schedule:
- Evening shift
- Monday to Friday
- Morning shift
- Night shift
Supplemental Pay:
- Performance bonus
Application Question(s):
- Are you an agency?
- Can you provide 20 data entry executives?
Work Location: In person
Big Data Engineer - Scala
Posted 1 day ago
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 Years (Minimum 3+ years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience with distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow.
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
Mandatory Skills
- Scala – minimum 3+ years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience with distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
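The streaming responsibility above (Kafka events aggregated by a Spark job) usually reduces to windowed aggregation. The sketch below shows a tumbling-window count in plain Python to make the concept concrete; it is not Kafka client or Spark Structured Streaming code, and the event shape (`(timestamp, key)` tuples) and window size are invented for illustration.

```python
from collections import defaultdict

# Conceptual sketch of a tumbling-window aggregation -- the kind of
# logic a Kafka/Spark Structured Streaming job applies to an event
# stream. Plain Python, no Kafka client; event fields are hypothetical.
WINDOW_SECONDS = 60

def tumbling_window_counts(events):
    """Count events per (window_start, key) over fixed 60-second windows."""
    counts = defaultdict(int)
    for ts, key in events:
        # Each event belongs to exactly one non-overlapping window.
        window_start = (ts // WINDOW_SECONDS) * WINDOW_SECONDS
        counts[(window_start, key)] += 1
    return dict(counts)

events = [
    (0, "clicks"), (30, "clicks"), (59, "views"),   # window starting at 0
    (60, "clicks"), (119, "clicks"),                # window starting at 60
    (120, "views"),                                 # window starting at 120
]
result = tumbling_window_counts(events)
print(result)
```

In Spark Structured Streaming the equivalent is a `groupBy(window(...), key).count()` over a Kafka source, with watermarking handling late events; the batch logic is the same.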