18,060 Scala jobs in India
Big Data Engineer - Scala
Posted 2 days ago
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.
Experience: 7–10 Years (Minimum 3+ years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi / Airflow (see the sketch after this list).
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
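To make the Scala / Spark / Kafka responsibility above concrete, here is a minimal, illustrative sketch rather than anything taken from the role itself: a Spark Structured Streaming job in Scala that reads JSON events from a Kafka topic, aggregates them over event-time windows, and writes the result to Parquet. The broker address, topic name, event schema, and output paths are hypothetical placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object OrderEventsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("order-events-pipeline")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical event schema, for illustration only.
    val schema = new StructType()
      .add("orderId", StringType)
      .add("amount", DoubleType)
      .add("eventTime", TimestampType)

    // Read a stream of JSON events from Kafka (broker and topic are placeholders).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "orders")
      .load()
      .select(from_json($"value".cast("string"), schema).as("e"))
      .select("e.*")

    // Windowed aggregation with a watermark to bound state for late-arriving events.
    val revenue = events
      .withWatermark("eventTime", "15 minutes")
      .groupBy(window($"eventTime", "10 minutes"))
      .agg(sum($"amount").as("revenue"))

    // Write finalized windows to Parquet; file sinks require a checkpoint location.
    revenue.writeStream
      .format("parquet")
      .option("path", "/data/curated/orders_revenue")
      .option("checkpointLocation", "/data/checkpoints/orders_revenue")
      .outputMode("append")
      .start()
      .awaitTermination()
  }
}
```

The watermark is what lets the append-mode file sink work with a streaming aggregation: it bounds how long the job waits for late events before a window's result is finalized and written out.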
Mandatory Skills
- Scala – Minimum 3+ years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi / Airflow for orchestration.
- Strong experience in Distributed Data Systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of Cloud Architecture.
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.