828 Senior Data Engineer jobs in Mumbai
Big Data Engineer
Posted 1 day ago
Job Description
Greetings from TCS!
TCS is hiring for Big Data.
Location: Chennai/Mumbai/Pune
Desired Experience Range: 6 to 12 years
Must-Have
• PySpark
• Hive
Good-to-Have
• Spark
• HBase
• DQ (data quality) tools
• Agile Scrum experience
• Exposure to data ingestion from disparate sources onto a Big Data platform (an illustrative sketch follows this listing)
Thanks,
Anshika
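As a rough illustration of the Must-Have stack above (PySpark with Hive) and the ingestion exposure listed under Good-to-Have, here is a minimal sketch; the file path, database, table, and column names are hypothetical and not part of the posting:

```python
# Minimal, illustrative PySpark sketch: ingest a CSV landing file into a
# Hive-metastore table. All paths, table names, and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("csv-to-hive-ingestion")
    .enableHiveSupport()  # lets saveAsTable register the table in the Hive metastore
    .getOrCreate()
)

# Extract: read one of the "disparate sources" (here, a CSV file).
raw = spark.read.csv("/data/landing/transactions.csv", header=True, inferSchema=True)

# Basic data-quality checks, standing in for a dedicated DQ tool.
clean = (
    raw.dropDuplicates()
       .filter(F.col("transaction_id").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Load: append into a partitioned table visible to Hive.
(
    clean.write
         .mode("append")
         .partitionBy("ingest_date")
         .saveAsTable("analytics.transactions")
)
```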
Big Data Engineer
Posted 1 day ago
Job Viewed
Job Description
Greetings from TCS!
TCS is hiring for Big Data
Location: - Chennai/Mumbai/Pune
Desired Experience Range: 6 to 12 years
Must-Have
• PySpark • Hive
Good-to-Have
• Spark • HBase • DQ tool • Agile Scrum experience • Exposure in data ingestion from disparate sources onto Big Data platform
Thanks
Anshika
Senior Big Data Engineer
Posted today
Job Description
Big Data Engineer - Scala
Posted today
Job Description
Job Title: Big Data Engineer – Scala
Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai
Experience: 7–10 years (minimum 3 years in Scala)
Notice Period: Immediate to 30 Days
Mode of Work: Hybrid
Role Overview
We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record of building, scaling, and optimizing big data pipelines, and hands-on experience with distributed data systems and cloud-based solutions.
Key Responsibilities
- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow (an illustrative sketch follows this list).
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
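The Kafka and Spark streaming responsibilities above lend themselves to a short example. The following is a minimal sketch only, not the employer's code, using Spark Structured Streaming in PySpark; the broker address, topic name, and event schema are assumptions, and the Kafka source requires the spark-sql-kafka-0-10 package on the classpath:

```python
# Hedged sketch: consume a Kafka topic with Spark Structured Streaming,
# aggregate per minute, and print the results. Broker, topic, and schema
# are hypothetical; requires the spark-sql-kafka-0-10 package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the topic as an unbounded streaming DataFrame.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka values arrive as bytes; decode and parse the JSON payload.
parsed = (
    events.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
          .select("e.*")
)

# Windowed aggregation with a watermark to bound late data.
per_minute = (
    parsed.withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "1 minute"))
          .agg(F.sum("amount").alias("total_amount"))
)

# Console sink keeps the sketch self-contained; a real pipeline would write
# to a table, topic, or object store instead.
query = (
    per_minute.writeStream
    .outputMode("update")
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```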
Mandatory Skills
- Scala – minimum 3 years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience with distributed data systems (5+ years).
- Proficiency in SQL and query optimization (see the sketch after this list).
- Good understanding of cloud architecture.
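On the SQL and query-optimization requirement above, here is a brief PySpark illustration of the kind of tuning work implied; the table names are hypothetical and the broadcast join is just one possible optimization:

```python
# Illustrative query-tuning sketch: filter early, broadcast the small
# dimension table, and inspect the physical plan. Table names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("query-tuning-demo").getOrCreate()

orders = spark.table("analytics.orders")        # large fact table
customers = spark.table("analytics.customers")  # small dimension table

# Apply the date filter before the join and hint a broadcast join so the
# large table is not shuffled across the cluster.
recent_orders = orders.filter(F.col("order_date") >= "2024-01-01")
joined = recent_orders.join(F.broadcast(customers), "customer_id")

# Review the plan (join strategy, pushed filters) before running at scale.
joined.explain(mode="formatted")
```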
Preferred Skills
- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience in designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.
Data Analytics/Big Data Engineer
Posted today
Job Description
Responsibilities
- Work in a challenging, fast-paced environment where your work has a meaningful impact.
- Identify business problems and use data analysis to find answers.
- Code and maintain the data platform and reporting analytics.
- Design, build, and maintain data pipelines.
Qualifications
- Desire to collaborate with a smart, supportive engineering team
- Strong passion for data and willingness to learn new skills
- Experience with NoSQL databases (e.g., DynamoDB, Couchbase, MongoDB)
- Experience with data ETL tools (dbt and AWS Glue)
- Expert knowledge of SQL and deep experience managing relational databases such as PostgreSQL
- Experience coding in Python, Scala, or R
- Strong understanding of how to build a data pipeline (a minimal sketch follows below)
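To make the last point concrete, here is a minimal, hedged sketch of a batch pipeline in Python (PostgreSQL to Parquet with pandas and SQLAlchemy); the connection string, query, and column names are assumptions, not details from the posting:

```python
# Generic extract-transform-load sketch. Connection string, query, and
# output path are hypothetical placeholders; writing Parquet needs pyarrow.
import pandas as pd
from sqlalchemy import create_engine

def run_pipeline() -> None:
    # Extract: pull raw rows from a PostgreSQL source.
    engine = create_engine("postgresql+psycopg2://user:password@localhost:5432/appdb")
    raw = pd.read_sql("SELECT order_id, customer_id, amount, created_at FROM orders", engine)

    # Transform: basic cleaning and a simple daily aggregate.
    raw = raw.dropna(subset=["order_id", "amount"])
    raw["created_date"] = pd.to_datetime(raw["created_at"]).dt.date
    daily = raw.groupby("created_date", as_index=False)["amount"].sum()

    # Load: write the aggregate to columnar storage for downstream reporting.
    daily.to_parquet("daily_revenue.parquet", index=False)

if __name__ == "__main__":
    run_pipeline()
```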