932 Senior Data Engineer jobs in Mumbai
Big Data Engineer
Posted today
Job Description
Spark Scala Developer
Location: Bengaluru, Mumbai
Employment Type: Full-time
What We're Looking For
We're hiring a Spark Scala Developer who has real-world experience working in Big Data environments, on-prem and/or in the cloud. You should know how to write production-grade Spark applications, fine-tune performance, and work fluently with Scala's functional style. Experience with cloud platforms and modern data tools like Snowflake or Databricks is a strong plus.
Your Responsibilities
- Design and develop scalable data pipelines using Apache Spark and Scala
- Optimize and troubleshoot Spark jobs for performance (e.g. memory management, shuffles, skew)
- Work with massive datasets in on-prem Hadoop clusters or cloud platforms like AWS/GCP/Azure
- Write clean, modular Scala code using functional programming principles
- Collaborate with data teams to integrate with platforms like Snowflake, Databricks, or data lakes
- Ensure code quality, documentation, and CI/CD practices are followed
Must-Have Skills
- 3+ years of experience with Apache Spark in Scala
- Deep understanding of Spark internals: DAG, stages, tasks, caching, joins, partitioning
- Hands-on experience with performance tuning in production Spark jobs
- Proficiency in Scala functional programming (e.g. immutability, higher-order functions, Option/Either)
- Proficiency in SQL
- Experience with any major cloud platform: AWS, Azure, or GCP
Nice-to-Have
- Worked with Databricks, Snowflake, or Delta Lake
- Exposure to data pipeline tools like Airflow, Kafka, Glue, or BigQuery
- Familiarity with CI/CD pipelines and Git-based workflows
- Comfortable with SQL optimization and schema design in distributed environments
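One of the tuning problems this role names (shuffle skew) is commonly mitigated by key salting. The following is a minimal, framework-free Python sketch of the idea, not production Spark code; the bucket count and key names are illustrative assumptions:

```python
import random
from collections import Counter

SALT_BUCKETS = 8  # illustrative number of salt values, an assumption

def salt_key(key: str, buckets: int = SALT_BUCKETS) -> str:
    """Append a random salt suffix so one hot key spreads across partitions."""
    return f"{key}#{random.randrange(buckets)}"

def partition_for(key: str, partitions: int = SALT_BUCKETS) -> int:
    """Hash-partition a key, roughly as a shuffle would."""
    return hash(key) % partitions

# A skewed dataset: one key dominates, as in a skewed join.
rows = [("hot_key", i) for i in range(1000)] + [("rare_key", 0)]

unsalted = Counter(partition_for(k) for k, _ in rows)
salted = Counter(partition_for(salt_key(k)) for k, _ in rows)

# Unsalted, every hot-key row lands in one partition; salted, they spread out.
print(sorted(salted.values()))
```

In real Spark the same trick means salting the skewed side of a join and replicating the small side per salt value; Spark 3's adaptive query execution can also split skewed partitions automatically.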
Big Data Engineer
Posted today
Job Description
Requirements
- 3 to 5 years of experience in Data Engineering.
- Hands-on experience with Azure data tools: ADF, Data Lake, Synapse, Databricks.
- Strong programming skills in SQL and Python.
- Good understanding of Big Data frameworks.
- Knowledge of data modeling, warehousing, and performance tuning.
- Familiarity with CI/CD, version control (Git), and Agile/Scrum methodologies.
Roles & Responsibilities
- Design, develop, and maintain ETL/ELT pipelines for large-scale data processing.
- Work with Azure Data Services including Azure Data Factory (ADF), Azure Synapse Analytics, Data Lake, and Databricks.
- Process and manage large datasets using Big Data tools and frameworks.
- Implement data integration, transformation, and ingestion workflows from various sources.
- Ensure data quality, performance optimization, and pipeline reliability.
- Collaborate with analysts, data scientists, and other engineers to deliver end-to-end data solutions.
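The ingestion-plus-quality-gate responsibility above can be sketched in a vendor-neutral way. This is a toy Python illustration of an extract/transform/load flow with a basic data-quality check; the field names are hypothetical, and a real pipeline would run on ADF or Databricks rather than in-process:

```python
from typing import Iterable

def extract(rows: Iterable[dict]) -> Iterable[dict]:
    """Simulate ingestion from a source system."""
    yield from rows

def transform(rows: Iterable[dict]) -> Iterable[dict]:
    """Normalize records and drop those failing a quality check."""
    for row in rows:
        if row.get("amount") is None:  # basic data-quality gate
            continue
        yield {**row, "amount": round(float(row["amount"]), 2)}

def load(rows: Iterable[dict]) -> list:
    """Stand-in for a write to a warehouse table."""
    return list(rows)

source = [{"id": 1, "amount": "19.999"}, {"id": 2, "amount": None}]
result = load(transform(extract(source)))
# The invalid record is filtered out; the valid one is normalized.
```

Keeping each stage a small, composable function is what makes such pipelines testable and reliable regardless of the orchestrator.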
Experience
- 3-4.5 Years
Skills
- Primary Skill: Data Engineering
- Sub Skill(s): Data Engineering
- Additional Skill(s): Data Warehouse, Big Data, Azure Datalake
About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP).
Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
Big Data Engineer
Posted today
Job Description
- Location: Mumbai
- Preferred Experience Range: 7 to 10 years
- Must-Have Technical Skills: Spark, Python, SQL, Kafka, Airflow, AWS cloud data management services
- Must-Have Other Skills: Experience on a minimum of 3 data management projects; hands-on experience in the big data space; experience ingesting data from various source systems; working experience with storage file formats like ORC, Iceberg, and Parquet, and with storage systems such as object stores, HDFS, NoSQL, and RDBMS
- Nice to have (not mandatory): Databricks, Snowflake exposure
Big Data Engineer
Posted 1 day ago
Job Description
- Design, develop, and implement robust Big Data solutions using technologies such as Hadoop, Spark, and NoSQL databases.
- Build and maintain scalable data pipelines for effective data ingestion, transformation, and analysis.
- Collaborate with data scientists, analysts, and cross-functional teams to understand business requirements and translate them into technical solutions.
- Ensure data quality and integrity through effective validation, monitoring, and troubleshooting techniques.
- Optimize data processing workflows for maximum performance and efficiency.
- Stay up-to-date with evolving Big Data technologies and methodologies to enhance existing systems.
- Implement best practices for data governance, security, and compliance.
- Document technical designs, processes, and procedures to support knowledge sharing across teams.
Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 4+ years of experience as a Big Data Engineer or in a similar role.
- Strong proficiency in Big Data technologies (Hadoop, Spark, Hive, Pig) and frameworks.
- Extensive experience with programming languages such as Python, Scala, or Java.
- Knowledge of data modeling and data warehousing concepts.
- Familiarity with NoSQL databases like Cassandra or MongoDB.
- Proficient in SQL for data querying and analysis.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work independently and effectively in a fast-paced environment.
Benefits
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge technologies and solve complex challenges.
- Dynamic and collaborative work environment with opportunities for growth and career advancement.
- Regular training and professional development opportunities.
GCP Big Data Engineer
Posted 4 days ago
Job Description
We are seeking an experienced GCP Big Data Engineer with 8–10 years of expertise in designing, developing, and optimizing large-scale data processing solutions. The ideal candidate will bring strong leadership capabilities, technical depth, and a proven track record of delivering end-to-end big data solutions in cloud environments.
Key Responsibilities:
- Lead and mentor teams in designing scalable and efficient ETL pipelines on Google Cloud Platform (GCP).
- Drive best practices for data modeling, data integration, and data quality management.
- Collaborate with stakeholders to define data engineering strategies aligned with business goals.
- Ensure high performance, scalability, and reliability in data systems using SQL and PySpark.
Must-Have Skills:
- GCP expertise in data engineering services (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
- Strong programming skills in SQL and PySpark.
- Hands-on experience in ETL pipeline design, development, and optimization.
- Strong problem-solving and leadership skills with experience guiding data engineering teams.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in GCP Data Engineering preferred.
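A recurring pattern in the kind of ETL optimization work this role describes is deduplicating to the latest record per key, the job a `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ... DESC)` window does in BigQuery SQL or PySpark. Here is a plain-Python sketch of the same logic; the table and column names are hypothetical:

```python
from itertools import groupby
from operator import itemgetter

events = [
    {"user_id": "u1", "updated_at": "2024-01-01", "plan": "free"},
    {"user_id": "u1", "updated_at": "2024-03-01", "plan": "pro"},
    {"user_id": "u2", "updated_at": "2024-02-15", "plan": "free"},
]

def latest_per_key(rows, key="user_id", order="updated_at"):
    """Keep one row per key: the one with the greatest `order` value,
    emulating ROW_NUMBER() OVER (PARTITION BY key ORDER BY order DESC) = 1."""
    rows = sorted(rows, key=itemgetter(key, order))
    return [max(group, key=itemgetter(order))
            for _, group in groupby(rows, key=itemgetter(key))]

current = latest_per_key(events)
# current holds u1's latest ("pro") row and u2's only row.
```

At warehouse scale the same dedup runs as a window function over partitioned, clustered tables so the engine can prune data instead of scanning it all.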