29,372 Data Engineer jobs in India
Senior Big Data Engineers
Posted today
Job Description
Job Title: Senior Data Engineer
Location: Pan India
Experience: 7+ Years
Joining: Immediate/Short Notice Preferred
Job Summary:
We are looking for an experienced Senior Data Engineer to design, develop, and optimize scalable data solutions across Enterprise Data Lake (EDL) and hybrid cloud platforms. The role involves data architecture, pipeline orchestration, metadata governance, and building reusable data products aligned with business goals.
Key Responsibilities:
- Design & implement scalable data pipelines (Spark, Hive, Kafka, Bronze-Silver-Gold architecture); see the sketch after this list.
- Work on data architecture, modelling, and orchestration for large-scale systems.
- Implement metadata governance, lineage, and business glossary using Apache Atlas.
- Support DataOps/MLOps best practices and mentor teams.
- Integrate data across structured & unstructured sources (ODS, CRM, NoSQL).
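To make the Bronze-Silver-Gold (medallion) pattern referenced above concrete, here is a minimal PySpark sketch of the idea. The Kafka topic, columns, and storage paths are placeholder assumptions, not details of this role, and a production pipeline would typically use streaming and Delta tables rather than batch Parquet writes.

```python
# Minimal Bronze-Silver-Gold sketch (batch mode for brevity).
# Broker, topic, columns, and paths are illustrative placeholders.
# Reading from Kafka requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw Kafka payloads as-is.
bronze = (spark.read.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "orders")
          .load()
          .selectExpr("CAST(value AS STRING) AS raw", "timestamp"))
bronze.write.mode("append").parquet("/data/bronze/orders")

# Silver: parse, de-duplicate, and apply basic quality rules.
silver = (spark.read.parquet("/data/bronze/orders")
          .select(F.get_json_object("raw", "$.order_id").alias("order_id"),
                  F.get_json_object("raw", "$.amount").cast("double").alias("amount"),
                  "timestamp")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount").isNotNull()))
silver.write.mode("overwrite").parquet("/data/silver/orders")

# Gold: business-level aggregate ready for reporting.
gold = (silver.groupBy(F.to_date("timestamp").alias("order_date"))
        .agg(F.sum("amount").alias("daily_revenue")))
gold.write.mode("overwrite").parquet("/data/gold/daily_revenue")
```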
Required Skills:
- Strong hands-on experience with Apache Hive, HBase, Kafka, Spark, Elasticsearch.
- Expertise in data architecture, modelling, orchestration, and DataOps.
- Familiarity with Data Mesh, Data Product development, and hybrid cloud (AWS/Azure/GCP).
- Knowledge of metadata governance, ETL/ELT, NoSQL data models.
- Strong problem-solving and communication skills.
Big Data Engineers/Senior Engineers
Posted today
Job Description
Role: Big Data Engineers/Senior Engineers
About the Role:
We are seeking highly skilled Big Data Engineers/Senior Engineers with a strong background in
building and supporting big data solutions. The ideal candidate will have expertise in big data
technologies, data pipeline development, and performance optimization within a collaborative
and agile environment.
Key Responsibilities:
Core skills: Hive, Impala, SQL, PySpark, Hadoop, Cloudera Data Platform
Job Description:
5+ years of relevant experience in big data technologies such as Hive, Impala, and Oozie.
5+ years of relevant experience in Spark with Scala.
Good knowledge of advanced SQL.
Experience working with shell scripting.
Good to have:
Experience with performance tuning of Spark jobs and Hive/Impala queries (see the sketch below).
Basic understanding of visualization tools such as Tableau.
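As a hedged illustration of the Spark/Hive tuning mentioned under "Good to have", the PySpark snippet below broadcasts a small dimension table to avoid shuffling the large fact table and partitions the output so Hive/Impala queries can prune partitions. Database, table, and column names are invented for the example.

```python
# Illustrative tuning sketch: broadcast join plus partition-aware output.
# Database, table, and column names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # size shuffles to the cluster
         .enableHiveSupport()
         .getOrCreate())

facts = spark.table("sales_db.transactions")   # large Hive fact table
dims = spark.table("sales_db.store_dim")       # small dimension table

# Broadcasting the small side avoids a shuffle of the large fact table.
joined = facts.join(F.broadcast(dims), "store_id")

# Partitioning by date lets Hive/Impala prune partitions at query time.
(joined
 .repartition("txn_date")
 .write
 .mode("overwrite")
 .partitionBy("txn_date")
 .saveAsTable("sales_db.transactions_enriched"))
```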
Data Engineers
Posted today
Job Description
SQL/NoSQL, cloud data solutions
Requirements
• Frontend Framework: Angular
• Servers: Tomcat, Jetty, JBoss, Nginx, Apache HTTP Server
• Tools: Maven, Log4j 2, JUnit 5, Mockito, Postman, Swagger, JMeter, Logback
• OS: Windows, Linux
• Version Control: Git, GitHub
• IDE: Eclipse, STS, IntelliJ IDEA
• Messaging Systems: Apache Kafka
• Cloud: AWS, Azure
• DevOps Tools: Docker, Kubernetes, GitLab
Data Engineers
Posted today
Job Description
- Hiring: Data Engineers (30 Openings | Immediate Joiners Only)
- Locations: Bangalore, Pune, Gurgaon, Hyderabad
- Experience: 6-12 years (please do not apply with less than 6 years of experience)
- Cloud Expertise: Azure, GCP, AWS
- Mode: Full-time (Onsite)
- Joining: Immediate
- Key Requirements:
- Strong experience in data engineering across large-scale systems
- Hands-on expertise in Azure, GCP, and AWS cloud platforms
- For Azure & AWS roles, Scala is mandatory
- Strong understanding of data pipelines, ETL, and big data technologies
- Interview Process: Online Assessment, Technical Interview (Face-to-Face), Non-Technical Discussion (Virtual)
Data Engineers
Posted today
Job Description
Title: Data Engineer
Location: Hyderabad
The Data Engineer - Python/Databricks role is essential for building and maintaining a robust data foundation across our platforms. This position focuses on developing and optimising scalable data pipelines that ensure seamless integration between Databricks and relational metadata stores such as PostgreSQL. Key responsibilities include designing efficient ETL workflows in Python, improving performance and cost efficiency of Databricks jobs, and implementing best practices for schema evolution, data quality, and governance. The engineer will also support migration activities, validation processes, and performance enhancements that enable data-driven features to scale effectively. By strengthening the data layer, this role ensures reliable, high-performing, and compliant data operations, positioning our applications to meet evolving business and regulatory requirements.
Skills: engineers, data operations, ETL, Python, data quality, data
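As a rough sketch of the Databricks-to-PostgreSQL integration described above (assuming a Delta source table, a JDBC-reachable PostgreSQL instance, and made-up table and column names), a job might push a small metadata summary from the lakehouse into the relational store:

```python
# Sketch only: sync a summary from a Delta table into PostgreSQL over JDBC.
# Connection details, table names, and columns are hypothetical; credentials
# should come from a secret scope rather than being hard-coded.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("delta-to-postgres-sketch").getOrCreate()

# Read a curated Delta table (Delta is the default table format on Databricks).
events = spark.read.format("delta").load("/mnt/gold/events")

# Build a small metadata/quality summary to register in PostgreSQL.
summary = (events
           .groupBy("dataset_name")
           .agg(F.count(F.lit(1)).alias("row_count"),
                F.max("ingested_at").alias("last_ingested_at")))

# Requires the PostgreSQL JDBC driver to be available on the cluster.
(summary.write
 .format("jdbc")
 .option("url", "jdbc:postgresql://db-host:5432/metadata")
 .option("dbtable", "public.dataset_summary")
 .option("user", "etl_user")
 .option("password", "***")
 .mode("append")
 .save())
```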
Data Engineers
Posted today
Job Description
Mandatory Skills: Primary - Alteryx, Databricks, and Azure Data Engineering; Secondary - Python
Shift Timings: UK Shift
Work Location: Bangalore (working from the office is mandatory)
Senior Manager Data Engineering - 13 to 15 years
Associate Architect Data Engineering - 5 to 7.5 years
Analyst Data Engineering - 2.5 to 4 years
Key Responsibilities:
Design, develop, and optimize scalable data pipelines and workflows using Azure Data Factory, Synapse Pipelines, and Microsoft Fabric.
Responsible for transforming and migrating existing Alteryx workflows into scalable Azure Databricks pipelines, ensuring optimized performance and maintainability (a minimal sketch follows this list).
Collaborate with data engineering and analytics teams to redesign Alteryx pipelines into Spark-based solutions on Azure Databricks, leveraging Delta Lake and Azure services for automation and efficiency.
Build and maintain ETL/ELT processes for ingesting structured and unstructured data from various sources.
Develop and manage data transformation logic using Databricks (PySpark/Spark SQL) and Python.
Collaborate with data analysts, architects, and business stakeholders to understand requirements and deliver high-quality data solutions.
Ensure data quality, integrity, and governance across the data lifecycle.
Implement monitoring and alerting for data pipelines to ensure reliability and performance.
Work with Azure Synapse Analytics to build data models and enable analytics and reporting.
Utilize SQL for querying and managing large datasets efficiently.
Participate in data architecture discussions and contribute to technical design decisions.
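As a hedged illustration of the Alteryx-to-Databricks migration called out above, the sketch below re-implements a typical Alteryx Input -> Filter -> Join -> Summarize -> Output flow as PySpark writing to a Delta table. File paths, column names, and the target table are assumptions for the example, not details of this role.

```python
# Illustrative re-implementation of an Alteryx-style workflow as PySpark on Databricks.
# Paths, columns, and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("alteryx-migration-sketch").getOrCreate()

# "Input Data" tools -> Spark readers.
orders = spark.read.option("header", True).csv("/mnt/raw/orders.csv")
customers = spark.read.option("header", True).csv("/mnt/raw/customers.csv")

# "Filter" / "Select" tools -> DataFrame transformations.
valid_orders = (orders
                .withColumn("amount", F.col("amount").cast("double"))
                .filter(F.col("amount") > 0))

# "Join" tool -> DataFrame join; "Summarize" tool -> groupBy/agg.
report = (valid_orders
          .join(customers, "customer_id", "left")
          .groupBy("region")
          .agg(F.sum("amount").alias("total_amount"),
               F.countDistinct("customer_id").alias("distinct_customers")))

# "Output Data" tool -> managed Delta table for downstream analytics.
report.write.format("delta").mode("overwrite").saveAsTable("analytics.regional_sales")
```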
Required Skills and Qualifications:
3+ years of experience in data engineering or a related field.
Strong experience in Alteryx workflows and data preparation/ETL processes.
Strong proficiency in the Microsoft Azure data ecosystem including:
- Azure Data Factory (ADF)
- Azure Synapse Analytics
- Microsoft Fabric
- Azure Databricks
Solid experience with Python and Apache Spark (including PySpark).
Advanced skills in SQL for data manipulation and transformation.
Experience in designing and implementing data lakes and data warehouses.
Familiarity with data governance, security, and compliance standards.
Strong analytical and problem-solving skills.
Excellent communication and collaboration abilities.
Preferred Qualifications:
Microsoft Azure certifications (e.g., Azure Data Engineer Associate).
Experience with DevOps tools and CI/CD practices in data workflows.
Knowledge of REST APIs and integration techniques.
Background in agile methodologies and working in cross-functional teams.
Lead Data Engineers
Posted today
Job Description
1) Lead Data Engineer – MS Fabric
Experience: 8+ years
Location: Trivandrum / Kochi
Budget: Up to 28 LPA
Notice Period: Immediate – 30 Days
Mandatory Skills: MS Fabric (6 months–2 years hands-on), Azure Stack (ADF, Data Lake, Lakehouse, Power BI)
2) Lead Data Engineer – SSIS & SQL
Experience: 6+ years
Location: Kochi / Trivandrum
Budget: 18–23 LPA
Notice Period: Immediate – 30 Days
Mandatory Skills: MS SQL, SSIS, ADF, Data Lake, Strong SQL & Database expertise, Excellent Communication
Job Purpose
We are looking for a highly motivated and experienced Lead Data Engineer with deep expertise in SSIS, SQL, and a strong ability to drive and lead data projects. The ideal candidate should have moderate experience with Azure Data Factory (ADF) and a passion for designing scalable, high-performance data solutions that support critical enterprise operations and analytics.
You will play a lead role in shaping our data infrastructure and delivering high-quality solutions in collaboration with cross-functional teams.
Key Responsibilities
- Lead the design, development, and deployment of robust data pipelines and ETL processes using SSIS, SQL Server, and Azure Data Factory (ADF).
- Drive end-to-end project execution — including planning, technical architecture, implementation, testing, and delivery.
- Develop and maintain data lakes, data marts, and data warehouse solutions, supporting both structured and semi-structured data.
- Ensure performance tuning and optimization of SQL queries, stored procedures, and ETL jobs.
- Collaborate with business stakeholders and technical teams to gather data requirements and translate them into scalable solutions.
- Mentor junior engineers and conduct code reviews to ensure adherence to best practices.
- Contribute to architectural decisions involving Lakehouse, cloud adoption, and data integration strategies.
- Implement and maintain data quality, governance, and security standards.
- Maintain accurate project documentation and communicate effectively with technical and non-technical audiences.
Skills and Competencies
Mandatory Technical Skills:
- Advanced proficiency in Microsoft SQL Server and T-SQL (stored procedures, views, indexing, triggers)
- Expert-level experience with SSIS for ETL development
- Hands-on experience in data warehouse and data mart modeling (Star/Snowflake schemas)
- Strong understanding of DDL/DML operations and relational database design
- Solid background in query performance tuning and optimization
Moderate Experience Required:
- Azure Data Factory (ADF) for data pipeline orchestration and cloud-based ETL
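For orientation on the ADF orchestration piece, here is a minimal Python sketch that triggers a pipeline run and polls its status using the azure-identity and azure-mgmt-datafactory packages; the subscription, resource group, factory, pipeline, and parameter names are placeholders, not details of this role.

```python
# Minimal sketch: trigger an Azure Data Factory pipeline run and poll its status.
# Subscription, resource group, factory, pipeline, and parameters are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

subscription_id = "<subscription-id>"
resource_group = "rg-data-platform"
factory_name = "adf-enterprise-etl"
pipeline_name = "pl_load_sales_mart"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Start the pipeline, passing any parameters it defines.
run = client.pipelines.create_run(resource_group, factory_name, pipeline_name,
                                  parameters={"load_date": "2024-01-01"})

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(resource_group, factory_name, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline {pipeline_name} finished with status: {status}")
```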
Preferred / Nice to Have:
- Exposure to Lakehouse architecture
- Experience with Cloud platforms (Azure, AWS, or Snowflake)
- Familiarity with Data Lakes and handling large volumes of data
- Working knowledge of REST APIs, JSON, and integration patterns
Adhere to ISMS policies and procedures.
Azure Data Engineers
Posted today
Job Description
Sr. Azure Data Engineer (8-12 years) | Hyderabad/Bangalore/Pan India | UK Shift
Azure Databricks Engineer (6-12 years)
Skills: Azure, ADF, Databricks, Python, PySpark, Kafka, SQL, AKS, Delta Lake