27,288 Azure Data jobs in India
Azure Data
Posted 4 days ago
Job Description
Skill: Azure Data Factory
Primary: ADF, SQL
Secondary: SSIS
Experience: 3-8 YRS
Location: Bangalore, Chennai, Hyderabad, Pune
Azure Data Engineer

Posted 3 days ago
Job Description
· Experience using ETL tools, database management, scripting (primarily Python), API consumption, source-to-target mapping, and advanced SQL queries.
· Designs, builds, and maintains scalable data pipelines and architectures on the Microsoft Azure cloud platform.
· Cloud experience is preferred.
· In addition to technical skills, the candidate should possess excellent communication skills and the ability to work autonomously with minimal direction.
· Develops and optimizes complex ETL processes, monitors system performance, and troubleshoots data-related issues in production environments.
Cognizant is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
Azure Data Engineer
Posted today
Job Description
Key Responsibilities
- Pipeline Development – Design, build, and deploy robust ETL/ELT pipelines in Databricks (PySpark, SQL, Delta Lake) to ingest, transform, and curate governance and operational metadata from multiple sources landed in Databricks.
- Granular Data Quality Capture – Implement profiling logic to capture issue-level metadata (source table, column, timestamp, severity, rule type) to support drill-down from dashboards into specific records and enable targeted remediation.
- Governance Metrics Automation – Develop data pipelines to generate metrics for dashboards covering data quality, lineage, job monitoring, access & permissions, query cost, usage & consumption, retention & lifecycle, policy enforcement, sensitive data mapping, and governance KPIs.
- Microsoft Purview Integration – Automate asset onboarding, metadata enrichment, classification tagging, and lineage extraction for integration into governance reporting.
- Data Retention & Policy Enforcement – Implement logic for retention tracking and policy compliance monitoring (masking, RLS, exceptions).
- Job & Query Monitoring – Build pipelines to track job performance, SLA adherence, and query costs for cost and performance optimization.
- Metadata Storage & Optimization – Maintain curated Delta tables for governance metrics, structured for efficient dashboard consumption.
- Testing & Troubleshooting – Monitor pipeline execution, optimize performance, and resolve issues quickly.
- Collaboration – Work closely with the lead engineer, QA, and reporting teams to validate metrics and resolve data quality issues.
- Security & Compliance – Ensure all pipelines meet organizational governance, privacy, and security standards.
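The issue-level data quality capture described above can be sketched in plain Python (in the role itself this would run in Databricks with PySpark against Delta tables; the rule set, column names, and severity labels here are illustrative assumptions, not part of the posting):

```python
from datetime import datetime, timezone

# Hypothetical rule set: each rule names a column, a check, and a severity.
RULES = [
    {"column": "email", "rule_type": "not_null", "severity": "high",
     "check": lambda v: v is not None},
    {"column": "age", "rule_type": "range_0_120", "severity": "medium",
     "check": lambda v: v is None or 0 <= v <= 120},
]

def profile(source_table, rows, rules=RULES):
    """Return one issue-level metadata record per failed check,
    carrying source table, column, rule type, severity, and timestamp
    so dashboards can drill down to specific records."""
    issues = []
    for i, row in enumerate(rows):
        for rule in rules:
            if not rule["check"](row.get(rule["column"])):
                issues.append({
                    "source_table": source_table,
                    "row_index": i,
                    "column": rule["column"],
                    "rule_type": rule["rule_type"],
                    "severity": rule["severity"],
                    "timestamp": datetime.now(timezone.utc).isoformat(),
                })
    return issues

issues = profile("crm.customers", [
    {"email": "a@example.com", "age": 34},
    {"email": None, "age": 150},  # fails both rules
])
```

The curated issue records would then be appended to a governance Delta table for dashboard consumption and targeted remediation.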
Required Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or related field
- 4+ years of hands-on data engineering experience, with Azure Databricks and Azure Data Lake
- Proficiency in PySpark, SQL, and ETL/ELT pipeline design
- Demonstrated experience building granular data quality checks and integrating governance logic into pipelines
- Working knowledge of Microsoft Purview for metadata management, lineage capture, and classification
- Experience with Azure Data Factory or equivalent orchestration tools
- Understanding of data modeling, metadata structures, and data cataloging concepts
- Strong debugging, performance tuning, and problem-solving skills
- Ability to document pipeline logic and collaborate with cross-functional teams
Azure Data Engineer
Posted 3 days ago
Job Description
Strong proficiency in PySpark and SQL.
Understanding of OLAP and OLTP architectures.
Understanding of the Medallion architecture.
Strong proficiency in Databricks notebooks, Databricks Jobs, DLT streaming tables, DLT materialized views, and DLT pipelines.
Proficiency in Databricks Unity Catalog.
Strong proficiency in Azure Data Lake.
Sound knowledge of CI/CD pipelines and Databricks Asset Bundles.
Knowledge of Python.
Looking for immediate joiners.
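The Medallion architecture mentioned above organizes data into bronze (raw), silver (cleaned and conformed), and gold (business-level) layers. A minimal plain-Python sketch of the idea, assuming an invented sales dataset (in Databricks each layer would be a Delta table, DLT streaming table, or materialized view rather than an in-memory list):

```python
# Bronze: raw records as ingested (strings, duplicates preserved).
bronze = [
    {"id": "1", "amount": "10.5", "region": "south"},
    {"id": "1", "amount": "10.5", "region": "south"},  # duplicate row
    {"id": "2", "amount": "7.0", "region": "north"},
]

# Silver: cleaned and conformed (typed values, deduplicated on id).
seen, silver = set(), []
for rec in bronze:
    if rec["id"] not in seen:
        seen.add(rec["id"])
        silver.append({"id": int(rec["id"]),
                       "amount": float(rec["amount"]),
                       "region": rec["region"]})

# Gold: business-level aggregate (revenue per region).
gold = {}
for rec in silver:
    gold[rec["region"]] = gold.get(rec["region"], 0.0) + rec["amount"]
```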
Azure Data Engineer
Posted 3 days ago
Job Description
Skill: ADB, SQL, Python
Experience: 3-8 yrs
Locations: All LTIM locations
Notice period: Candidates who can join in August (immediate joiners to 30 days' notice).
Azure Data Engineer
Posted 3 days ago
Job Description
- Excellent understanding of data architecture systems (source, target, transformations, processing, etc.) and migration between DB platforms
- 6+ years of hands-on experience with Azure data analytics and data warehousing in Azure
- Must have hands-on experience with Azure services such as Azure Data Explorer, Azure Databricks, Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric
- Must have strong hands-on experience with Python
- Strong hands-on experience in creating data pipeline monitors in the Azure environment
- Perform pre- and post-migration data validation checks to ensure completeness of migrated data
- Experience building and optimizing data pipelines, architectures and data sets
- Experience with the deployment of datasets within the customer's cloud or on-premise environments
- Strong analytic skills related to working with unstructured datasets
- Experience supporting and working with cross-functional teams in a dynamic environment
- Assist with validating prototyped deployment options across various environments - Dev, QA, Test
- Assist in security hardening and implement role-based security as needed for Customer requirements
- Write and maintain documentation (e.g. run books, test plans, test results, etc.) for applications
- Configure and tune the platform in each environment based on best practices
- Provide general guidance, best practices, troubleshooting assistance as related to the Data Platform
- Strong analytical, debugging and problem-solving skills
- Quick learner, self-motivated and has the ability to work independently
- Strong verbal, written communication skills and a collaborative problem solving style
- Systems integration, including design and development of APIs, Adapters, and Connectors
- Healthcare domain experience is preferred
- Tools & Technology Experience preferred:
- Object-oriented/object function scripting languages: Python
- Data migration from on-premise systems: RDBMS to cloud data warehouse
- Relational SQL and NoSQL databases, including Snowflake and PostgreSQL
- Data pipelines using the Azure stack
- Azure cloud services: Data Factory, Databricks, SQL Data Warehouse
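The pre- and post-migration validation checks listed above typically compare row counts and content checksums between source and target. A minimal plain-Python sketch under that assumption (function names and key columns are illustrative; a production check would run against the actual RDBMS and cloud warehouse):

```python
import hashlib

def table_fingerprint(rows, key_columns):
    """Row count plus an order-independent checksum over key columns."""
    digest = 0
    for row in rows:
        key = "|".join(str(row[c]) for c in key_columns)
        # XOR of per-row hashes is insensitive to row order.
        # (Note: identical duplicate rows cancel under XOR; a fuller
        # check would also compare per-key hashes or counts.)
        digest ^= int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return {"row_count": len(rows), "checksum": digest}

def validate_migration(source_rows, target_rows, key_columns):
    """True when source and target agree on count and content."""
    return (table_fingerprint(source_rows, key_columns)
            == table_fingerprint(target_rows, key_columns))

src = [{"id": 1, "val": "a"}, {"id": 2, "val": "b"}]
tgt = [{"id": 2, "val": "b"}, {"id": 1, "val": "a"}]  # same data, reordered
```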