771 Data Engineer jobs in Noida
Data Engineer

Posted 3 days ago
Job Description
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
**The Role**
As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.
You will be a technical professional who designs, builds, and manages the infrastructure and systems that enable organizations to collect, process, store, and analyze large volumes of data. You will be an architect and builder of data pipelines, ensuring that data is accessible, reliable, and optimized for various uses, including analytics, machine learning, and business intelligence.
In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset: a true data alchemist.
Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made - and your lifecycle management expertise will ensure our data remains fresh and impactful.
**Key Responsibilities:**
+ **Designing and Building Data Pipelines:** Creating robust, scalable, and efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines to move data from various sources into data warehouses, data lakes, or other storage systems, ingesting both structured and unstructured data (a sketch of the pattern follows this list).
+ **Data Storage and Management:** Selecting and managing appropriate data storage solutions (e.g., relational databases, S3, ADLS, and data warehouse platforms such as SQL Server and Databricks).
+ **Data Architecture:** Understand target data models, schemas, and database structures that support business requirements and data analysis needs.
+ **Data Integration:** Connecting disparate data sources, ensuring data consistency and quality across different systems.
+ **Performance Optimization:** Optimizing data processing systems for speed, efficiency, and scalability, often dealing with large source systems datasets.
+ **Data Governance and Security:** Implementing measures for data quality, security, privacy, and compliance with regulations.
+ **Collaboration:** Working closely with Data Scientists, Data Analysts, Business Intelligence Developers, and other stakeholders to understand their data needs and provide them with clean, reliable data.
+ **Automation:** Automating data processes and workflows to reduce manual effort and improve reliability.
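To make the pipeline-building responsibility above concrete, here is a deliberately minimal sketch of the extract-transform-load pattern in plain Python. The file name, column names, and SQLite target are hypothetical stand-ins; in practice this role would build such flows with the SSIS and Azure Data Factory tooling listed below.

```python
# Minimal, illustrative ETL sketch. "raw_sales.csv", the column names, and
# the SQLite target are hypothetical placeholders for real sources/warehouses.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: read raw rows from a source file."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cleanse and normalize raw records."""
    cleaned = []
    for r in rows:
        name = (r.get("customer_name") or "").strip().title()
        amount = float(r.get("amount") or 0.0)  # normalize missing values
        if name:                                # drop rows failing a basic quality check
            cleaned.append((name, amount))
    return cleaned

def load(records: list[tuple], db_path: str = "warehouse.db") -> None:
    """Load: write the refined dataset to the target store."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", records)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("raw_sales.csv")))
```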
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
**Who You Are**
You're good at what you do and possess the required experience to prove it. However, equally important - you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.
**Required Technical and Professional Expertise**
+ 4-6 years of experience as a Data Engineer.
+ ETL/ELT Tools: Experience with data integration tools and platforms like SSIS, Azure Data Factory
+ SSIS Package Development
+ Control Flow: Designing and managing the workflow of ETL processes, including tasks, containers, and precedence constraints.
+ Data Flow: Building pipelines for extracting data from sources, transforming it using various built-in components
+ SQL Server Management Studio (SSMS): For database administration, querying, and managing SSIS packages.
+ SQL Server Data Tools (SSDT) / Visual Studio: The primary IDE for developing SSIS packages.
+ Scripting (C# or VB.NET): For advanced transformations, custom components, or complex logic that cannot be achieved with built-in SSIS components.
+ Programming Languages: Experience with the basics of Python, Java, or Scala is an advantage
+ Cloud Platforms: Proficiency with cloud data services, particularly Microsoft Azure (Azure Data Lake, Azure Data Factory)
+ Data Warehousing: Understanding of data warehousing concepts, dimensional modelling, and schema design.
+ Version Control: Familiarity with Git and collaborative development workflows.
**Preferred Technical and Professional Experience**
+ Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
**Being You**
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
**What You Can Expect**
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.
**Get Referred!**
If you know someone who works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Kyndryl is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Kyndryl is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Data Engineer
Posted 4 days ago
Job Description
Position: Senior Data Engineer
Exp: 12-15 Years
Location: Remote
We are seeking a Senior Data Engineer. The ideal candidate will be based in India and work remotely. This role requires a blend of design and implementation expertise.
Key Responsibilities:
Design and bootstrap data solutions, including setting up Snowflake for a large, complex conglomerate.
Utilize a combination of Pentaho and dbt for ETL processes.
Design and implement Role-Based Access Control (RBAC) within the data platform, aligned with the organizational structure (a sketch follows this list).
Collaborate with stakeholders to define and execute the overall data strategy.
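As a hedged illustration of the RBAC item above, the sketch below grants database-level access per role through the snowflake-connector-python package. The role names, databases, and connection parameters are hypothetical placeholders, not the actual organizational structure.

```python
# Illustrative Snowflake RBAC bootstrap; roles, databases, and credentials
# below are hypothetical placeholders.
import snowflake.connector

ROLE_GRANTS = {
    "FINANCE_ANALYST": ["FINANCE_DB"],
    "OPS_ANALYST": ["OPS_DB"],
}

conn = snowflake.connector.connect(
    account="my_account",   # placeholder connection details
    user="admin_user",
    password="***",
    role="SECURITYADMIN",   # role administration is typically done as SECURITYADMIN
)
cur = conn.cursor()
for role, databases in ROLE_GRANTS.items():
    cur.execute(f"CREATE ROLE IF NOT EXISTS {role}")
    for db in databases:
        # database-level USAGE only; schema/table grants would follow the same pattern
        cur.execute(f"GRANT USAGE ON DATABASE {db} TO ROLE {role}")
cur.close()
conn.close()
```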
Required Experience & Skills:
12-14 years of experience in a data engineering role.
Proven expertise in Snowflake is essential.
Proven expertise in Pentaho and dbt for building data pipelines is essential.
Experience in a senior capacity, leading projects from design to implementation.
Knowledge of Infor M3 (an ERP system) would be a plus.
Data Engineer
Posted 4 days ago
Job Viewed
Job Description
Job Title:
Data Engineer (AWS QuickSight, Glue, PySpark)
Location:
Noida
Job Summary:
We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.
Key Responsibilities:
- Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources (a Glue job skeleton follows this list)
- Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval
- Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting
- Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights
- Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions
- Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance
- Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools
- Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed
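As a rough sketch of the Glue/PySpark bullet above, a Glue job typically follows the skeleton below; the catalog database, table, column mappings, and S3 path are hypothetical, and the script runs inside the AWS Glue job environment.

```python
# Skeleton of an AWS Glue PySpark job (runs inside the Glue environment).
# Catalog names, mappings, and the S3 path are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.transforms import ApplyMapping
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table registered in the Glue Data Catalog
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Transform: project and rename columns
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amt", "double", "amount", "double"),
    ],
)

# Load: write Parquet to the lake, queryable by Athena and QuickSight
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://my-lake/curated/orders/"},
    format="parquet",
)
job.commit()
```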
Required Skills & Qualifications:
- Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
- Strong experience with PySpark for large-scale data processing and transformation
- Expertise in SQL and data modeling for relational and non-relational databases
- Experience building and optimizing ETL pipelines and data integration workflows
- Familiarity with business intelligence and visualization tools, especially Amazon QuickSight
- Knowledge of data governance, security, and compliance best practices
- Strong programming skills in Python; experience with automation and scripting
- Ability to work collaboratively in agile environments and manage multiple priorities effectively
- Excellent problem-solving and communication skills.
Preferred Qualifications:
- AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer)
Good to have skills: an understanding of machine learning, deep learning, and Generative AI concepts, including regression, classification, predictive modeling, and clustering
Data Engineer
Posted 4 days ago
Job Description
Job Title: Data Engineer
Location: Noida
Experience: 3+ years
Job Description: We are seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with a focus on PySpark, Python, and SQL. Experience with Azure Databricks is a plus.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and systems.
- Work closely with data scientists and analysts to ensure data quality and availability.
- Implement data integration and transformation processes using PySpark and Python (a small example follows this list).
- Optimize and maintain SQL databases and queries.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Monitor and troubleshoot data pipeline issues to ensure data integrity and performance.
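For flavor, here is a small, self-contained example of the kind of PySpark transformation this list describes; the input path and column names are hypothetical.

```python
# Illustrative PySpark cleanse-and-aggregate step; the path and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/events.csv")

clean = (
    raw.dropDuplicates(["event_id"])                    # de-duplicate
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())           # basic quality check
)

# Aggregate to a reporting-friendly daily grain
daily = clean.groupBy(F.to_date("event_ts").alias("day")).count()
daily.write.mode("overwrite").parquet("/data/curated/daily_counts")
```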
Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in data engineering.
- Proficiency in PySpark, Python, and SQL.
- Experience with Azure Databricks is a plus.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.
Preferred Qualifications:
- Experience with cloud platforms such as Azure, AWS, or Google Cloud.
- Knowledge of data warehousing concepts and technologies.
- Familiarity with ETL tools and processes.
How to Apply: In addition to Easy Apply on LinkedIn, click on the application link.
Data Engineer
Posted 4 days ago
Job Description
About Position:
We are looking for an experienced Data Engineer to join our growing data team. The ideal candidate will have a strong background in building and optimizing data pipelines, data architecture, and data sets. You will work closely with data scientists, analysts, and software engineers to support data initiatives and ensure optimal data delivery architecture is consistent throughout ongoing projects.
- Role: Data Engineer
- Location: Noida
- Experience: 5+ years
- Job Type: Full Time Employment
What You'll Do:
- Design and develop ETL/ELT pipelines using Azure Data Factory, Databricks, and other Azure services.
- Build and maintain data lakes, data warehouses, and real-time data streaming solutions (an illustrative Delta Lake upsert follows this list).
- Write efficient and reusable Python scripts for data transformation, automation, and orchestration.
- Optimize and manage SQL queries and database performance across Azure SQL, Synapse, and other platforms.
- Collaborate with data scientists, analysts, and business stakeholders to deliver actionable insights.
- Implement CI/CD pipelines for data workflows and ensure robust data governance and security.
- Monitor and troubleshoot data pipelines and ensure high availability and reliability.
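One common building block of such lakehouse pipelines is an idempotent upsert into a Delta table. The sketch below assumes a Databricks (or delta-spark) environment; the paths and join key are hypothetical.

```python
# Hedged sketch: idempotent upsert (MERGE) into a Delta table. Requires a
# Delta-enabled Spark environment such as Databricks; paths and the join
# key are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

updates = spark.read.parquet("/mnt/landing/customers/")

target = DeltaTable.forPath(spark, "/mnt/lake/customers")
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()      # update existing customers
    .whenNotMatchedInsertAll()   # insert new ones
    .execute())
```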
Expertise You'll Bring:
- 5+ years of experience in Microsoft Azure Cloud, Azure Data Factory, Databricks, Spark, Scala/Python, ADO.
- 5+ years of experience working with relational databases (SQL, Oracle)
- 5+ years of experience working with Provider and Payer data
- 5+ years of combined experience in data engineering, ingestion, normalization, transformation, aggregation, structuring, and storage
- 5+ years of combined experience working with industry standard relational, dimensional or non-relational data storage systems
- 5+ years of experience in designing ETL/ELT solutions using tools like Informatica, DataStage, SSIS, PL/SQL, T-SQL, etc.
- 5+ years of experience in managing data assets using SQL, Python, Scala, VB.NET or other similar querying/coding language
- 3+ years of experience working with healthcare data or data to support healthcare organizations
- Working knowledge of various tools such as ALM, Rally, etc.
- Experience in US healthcare
Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.
- If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry’s best
Let’s unleash your full potential at Persistent
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
Data Engineer
Posted 4 days ago
Job Description
Job Title: Data Engineer
Company: Enablemining
Location: Remote
Employment Type: Full-time
Seniority Level: Mid-Level
Experience: Minimum 2 years
Education: BE/BTech or MCA
Enablemining is a global mining consultancy headquartered in Australia. We specialize in strategy, mine planning, and technical evaluations for coal and metalliferous mines. Our work is grounded in structured problem-solving and innovation — helping clients maximize the value of their mining assets.
We are looking for a skilled Data Engineer to join our data and analytics team. You’ll be responsible for building and optimizing data pipelines, transforming raw datasets into usable formats, and enabling insight through interactive reporting solutions.
You will work across modern tools such as PySpark, Python, SQL, Power BI, and DAX, and collaborate closely with business teams to create scalable, impactful data systems.
- Design and maintain data pipelines using PySpark and SQL
- Develop efficient ETL workflows and automate data ingestion (see the ingestion sketch after this list)
- Support data transformation and analytics with Python
- Create and manage interactive dashboards using Power BI and DAX
- Integrate and manage data from Databricks and other platforms
- Ensure accuracy, performance, and scalability of data solutions
- Work with stakeholders to understand and deliver on reporting needs
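As a hedged example of the automated-ingestion bullet, the sketch below pulls records from a REST endpoint into Parquet, where downstream tools such as Databricks or Power BI could pick them up; the URL and fields are hypothetical.

```python
# Illustrative ingestion sketch; the endpoint, fields, and output path are
# hypothetical. Writing Parquet via pandas requires pyarrow (or fastparquet).
import pandas as pd
import requests

def ingest(url: str, out_path: str) -> None:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()                            # fail fast on bad responses
    df = pd.DataFrame(resp.json())                     # assumes a JSON list of records
    df = df.drop_duplicates().dropna(subset=["id"])    # basic hygiene
    df.to_parquet(out_path, index=False)

if __name__ == "__main__":
    ingest("https://api.example.com/v1/production", "production.parquet")
```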
Requirements:
- Minimum 2 years of experience as a Data Engineer or in a related role
- Proficiency in PySpark, SQL, and Python
- Experience in Power BI, with strong skills in DAX
- Familiarity with Databricks or other data lakehouse platforms
- Strong analytical, problem-solving, and communication skills
Data Engineer
Posted 4 days ago
Job Description
We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.
Key Responsibilities:
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
- Build and optimize data flows and transformations for structured and unstructured data.
- Develop scalable ETL/ELT processes to extract data from various sources including SQL, APIs, and flat files.
- Implement data quality checks, error handling, and performance tuning of data pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and Azure SQL.
- Participate in code reviews, version control, and CI/CD processes.
- Ensure data security, privacy, and compliance with governance standards.
Required Skills & Qualifications:
- 4–8 years of experience in Data Engineering or related field.
- Strong hands-on experience with Azure Data Factory and Azure Databricks (Spark-based development).
- Proficiency in Python, SQL, and PySpark for data manipulation.
- Experience with Delta Lake, data versioning, and streaming/batch data processing (a streaming sketch follows this list).
- Working knowledge of Azure services such as ADLS, Azure Blob Storage, and Azure Key Vault.
- Familiarity with DevOps, Git, and CI/CD pipelines in data engineering workflows.
- Strong understanding of data modeling, data warehousing, and performance tuning.
- Excellent analytical, communication, and problem-solving skills.
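To illustrate the streaming/batch requirement above, here is a hedged Structured Streaming sketch that lands JSON files into a Delta table; paths and schema are hypothetical, and it assumes a Delta-enabled Spark environment such as Databricks (where Auto Loader is a common alternative source).

```python
# Hedged streaming sketch: JSON landing zone -> Delta table. Assumes a
# Delta-enabled Spark environment; paths and schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

schema = (StructType()
          .add("sensor_id", StringType())
          .add("reading", DoubleType()))

stream = (spark.readStream
          .schema(schema)               # streaming file sources need an explicit schema
          .json("/mnt/landing/sensors/"))

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/chk/sensors")  # enables exactly-once recovery
       .outputMode("append")
       .start("/mnt/lake/sensors"))
```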
Data Engineer
Posted today
Job Description
Responsibilities
- At least 1.5 years of experience in Data Engineering support or a related role, with hands-on exposure to AWS.
Technical Skills:
- Proficiency in AWS services (e.g., S3, EMR, Glue, Athena, Secrets Manager, EC2, CloudWatch, RDS, Redshift); a small operational sketch follows this list
- Basic Knowledge of SQL for data querying and analysis
- Familiarity with scripting languages such as Python or shell.
- Basic understanding of ETL Processes.
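In a support rotation like this, much of the day-to-day work is checking pipeline health with the AWS SDK. The boto3 calls below exist in the SDK; the job name and escalation step are hypothetical placeholders.

```python
# Illustrative support check: status of the latest run of a Glue job.
# The job name and escalation logic are hypothetical placeholders.
import boto3

glue = boto3.client("glue")

def latest_run_status(job_name: str) -> str:
    runs = glue.get_job_runs(JobName=job_name, MaxResults=1)["JobRuns"]
    return runs[0]["JobRunState"] if runs else "NO_RUNS"

if __name__ == "__main__":
    status = latest_run_status("nightly-orders-etl")
    print(f"nightly-orders-etl: {status}")
    if status in ("FAILED", "TIMEOUT"):
        # in a real 24x7 rotation this would page the on-call engineer
        print("escalate to on-call")
```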
Soft Skills:
- Effective communication and teamwork abilities
- Ability to work in a 24x7 support environment with a 6-day work week
Qualifications
Education: Bachelor's degree in any field.