23,124 Data Engineers jobs in India
Data Engineers
Posted today
Job Description
At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place!
Our approach is simple — empower engineers with the best tools possible to make an impact within their industry.
We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.
Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers.
Our commitment is simple - finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!
About the role
As a Data Engineer at Rearc, you'll contribute to the technical excellence of our data engineering team. Your expertise in data architecture, ETL processes, and data modeling will help optimize data workflows for efficiency, scalability, and reliability. You'll work closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders will be essential as you support data-driven initiatives and contribute to their successful implementation.
What you'll do- Collaborate with Colleagues : Work closely with colleagues to understand customers' data requirements and challenges, contributing to the development of robust data solutions tailored to client needs.
- Apply DataOps Principles : Embrace a DataOps mindset and utilize modern data engineering tools and frameworks like Apache Airflow, Apache Spark, or similar, to create scalable and efficient data pipelines and architectures.
- Support Data Engineering Projects : Assist in managing and executing data engineering projects, providing technical support and contributing to project success.
- Promote Knowledge Sharing : Contribute to our knowledge base through technical blogs and articles, advocating for best practices in data engineering, and fostering a culture of continuous learning and innovation.
- 2+ years of experience in data engineering, data architecture, or related fields, bringing valuable expertise in managing and optimizing data pipelines and architectures.
- Solid track record of contributing to complex data engineering projects, including assisting in the design and implementation of scalable data solutions.
- Hands-on experience with ETL processes, data warehousing, and data modelling tools, enabling the support and delivery of efficient and robust data pipelines.
- Good understanding of data integration tools and best practices, facilitating seamless data flow across systems.
- Familiarity with cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery) ensuring effective utilization of cloud resources for data processing and analytics.
- Strong analytical skills to address data challenges and support data-driven decision-making.
- Proficiency in implementing and optimizing data pipelines using modern tools and frameworks.
- Strong communication and interpersonal skills enabling effective collaboration with cross-functional teams and stakeholder engagement.
Your first few weeks at Rearc will be spent in an immersive learning environment where our team will help you get up to speed. Within the first few months, you'll have the opportunity to experiment with a lot of different tools as you find your place on the team.
Rearc is committed to a diverse and inclusive workplace. Rearc is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Data Engineers
Posted today
Job Description
SQL/NoSQL, cloud data solutions
Requirements
• Frontend Framework: Angular
• Servers: Tomcat, Jetty, JBoss, Nginx, Apache HTTP Server
• Tools: Maven, Log4j 2, JUnit 5, Mockito, Postman, Swagger, JMeter, Logback
• OS: Windows, Linux
• Version Control: Git, GitHub
• IDE: Eclipse, STS, IntelliJ IDEA
• Messaging Systems: Apache Kafka
• Cloud: AWS, Azure
• DevOps Tools: Docker, Kubernetes, GitLab
Senior Data Engineers
Posted today
Job Description
Role and Responsibilities
Skills and Experience
Data Engineers 3-5yrs
Posted today
Job Description
Day-to-day tasks involve refactoring legacy Spark jobs to new standards, upgrading Airflow jobs, and completing migrations. The manager notes that AI-based automations are available to assist with code refactoring, and three full-time engineers will provide support. Required skills include expertise in Airflow and Spark; AWS exposure is a plus, and experience with modern editors like Cursor would be beneficial. The role involves refactoring existing data pipelines, so new hires are not expected to build them from scratch.
Non-negotiable Skills:
At least 3 years of experience with Python, Spark, and Airflow (Airflow experience may be less than 3 years).
Candidates without all three required skills may be considered if they have strong experience in Python and Spark.
Nice to Haves:
AWS experience, particularly with S3 and EMR, is desirable but not strictly required.
Interview process: 2 rounds
Coding round focusing on Python and SQL/PySpark
The manager is open to candidates who request WFH, provided they are willing to come to the office if required.
Regular hours: 10 am to 7 pm
Duration: 6 months (no obligation for renewal; renewal depends on business needs and performance)
Get to Know the Role
You will support the mission of the team by maintaining and extending the platform's capabilities through the implementation of new features and continuous improvements. You will also explore new developments in the space and continuously bring them to the platform, thereby helping the data community at Client.
The Critical Tasks You Will Perform
- You will maintain and extend the Python/Go/Scala backend for Client's Airflow, Spark, Trino and Starrocks platform.
- You will modify and extend Python/Scala Spark applications and Airflow pipelines for better performance, reliability, and cost.
- You will design and implement architectural improvements for new use cases or efficiency.
- You will build platforms that can scale to the 3 Vs of Big Data (Volume, Velocity, Variety).
- You will follow testing best practices and SRE best practices to ensure system stability and reliability.
Qualifications
What Essential Skills You Will Need
- An undergraduate degree in Software Engineering, Computer Science, or a related field.
- Proficiency in at least one of Python, Go, or Scala, with a strong appetite to learn other programming languages.
- You have 3-5 years of relevant professional experience.
- Good working knowledge of 3 or more of the following: Airflow, Spark, relational databases (ideally MySQL), Kubernetes, Starrocks, Trino, and backend API implementation, with a passion for learning the others.
- Experience with AWS services (S3, EKS, IAM) and infrastructure-as-code tools like Terraform.
- Proficiency in CI/CD tools (Jenkins, GitLab, etc.).
- You are highly motivated to work smart using the AI resources available at Client.
Skills That Are Good to Have
- Proficiency in Kubernetes, with hands-on experience building custom resources using frameworks like kubebuilder.
- Proficiency in Apache Spark, with good knowledge of resource managers like YARN and Kubernetes and how Spark interacts with them.
- Advanced understanding of Apache Airflow and how it works with the Celery and/or Kubernetes executor backends, with exposure to the Python SQLAlchemy framework.
- Advanced knowledge of other query engines such as Trino and Starrocks.
- Advanced knowledge of AWS Cloud.
- Good understanding of lakehouse table formats like Iceberg and Delta Lake, and how query engines work with them.
AI Developers & Data Engineers
Posted today
Job Description
Key Responsibilities:
• Design, develop, and maintain scalable, efficient, and reliable systems to support GenAI and machine learning-based applications and use cases
• Lead the development of data pipelines, architectures, and tools to support data-intensive projects, ensuring high performance, security, and compliance
• Collaborate with other stakeholders to integrate AI and ML models into production-ready systems
• Work closely with non-backend expert counterparts, such as data scientists and ML engineers, to ensure seamless integration of AI and ML models into backend systems
• Ensure high-quality code, following best practices and adhering to industry standards and company guidelines
Hard Requirements:
• Senior backend engineer with a proven track record of owning the backend portion of projects
• Experience collaborating with product, project, and domain team members
• Strong understanding of data pipelines, architectures, and tools
• Proficiency in Python (ability to read, write, and debug Python code with minimal guidance)
Mandatory Skills:
• Machine Learning: experience with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch
• Python: proficiency in Python programming, with experience working with libraries and frameworks such as NumPy, pandas, and Flask
• Natural Language Processing: experience with NLP techniques such as text processing, sentiment analysis, and topic modeling
• Deep Learning: experience with deep learning frameworks such as TensorFlow or PyTorch
• Data Science: experience working with data science tools
• Backend: experience with backend development, including design, development, and deployment of scalable and modular systems
• Artificial Intelligence: experience with AI concepts, including computer vision, robotics, and expert systems
• Pattern Recognition: experience with pattern recognition techniques such as clustering, classification, and regression
• Statistical Modeling: experience with statistical modeling, including hypothesis testing, confidence intervals, and regression
Ab Initio Data Engineers
Posted today
Job Description
Exusia, a cutting-edge digital transformation consultancy, is looking for top talent in the Data Engineering space with specific skills in Ab Initio / Azure Data Engineering services to join our global delivery team's Industry Analytics practice.
What’s the Role?
A full-time role working with Exusia's clients to design, develop, and maintain large-scale data engineering solutions. The right candidates will also get the chance to work across the entire data landscape, including Data Governance and Metadata Management, and will work closely with client stakeholders to capture requirements and to design and implement Analytical Reporting, Compliance, and Data Governance solutions.
Qualifications & Role Responsibilities
- Master of Science (preferably in Computer and Information Sciences or Business Information Technology) or an Engineering degree in the above areas.
- A minimum of 4 years' experience in the Data Management, Data Engineering, and Data Governance space, with hands-on project experience using Ab Initio, PySpark, Databricks, and SAS
- Should have worked on large data initiatives and should have exposure to different ETL / Data engineering tools
- Work with business stakeholders to gather and analyze business requirements, building a solid understanding of the Data Analytics and Data Governance domain
- Document, discuss and resolve business, data and reporting issues within the team, across functional teams, and with business stakeholders
- Should be able to work independently and come up with solution design
- Build optimized data processing and data governance solutions using the given toolset
- Collaborate with delivery leadership to deliver projects on time adhering to the quality standards
Mandatory Skills:
- Must have strong Data Warehousing / Data Engineering foundational skills with exposure to different types of data architecture
- Strong conceptual understanding of Data Management & Data Governance principles
- Hands-on experience using:
- Strong Ab Initio skills, with hands-on experience on GDE, Express>IT, Conduct>IT, and Metadata Hub
- Databricks, with fluency in PySpark & Spark SQL
- Experience working with multiple databases like Oracle/SQL Server/Netezza, as well as cloud-hosted DWHs like Snowflake/Redshift/Synapse and BigQuery
- Exposure to Azure services relevant to data engineering - ADF/Databricks/Synapse Analytics
- Experience working in an agile software delivery model is required.
- Prior data modelling experience is mandatory, preferably for DWH/data marts/lakehouse
- Discuss & document data and analytics requirements with cross functional teams and business stakeholders.
- Analyze requirements and come up with technical specifications, source-to-target mapping, data and data models
- Manage changing priorities during the software development lifecycle (SDLC)
- Transforming business/functional requirements into technical specifications.
- Azure Certification relevant for Data Engineering/Analytics
- Experience and knowledge of one or more domains within Banking and Financial Services
Nice-to-Have Skills:
- Exposure to tools like Talend, Informatica, and SAS for data processing
- Prior experience in converting Talend/Informatica/Mainframe based data pipelines to Ab Initio will be a big plus
- Data validation and testing using SQL or any tool-based testing methods
- Reporting/Visualization tool experience - PowerBI
- Exposure to Data Governance projects including Metadata Management, Data Dictionary, Data Glossary, Data Lineage and Data Quality aspects.
About Exusia
Exusia is a global technology consulting company that empowers its clients to gain a competitive edge by accelerating business objectives and providing strategy and solutions in data management and analytics. The company has established its leadership position by solving some of the world's largest and most complex data problems in the financial, healthcare, telecommunications and high technology industries.
Exusia’s mission is to transform the world through the innovative use of information.
Exusia was recognized by Inc. 5000 and by Crain’s publications as one of the fastest growing privately held companies in the world. Since the company’s founding in 2012, Exusia has experienced an impressive seven years of revenue growth and has expanded its operations in the Americas, Asia, Africa and the UK. Exusia has also recently been recognized by publications such as CIO Review, Industry Era, Insight Success and the CIO Bulletin for the company’s innovation in IT Services, the Telecommunications and Healthcare industries and its entrepreneurship. The company is headquartered in Miami, Florida, United States, with development centers in Pune, Hyderabad, Bengaluru and Gurugram, India.
Interested applicants should apply by forwarding their CV to:
6 Lead Data Engineers
Posted today
Job Description
Lead Data Engineers
- 12 months contract with 2x6 months ext. options!
- Hybrid work arrangement
- Australian Citizens with current Baseline Clearance
Infinite Consulting is seeking Lead Data Engineers for our esteemed Federal Government Client. This is a July start for a 12 month initial contract – 2x6 months further extensions possible based on funding and approval.
About the Role:
You will join a well-established team specialising in data innovation, data solutions and cloud platform development.
- As the Lead Data Engineer, you will lead data projects and work closely with key stakeholders to create solutions for business problems.
- You will be responsible for designing and developing Azure-based data and analytics solutions and platforms.
Essential criteria
- 3+ years of experience in Data Development;
- Ability to perform detailed design based on high-level architecture
- Experience in detailed design for data integration on Azure cloud data services, including ADLS, SQL DB, Data Lake, Synapse, and Blob Storage
- Solid experience developing complex ETLs based on SSIS and customised coding in C#, .NET, or VB.NET
- Development experience with Power BI (stored procedures, DAX, functions, views)
- Proficiency with Azure Data Factory and Azure Event Hubs
- Strong understanding of DevOps or another version control system
- Experience working in an Agile environment
- Exceptional communication skills, both written and oral
Submission Requirements:
Duration: July 2025 start! 12 months with extension options
Clearance: Australian Citizens with current Baseline clearance
Submission deadline: 11/04/2025
If you are interested in finding out more about the role, apply today or contact Varsha on for a full assignment brief.