23,124 Data Engineers jobs in India

Data Engineers

Pune, Maharashtra Rearc

Posted today


Job Description

About Rearc

At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place!


Our approach is simple — empower engineers with the best tools possible to make an impact within their industry.

We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.


Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers.


Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!


About the role

As a Data Engineer at Rearc, you'll contribute to the technical excellence of our data engineering team. Your expertise in data architecture, ETL processes, and data modeling will help optimize data workflows for efficiency, scalability, and reliability. You'll work closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders will be essential as you support data-driven initiatives and contribute to their successful implementation.

What you'll do
  • Collaborate with Colleagues: Work closely with colleagues to understand customers' data requirements and challenges, contributing to the development of robust data solutions tailored to client needs.
  • Apply DataOps Principles: Embrace a DataOps mindset and use modern data engineering tools and frameworks such as Apache Airflow and Apache Spark to create scalable, efficient data pipelines and architectures (see the sketch after this list).
  • Support Data Engineering Projects: Assist in managing and executing data engineering projects, providing technical support and contributing to project success.
  • Promote Knowledge Sharing: Contribute to our knowledge base through technical blogs and articles, advocating for best practices in data engineering and fostering a culture of continuous learning and innovation.
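
The posting names Airflow and Spark but shows no code, so purely as an illustration of the kind of pipeline described above, here is a minimal Airflow sketch. The DAG id, schedule, and task callables are hypothetical stand-ins, not part of the job posting.

```python
# A minimal, illustrative Airflow DAG; dag_id, schedule, and the task
# callables are hypothetical stand-ins.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw records from a source system")


def transform():
    print("clean and reshape the extracted records")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run transform only after extract succeeds
```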
We're looking for:
  • 2+ years of experience in data engineering, data architecture, or related fields, bringing valuable expertise in managing and optimizing data pipelines and architectures.
  • Solid track record of contributing to complex data engineering projects, including assisting in the design and implementation of scalable data solutions.
  • Hands-on experience with ETL processes, data warehousing, and data modeling tools, enabling the support and delivery of efficient and robust data pipelines.
  • Good understanding of data integration tools and best practices, facilitating seamless data flow across systems.
  • Familiarity with cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery) ensuring effective utilization of cloud resources for data processing and analytics.
  • Strong analytical skills to address data challenges and support data-driven decision-making.
  • Proficiency in implementing and optimizing data pipelines using modern tools and frameworks.
  • Strong communication and interpersonal skills enabling effective collaboration with cross-functional teams and stakeholder engagement.

Your first few weeks at Rearc will be spent in an immersive learning environment where our team will help you get up to speed. Within the first few months, you'll have the opportunity to experiment with a lot of different tools as you find your place on the team.


Rearc is committed to a diverse and inclusive workplace. Rearc is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.







Data Engineers

Bengaluru, Karnataka Rearc

Posted today


Job Description

About Rearc

At Rearc, we're committed to empowering engineers to build awesome products and experiences. Success as a business hinges on our people's ability to think freely, challenge the status quo, and speak up about alternative problem-solving approaches. If you're an engineer driven by the desire to solve problems and make a difference, you're in the right place!


Our approach is simple — empower engineers with the best tools possible to make an impact within their industry.

We're on the lookout for engineers who thrive on ownership and freedom, possessing not just technical prowess, but also exceptional leadership skills. Our ideal candidates are hands-on-keyboard leaders who don't just talk the talk but also walk the walk, designing and building solutions that push the boundaries of cloud computing.


Founded in 2016, we pride ourselves on fostering an environment where creativity flourishes, bureaucracy is non-existent, and individuals are encouraged to challenge the status quo. We're not just a company; we're a community of problem-solvers dedicated to improving the lives of fellow software engineers.


Our commitment is simple: finding the right fit for our team and cultivating a desire to make things better. If you're a cloud professional intrigued by our problem space and eager to make a difference, you've come to the right place. Join us, and let's solve problems together!


About the role

As a Data Engineer at Rearc, you'll contribute to the technical excellence of our data engineering team. Your expertise in data architecture, ETL processes, and data modeling will help optimize data workflows for efficiency, scalability, and reliability. You'll work closely with cross-functional teams to design and implement robust data solutions that meet business objectives and adhere to best practices in data management. Building strong partnerships with technical teams and stakeholders will be essential as you support data-driven initiatives and contribute to their successful implementation.

What you'll do
  • Collaborate with Colleagues: Work closely with colleagues to understand customers' data requirements and challenges, contributing to the development of robust data solutions tailored to client needs.
  • Apply DataOps Principles: Embrace a DataOps mindset and use modern data engineering tools and frameworks such as Apache Airflow and Apache Spark to create scalable, efficient data pipelines and architectures (an illustrative PySpark example follows this list).
  • Support Data Engineering Projects: Assist in managing and executing data engineering projects, providing technical support and contributing to project success.
  • Promote Knowledge Sharing: Contribute to our knowledge base through technical blogs and articles, advocating for best practices in data engineering and fostering a culture of continuous learning and innovation.
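
As a rough illustration of the Spark side of the pipelines described above, here is a hedged PySpark sketch; the input path, column names, and output location are all invented for the example.

```python
# Illustrative PySpark batch transformation; paths and columns are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

orders = spark.read.parquet("/data/raw/orders")  # hypothetical source

# Derive a date column, then aggregate to a daily summary.
daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.count("*").alias("order_count"), F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("/data/curated/daily_orders")
```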
We're looking for:
  • 2+ years of experience in data engineering, data architecture, or related fields, bringing valuable expertise in managing and optimizing data pipelines and architectures.
  • Solid track record of contributing to complex data engineering projects, including assisting in the design and implementation of scalable data solutions.
  • Hands-on experience with ETL processes, data warehousing, and data modeling tools, enabling the support and delivery of efficient and robust data pipelines.
  • Good understanding of data integration tools and best practices, facilitating seamless data flow across systems.
  • Familiarity with cloud-based data services and technologies (e.g., AWS Redshift, Azure Synapse Analytics, Google BigQuery) ensuring effective utilization of cloud resources for data processing and analytics.
  • Strong analytical skills to address data challenges and support data-driven decision-making.
  • Proficiency in implementing and optimizing data pipelines using modern tools and frameworks.
  • Strong communication and interpersonal skills enabling effective collaboration with cross-functional teams and stakeholder engagement.

Your first few weeks at Rearc will be spent in an immersive learning environment where our team will help you get up to speed. Within the first few months, you'll have the opportunity to experiment with a lot of different tools as you find your place on the team.


Rearc is committed to a diverse and inclusive workplace. Rearc is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.







Data Engineers

Bengaluru, Karnataka Axiom Software Solutions Limited

Posted today


Job Description

SQL/NoSQL, cloud data solutions

Requirements

• Frontend Framework: Angular
• Servers: Tomcat, Jetty, JBoss, Nginx, Apache HTTP Server
• Tools: Maven, Log4j 2, JUnit 5, Mockito, Postman, Swagger, JMeter, Logback
• OS: Windows, Linux
• Version Control: Git, GitHub
• IDE: Eclipse, STS, IntelliJ IDEA
• Messaging Systems: Apache Kafka
• Cloud: AWS, Azure
• DevOps Tools: Docker, Kubernetes, GitLab


Senior Data Engineers

Hyderabad, Telangana GSPANN

Posted today


Job Description

GSPANN is hiring a Senior Data Engineer to design, develop, and optimize scalable data solutions. The role requires expertise in Azure Data Factory, Azure Databricks, PySpark, Delta tables, and advanced data modeling, along with skills in performance optimization, API integrations, DevOps, and data governance.

Role and Responsibilities

  • Design, develop, and orchestrate scalable data pipelines using Azure Data Factory (ADF).
  • Build and manage Apache Spark clusters, create notebooks, and run jobs in Azure Databricks.
  • Ingest, organize, and transform data within the Microsoft Fabric ecosystem using OneLake.
  • Author complex transformations and write SQL (Structured Query Language) queries for large-scale data processing using PySpark and Spark SQL.
  • Create, optimize, and maintain Delta Lake tables, applying operations such as VACUUM, ZORDER, and OPTIMIZE (see the sketch after this list).
  • Parse, validate, and transform semi-structured JSON (JavaScript Object Notation) datasets.
  • Build and consume REST/OData services for custom data ingestion through API (Application Programming Interface) integration.
  • Implement bronze, silver, and gold layers in data lakes using the Medallion Architecture to ensure clean and reliable data.
  • Apply partitioning, caching, and resource tuning to efficiently process high volumes of data for large-scale performance optimization.
  • Design star and snowflake schemas along with fact and dimension tables for multidimensional modeling in reporting use cases.
  • Work with tabular and OLAP (Online Analytical Processing) cube structures in Azure Analysis Services to enable downstream business intelligence.
  • Collaborate with the DevOps team to define infrastructure, manage access and security, and automate deployments.
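
The maintenance operations named above are standard Delta Lake commands; purely as an illustrative sketch for a Databricks notebook, with a hypothetical table name:

```python
# Hedged sketch of routine Delta Lake table maintenance; `sales.orders`
# is a hypothetical table. OPTIMIZE/ZORDER and VACUUM are standard
# Delta Lake operations on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a commonly filtered column.
spark.sql("OPTIMIZE sales.orders ZORDER BY (order_date)")

# Drop data files no longer referenced by the table, retaining 7 days of
# history so time travel and concurrent readers keep working.
spark.sql("VACUUM sales.orders RETAIN 168 HOURS")
```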
Skills and Experience

  • Ingest and harmonize data from SAP (Systems, Applications, and Products) ECC (ERP Central Component) and S/4HANA systems using SAP Datasphere.
  • Use Git, Azure DevOps Pipelines, Terraform, or Azure Resource Manager (ARM) templates for CI/CD (Continuous Integration/Continuous Deployment) and DevOps tooling.
  • Leverage Azure Monitor, Log Analytics, and data pipeline metrics for data observability and monitoring.
  • Conduct query diagnostics, identify bottlenecks, and determine root causes for performance troubleshooting.
  • Apply metadata management, track data lineage, and enforce compliance best practices for data governance and cataloging.
  • Document processes, designs, and solutions effectively in Confluence.

    Senior Data Engineers

    Gurugram, Haryana GSPANN

    Posted today


    Job Description

    GSPANN is hiring a Senior Data Engineer to design, develop, and optimize scalable data solutions. The role requires expertise in Azure Data Factory, Azure Databricks, PySpark, Delta tables, and advanced data modeling, along with skills in performance optimization, API integrations, DevOps, and data governance.

    Role and Responsibilities

  • Design, develop, and orchestrate scalable data pipelines using Azure Data Factory (ADF).
  • Build and manage Apache Spark clusters, create notebooks, and run jobs in Azure Databricks.
  • Ingest, organize, and transform data within the Microsoft Fabric ecosystem using OneLake.
  • Author complex transformations and write SQL (Structured Query Language) queries for large-scale data processing using PySpark and Spark SQL.
  • Create, optimize, and maintain Delta Lake tables, applying operations such as VACUUM, ZORDER, and OPTIMIZE.
  • Parse, validate, and transform semi-structured JSON (JavaScript Object Notation) datasets (an illustrative PySpark snippet follows this list).
  • Build and consume REST/OData services for custom data ingestion through API (Application Programming Interface) integration.
  • Implement bronze, silver, and gold layers in data lakes using the Medallion Architecture to ensure clean and reliable data.
  • Apply partitioning, caching, and resource tuning to efficiently process high volumes of data for large-scale performance optimization.
  • Design star and snowflake schemas along with fact and dimension tables for multidimensional modeling in reporting use cases.
  • Work with tabular and OLAP (Online Analytical Processing) cube structures in Azure Analysis Services to enable downstream business intelligence.
  • Collaborate with the DevOps team to define infrastructure, manage access and security, and automate deployments.
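
As a hedged sketch of the JSON parsing and validation work described above, using PySpark; the schema, landing path, and target table are invented for the example.

```python
# Hedged sketch of parsing and validating semi-structured JSON with PySpark;
# the schema, input path, and output table are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.getOrCreate()

payload_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
])

raw = spark.read.text("/landing/events")  # one JSON document per line

events = (
    raw
    .withColumn("parsed", F.from_json(F.col("value"), payload_schema))
    .select("parsed.*")
    .filter(F.col("event_id").isNotNull())  # from_json yields null on bad rows
)

events.write.mode("append").saveAsTable("silver.events")  # hypothetical table
```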
    Skills and Experience

  • Ingest and harmonize data from SAP (Systems, Applications, and Products) ECC (ERP Central Component) and S/4HANA systems using SAP Datasphere.
  • Use Git, Azure DevOps Pipelines, Terraform, or Azure Resource Manager (ARM) templates for CI/CD (Continuous Integration/Continuous Deployment) and DevOps tooling.
  • Leverage Azure Monitor, Log Analytics, and data pipeline metrics for data observability and monitoring.
  • Conduct query diagnostics, identify bottlenecks, and determine root causes for performance troubleshooting.
  • Apply metadata management, track data lineage, and enforce compliance best practices for data governance and cataloging.
  • Document processes, designs, and solutions effectively in Confluence.

    Data Engineers (3-5 yrs)

    Bengaluru, Karnataka Rangam India

    Posted today


    Job Description


    This role's day-to-day tasks involve refactoring legacy Spark jobs to new standards, upgrading Airflow jobs, and completing migrations. The manager notes that AI-based automations are available to assist with code refactoring, and that three full-time engineers will provide support. Required skills include expertise in Airflow and Spark; AWS exposure is a plus, and experience with modern editors such as Cursor would be beneficial. The role involves refactoring existing data pipelines, so new hires are not expected to build them from scratch.

    Non-negotiable Skills:

    Python, Spark, and Airflow, with at least 3 years of experience in each tool, though Airflow experience may be shorter.

    Candidates without all three required skills might be considered if they have strong experience in Python and Spark.

    Nice to Haves:

    AWS experience, particularly with S3 and EMR, is desirable but not strictly required.

    Interview process: 2 rounds 

    Coding round focusing on Python and SQL/PySpark

    The manager is open to candidates who request WFH but expects them to come to the office if required.

    Regular hours: 10 am to 7 pm

    Duration: 6 months (no obligation for renewal; depends on business needs and performance)

    Get to Know the Role
    You will support the mission of the team by maintaining and extending the platform's capabilities through the implementation of new features and continuous improvements. You will also explore new developments in the space and continuously bring them to our platform, thereby helping the data community at Client.

    The Critical Tasks You Will Perform
    • You will maintain and extend the Python/Go/Scala backend for Client's Airflow, Spark, Trino, and StarRocks platform.
    • You will modify and extend Python/Scala Spark applications and Airflow pipelines for better performance, reliability, and cost (see the sketch below).
    • You will design and implement architectural improvements for new use cases or efficiency.
    • You will build platforms that can scale to the 3 Vs of Big Data (Volume, Velocity, Variety).
    • You will follow testing best practices and SRE best practices to ensure system stability and reliability.
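
As a hypothetical illustration of the performance-oriented Spark refactoring described above: shuffling once via an explicit repartition and caching a result that feeds two writes. Paths and column names are invented.

```python
# Hypothetical Spark refactor sketch: one explicit shuffle, cached reuse.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("refactor-example").getOrCreate()

events = spark.read.parquet("/warehouse/events")  # hypothetical table

# Aggregate once and cache, because two downstream writes reuse the result.
by_user = (
    events.repartition("user_id")
    .groupBy("user_id")
    .agg(F.count("*").alias("event_count"))
    .cache()
)

by_user.write.mode("overwrite").parquet("/warehouse/user_event_counts")
by_user.filter(F.col("event_count") > 100).write.mode("overwrite").parquet(
    "/warehouse/heavy_users"
)
```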

    Qualifications: What Essential Skills You Will Need
    • Software Engineering, Computer Science, or related undergraduate degree. Proficient in at least one of Python, Go, or Scala, with a strong appetite to learn other programming languages.
    • You have 3-5 years of relevant professional experience.
    • Good working knowledge of 3 or more of the following: Airflow, Spark, relational databases (ideally MySQL), Kubernetes, StarRocks, Trino, and backend API implementation, and a passion for learning the others.
    • Experience with AWS services (S3, EKS, IAM) and infrastructure-as-code tools like Terraform.
    • Proficiency in CI/CD tools (Jenkins, GitLab, etc.).
    • You are highly motivated to work smart and intelligently using the AI resources available at Client.

    Skills that are Good to Have
    • Proficient in Kubernetes, with hands-on experience building custom resources using frameworks like kubebuilder.
    • Proficient in Apache Spark, with good knowledge of resource managers such as YARN and Kubernetes and of how Spark interacts with them.
    • Advanced understanding of Apache Airflow and how it works with the Celery and/or Kubernetes executor backends, with exposure to the Python SQLAlchemy framework.
    • Advanced knowledge of other query engines such as Trino and StarRocks.
    • Advanced knowledge of AWS Cloud.
    • Good understanding of lakehouse table formats such as Iceberg and Delta Lake and of how query engines work with them.


    AI Developers & Data Engineers

    Mumbai, Maharashtra Trigyn Technologies

    Posted today


    Job Description


    Key Responsibilities:
    • Design, develop, and maintain scalable, efficient, and reliable systems to support GenAI and machine learning-based applications and use cases
    • Lead the development of data pipelines, architectures, and tools to support data-intensive projects, ensuring high performance, security, and compliance
    • Collaborate with other stakeholders to integrate AI and ML models into production-ready systems
    • Work closely with non-backend expert counterparts, such as data scientists and ML engineers, to ensure seamless integration of AI and ML models into backend systems
    • Ensure high-quality code, following best practices and adhering to industry standards and company guidelines
    Hard Requirements:
    • Senior backend engineer with a proven track record of owning the backend portion of projects
    • Experience collaborating with product, project, and domain team members
    • Strong understanding of data pipelines, architectures, and tools
    • Proficiency in Python (ability to read, write, and debug Python code with minimal guidance)
    Mandatory Skills:
    • Machine Learning: experience with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch (see the sketch after this list)
    • Python: proficiency in Python programming, with experience working with libraries and frameworks such as NumPy, pandas, and Flask
    • Natural Language Processing: experience with NLP techniques such as text processing, sentiment analysis, and topic modeling
    • Deep Learning: experience with deep learning frameworks such as TensorFlow or PyTorch
    • Data Science: experience working with data science tools
    • Backend: experience with backend development, including design, development, and deployment of scalable and modular systems
    • Artificial Intelligence: experience with AI concepts, including computer vision, robotics, and expert systems
    • Pattern Recognition: experience with pattern recognition techniques such as clustering, classification, and regression
    • Statistical Modeling: experience with statistical modeling, including hypothesis testing, confidence intervals, and regression
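
Purely as an illustration touching several of the listed skills (NLP, pattern recognition, statistical modeling), here is a minimal scikit-learn sketch; the tiny training set is a stand-in for real labeled data.

```python
# Illustrative scikit-learn text-classification pipeline; the training
# data is a stand-in, not from the job posting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product", "terrible support", "loved it", "awful experience"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["support was great"]))  # e.g. array([1])
```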


    Ab Initio Data Engineers

    Pune, Maharashtra Exusia

    Posted today


    Job Description

    Exusia, a cutting-edge digital transformation consultancy, is looking for top talent in the Data Engineering space with specific skills in Ab Initio / Azure Data Engineering services to join our global delivery team's Industry Analytics practice.


    What’s the Role?

    This is a full-time role working with Exusia's clients to design, develop, and maintain large-scale data engineering solutions. The right candidates will also get the chance to work across the entire data landscape, including Data Governance and Metadata Management, and will work closely with client stakeholders to capture requirements and to design and implement analytical reporting, compliance, and data governance solutions.


    Qualifications & Role Responsibilities

    • Master of Science (preferably in Computer and Information Sciences or Business Information Technology) or an Engineering degree in the above areas.
    • A minimum of 4 years' experience in Data Management, Data Engineering, and Data Governance, with hands-on project experience using Ab Initio, PySpark, Databricks, and SAS.
    • Should have worked on large data initiatives and have exposure to different ETL / data engineering tools.
    • Work with business stakeholders to gather and analyze business requirements, building a solid understanding of the Data Analytics and Data Governance domain.
    • Document, discuss, and resolve business, data, and reporting issues within the team, across functional teams, and with business stakeholders.
    • Should be able to work independently and come up with solution designs.
    • Build optimized data processing and data governance solutions using the given toolset.
    • Collaborate with delivery leadership to deliver projects on time while adhering to quality standards.



    Mandatory Skills:

    • Must have strong Data Warehousing / Data Engineering foundational skills, with exposure to different types of data architecture.
    • Strong conceptual understanding of Data Management & Data Governance principles.
    • Hands-on experience using:
    • Strong Ab Initio skills, with hands-on experience on GDE, Express>IT, Conduct>IT, and MetadataHub.
    • Databricks, with fluency in PySpark & Spark SQL (see the sketch after this list).
    • Experience working with multiple databases such as Oracle/SQL Server/Netezza, as well as cloud-hosted DWHs such as Snowflake/Redshift/Synapse and BigQuery.
    • Exposure to Azure services relevant to data engineering: ADF/Databricks/Synapse Analytics.
    • Experience working in an agile software delivery model is required.
    • Prior data modelling experience is mandatory, preferably for DWH/data marts/lakehouse.
    • Discuss and document data and analytics requirements with cross-functional teams and business stakeholders.
    • Analyze requirements and produce technical specifications, source-to-target mappings, and data models.
    • Manage changing priorities during the software development lifecycle (SDLC).
    • Transform business/functional requirements into technical specifications.
    • Azure certification relevant to Data Engineering/Analytics.
    • Experience and knowledge of one or more domains within Banking and Financial Services.
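
As a hedged Spark SQL sketch of the DWH-style modelling work listed above, loading a simple fact table from staging via dimension lookups. All table and column names are invented for illustration.

```python
# Hedged star-schema fact-load sketch in Spark SQL; every table and
# column name here is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    INSERT OVERWRITE TABLE dwh.fact_sales
    SELECT s.sale_id,
           d.date_key,
           c.customer_key,
           s.amount
    FROM   staging.sales s
    JOIN   dwh.dim_date     d ON d.calendar_date = s.sale_date
    JOIN   dwh.dim_customer c ON c.customer_id   = s.customer_id
""")
```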


    Nice-to-Have Skills:

    • Exposure to tools such as Talend, Informatica, and SAS for data processing.
    • Prior experience converting Talend/Informatica/Mainframe-based data pipelines to Ab Initio will be a big plus.
    • Data validation and testing using SQL or any tool-based testing methods.
    • Reporting/visualization tool experience: Power BI.
    • Exposure to Data Governance projects, including Metadata Management, Data Dictionary, Data Glossary, Data Lineage, and Data Quality aspects.


    About Exusia

    Exusia is a global technology consulting company that empowers its clients to gain a competitive edge by accelerating business objectives and providing strategy and solutions in data management and analytics. The company has established its leadership position by solving some of the world's largest and most complex data problems in the financial, healthcare, telecommunications, and high technology industries.

    Exusia’s mission is to transform the world through the innovative use of information. 


    Exusia was recognized by Inc. 5000 and by Crain's publications as one of the fastest-growing privately held companies in the world. Since the company's founding in 2012, Exusia has experienced an impressive seven years of revenue growth and has expanded its operations in the Americas, Asia, Africa, and the UK. Exusia has also recently been recognized by publications such as CIO Review, Industry Era, Insight Success, and the CIO Bulletin for the company's innovation in IT services, the telecommunications and healthcare industries, and its entrepreneurship. The company is headquartered in Miami, Florida, United States, with development centers in Pune, Hyderabad, Bengaluru, and Gurugram, India.

    Interested applicants should apply by forwarding their CV to:


    Ab Initio Data Engineers

    Exusia

    Posted today


    Job Description

    About The Position

    Exusia, a cutting-edge digital transformation consultancy, is looking for top talent in the Data Engineering space with specific skills in Ab Initio / Azure Data Engineering services to join our global delivery team's Industry Analytics practice.

    What’s the Role?

    This is a full-time role working with Exusia's clients to design, develop, and maintain large-scale data engineering solutions. The right candidates will also get the chance to work across the entire data landscape, including Data Governance and Metadata Management, and will work closely with client stakeholders to capture requirements and to design and implement analytical reporting, compliance, and data governance solutions.

    Qualifications & Role Responsibilities

  • Master of Science (preferably in Computer and Information Sciences or Business Information Technology) or an Engineering degree in the above areas.
  • A minimum of 4 years' experience in Data Management, Data Engineering, and Data Governance, with hands-on project experience using Ab Initio, PySpark, Databricks, and SAS.
  • Should have worked on large data initiatives and have exposure to different ETL / data engineering tools.
  • Work with business stakeholders to gather and analyze business requirements, building a solid understanding of the Data Analytics and Data Governance domain.
  • Document, discuss, and resolve business, data, and reporting issues within the team, across functional teams, and with business stakeholders.
  • Should be able to work independently and come up with solution designs.
  • Build optimized data processing and data governance solutions using the given toolset.
  • Collaborate with delivery leadership to deliver projects on time while adhering to quality standards.
    Requirements

    Mandatory Skills:

  • Must have strong Data Warehousing / Data Engineering foundational skills, with exposure to different types of data architecture.
  • Strong conceptual understanding of Data Management & Data Governance principles.
  • Hands-on experience using:
  • Strong Ab Initio skills, with hands-on experience on GDE, Express>IT, Conduct>IT, and MetadataHub.
  • Databricks, with fluency in PySpark & Spark SQL.
  • Experience working with multiple databases such as Oracle/SQL Server/Netezza, as well as cloud-hosted DWHs such as Snowflake/Redshift/Synapse and BigQuery.
  • Exposure to Azure services relevant to data engineering: ADF/Databricks/Synapse Analytics.
  • Experience working in an agile software delivery model is required.
  • Prior data modelling experience is mandatory, preferably for DWH/data marts/lakehouse.
  • Discuss and document data and analytics requirements with cross-functional teams and business stakeholders.
  • Analyze requirements and produce technical specifications, source-to-target mappings, and data models.
  • Manage changing priorities during the software development lifecycle (SDLC).
  • Transform business/functional requirements into technical specifications.
  • Azure certification relevant to Data Engineering/Analytics.
  • Experience and knowledge of one or more domains within Banking and Financial Services.
    Nice-to-Have Skills:

  • Exposure to tools such as Talend, Informatica, and SAS for data processing.
  • Prior experience converting Talend/Informatica/Mainframe-based data pipelines to Ab Initio will be a big plus.
  • Data validation and testing using SQL or any tool-based testing methods (see the sketch after this list).
  • Reporting/visualization tool experience: Power BI.
  • Exposure to Data Governance projects, including Metadata Management, Data Dictionary, Data Glossary, Data Lineage, and Data Quality aspects.
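
As a hypothetical example of the SQL-based data validation mentioned above: reconciling row counts between a staging table and its target with Spark SQL. Table names are invented.

```python
# Hypothetical SQL-based validation check: source/target row-count
# reconciliation; table names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

src = spark.sql("SELECT COUNT(*) AS n FROM staging.sales").first()["n"]
tgt = spark.sql("SELECT COUNT(*) AS n FROM dwh.fact_sales").first()["n"]

assert src == tgt, f"row-count mismatch: staging={src}, dwh={tgt}"
```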
    About Exusia

    Exusia is a global technology consulting company that empowers its clients to gain a competitive edge by accelerating business objectives and providing strategy and solutions in data management and analytics. The company has established its leadership position by solving some of the world's largest and most complex data problems in the financial, healthcare, telecommunications, and high technology industries.

    Exusia’s mission is to transform the world through the innovative use of information. 

    Exusia was recognized by Inc. 5000 and by Crain's publications as one of the fastest-growing privately held companies in the world. Since the company's founding in 2012, Exusia has experienced an impressive seven years of revenue growth and has expanded its operations in the Americas, Asia, Africa, and the UK. Exusia has also recently been recognized by publications such as CIO Review, Industry Era, Insight Success, and the CIO Bulletin for the company's innovation in IT services, the telecommunications and healthcare industries, and its entrepreneurship. The company is headquartered in Miami, Florida, United States, with development centers in Pune, Hyderabad, Bengaluru, and Gurugram, India.

    Interested applicants should apply by forwarding their CV to:


    6 Lead Data Engineers

    Prayagraj, Uttar Pradesh Infinite Consulting

    Posted today


    Job Description


    Lead Data Engineers

    • 12-month contract with 2x6-month extension options!
    • Hybrid work arrangement
    • Australian citizens with current Baseline clearance

    Infinite Consulting is seeking Lead Data Engineers for our esteemed Federal Government client. This is a July start for a 12-month initial contract; two further 6-month extensions are possible based on funding and approval.

    About the Role:

    You will join a well-established team specialising in data innovation, data solutions and cloud platform development.

    • As the Lead Data Engineer, you will lead data projects and work closely with key stakeholders to create solutions for business problems.
    • You will be responsible for designing and developing Azure-based data and analytics solutions and platforms.

    Essential criteria

    • 3+ years of experience in Data Development;
    • Perform detailed design based on high-level architecture;
    • Experience in detailed design for data integration on Azure cloud data services, including ADLS, SQL DB, Synapse data lake, and Blob Storage;
    • Solid experience developing complex ETLs based on SSIS and customised coding in C#, .NET, or VB.NET;
    • Development experience with Power BI (stored procedures, DAX, functions, views);
    • Proficient with Azure Data Factory and Azure Event Hubs;
    • Strong understanding of DevOps or another version control system;
    • Experience working in an Agile environment;
    • Exceptional communication skills, both written and oral.

    Submission Requirements:

    Duration: July 2025 start! 12 months with extension options

    Clearance: Australian Citizens with current Baseline clearance

    Submission deadline: 11/04/2025

    If you are interested in finding out more about the role, apply today or contact Varsha for a full assignment brief.

