7,263 Integration jobs in India
Data Integration Engineer
Posted today
Job Description
Role: Data Integration Engineer
Location: Bangalore
Shift Time: 2-11 PM with cab facility
Experience: 5 to 7 years
- Familiarity with using APIs for application development.
- Knowledge of Python and experience with ETL.
- Good working knowledge of GCP and GCP serverless functions (see the sketch after this list).
- Good working experience with Unix/Linux.
- Prior knowledge of Instana, or experience with monitoring tools in general, is desirable.
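Purely as an illustration of the skill set above (not part of the posting), a minimal sketch of a GCP serverless ETL step might look like the following; the API endpoint and bucket name are assumptions, not real resources.

```python
# Minimal sketch: HTTP-triggered Cloud Function that fetches records from a
# hypothetical source API and stages them as JSON in Cloud Storage for a
# downstream ETL step. The API URL and bucket name are placeholders.
import json

import functions_framework
import requests
from google.cloud import storage

SOURCE_API = "https://example.com/api/v1/orders"   # placeholder endpoint
STAGING_BUCKET = "my-etl-staging-bucket"           # placeholder bucket

@functions_framework.http
def ingest_orders(request):
    """Fetch one batch of records and write them to the staging bucket."""
    resp = requests.get(SOURCE_API, timeout=30)
    resp.raise_for_status()
    records = resp.json()

    blob = storage.Client().bucket(STAGING_BUCKET).blob("staging/orders.json")
    blob.upload_from_string(json.dumps(records), content_type="application/json")

    return {"staged_records": len(records)}, 200
```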
Data Integration Engineer
Posted today
Job Description
Must Have:
- Experience in Data Engineering with a strong focus on Databricks
- Proficiency in Python, SQL, and Spark (PySpark) programming
- Hands-on experience with Delta Lake, Unity Catalog, and MLflow
- Experience working with CI/CD pipelines
Nice to Have:
- Exposure to the Azure ecosystem and its services
- Experience developing ELT/ETL frameworks
- Automation of workflows for ingesting structured, semi-structured, and unstructured data
- Familiarity with data visualization tools such as Power BI
Data Integration Engineer
Posted today
Job Description
Job Description – Senior Data Integration Engineer (Azure Fabric + CRM Integrations)
Location: Noida
Employment Type: Full-time
About Crenovent Technologies
Crenovent is building RevAi Pro, an enterprise-grade Revenue Operations SaaS platform that integrates CRM, billing, contract, and marketing systems with AI agents and Generative AI search. Our vision is to redefine RevOps with AI-driven automation, real-time intelligence, and industry-specific workflows.
We are now hiring a Senior Data Integration Engineer to lead the integration of CRM platforms (Salesforce, Microsoft Dynamics, HubSpot) into Azure Fabric and to enable secure, multi-tenant ingestion pipelines for RevAi Pro.
Role Overview
You will be responsible for designing, building, and scaling data pipelines that bring CRM data into Azure Fabric (OneLake, Data Factory, Synapse-style pipelines) and transform it into RevAi Pro's standardized schema (50+ core fields, industry-specific mappings).
This is a hands-on, architecture-plus-build role where you will work closely with RevOps SMEs, product engineers, and AI teams to ensure seamless data availability, governance, and performance across multi-tenant environments.
Key Responsibilities
Data Integration & Pipelines
- Design and implement data ingestion pipelines from Salesforce, Dynamics 365, and HubSpot into Azure Fabric (see the sketch after this list).
- Build ETL/ELT workflows using Azure Data Factory, Fabric pipelines, and Python/SQL.
- Ensure real-time and batch sync options for CRM objects (Leads, Accounts, Opportunities, Forecasts, Contracts).
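By way of illustration only, and not Crenovent's actual implementation, a batch pull of one CRM object could start with a plain call to the Salesforce REST query API and land the rows for a downstream Fabric pipeline to pick up; the instance URL, token handling, and landing path below are assumptions.

```python
# Illustrative sketch: batch-pull Salesforce Accounts via the REST query API
# and land them as newline-delimited JSON for a downstream Fabric pipeline.
# Instance URL, access token, and landing path are placeholder assumptions.
import json

import requests

INSTANCE_URL = "https://example.my.salesforce.com"   # placeholder
ACCESS_TOKEN = "<oauth-access-token>"                 # obtained via OAuth
SOQL = "SELECT Id, Name, Industry, AnnualRevenue FROM Account"

def pull_accounts(path: str = "landing/accounts.jsonl") -> int:
    """Page through the query results and write one JSON record per line."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    url = f"{INSTANCE_URL}/services/data/v59.0/query"
    params = {"q": SOQL}
    written = 0

    with open(path, "w", encoding="utf-8") as out:
        while url:
            resp = requests.get(url, headers=headers, params=params, timeout=60)
            resp.raise_for_status()
            payload = resp.json()
            for record in payload["records"]:
                record.pop("attributes", None)        # drop REST metadata
                out.write(json.dumps(record) + "\n")
                written += 1
            # Salesforce returns nextRecordsUrl until all pages are consumed.
            next_url = payload.get("nextRecordsUrl")
            url = f"{INSTANCE_URL}{next_url}" if next_url else None
            params = None                             # only needed on first call
    return written
```

Dynamics 365 and HubSpot expose analogous REST endpoints (OData and the CRM API, respectively), so the same land-then-transform pattern applies per connector.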
Schema & Mapping
- Map CRM fields to RevAi Pro's standardized schema (50+ fields across industries); a simplified mapping sketch follows this list.
- Maintain schema consistency across SaaS, Banking, Insurance, and E-commerce use cases.
- Implement data transformation, validation, and enrichment logic.
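A minimal sketch of the mapping and validation step described above, using pandas and invented field names rather than RevAi Pro's real 50+ field schema:

```python
# Simplified sketch of CRM-field -> standardized-schema mapping with basic
# validation. Field names are invented for illustration only.
import pandas as pd

# Hypothetical mapping from Salesforce Opportunity fields to target fields.
FIELD_MAP = {
    "Id": "crm_record_id",
    "Name": "opportunity_name",
    "StageName": "pipeline_stage",
    "Amount": "deal_amount",
    "CloseDate": "expected_close_date",
}
REQUIRED = ["crm_record_id", "opportunity_name", "pipeline_stage"]

def standardize(raw: pd.DataFrame) -> pd.DataFrame:
    """Rename source columns, enforce types, and drop incomplete rows."""
    df = raw.rename(columns=FIELD_MAP)[list(FIELD_MAP.values())]
    df["deal_amount"] = pd.to_numeric(df["deal_amount"], errors="coerce")
    df["expected_close_date"] = pd.to_datetime(df["expected_close_date"], errors="coerce")

    missing = df[REQUIRED].isna().any(axis=1)
    if missing.any():
        # In a real pipeline these rows would be routed to a quarantine table.
        df = df[~missing]
    return df
```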
Data Governance & Security
- Implement multi-tenant isolation policies in Fabric (Purview, RBAC, field-level masking).
- Ensure PII compliance and GDPR/SOC 2 readiness.
- Build audit logs, lineage tracking, and monitoring dashboards.
Performance & Reliability
- Optimize pipeline performance (latency, refresh frequency, cost efficiency).
- Implement autoscaling, retry logic, and error handling in pipelines (illustrated in the sketch after this list).
- Work with DevOps to set up CI/CD for Fabric integrations.
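As a rough sketch of the retry and error-handling behavior listed above (one possible approach, not a prescribed implementation), a small helper can wrap a flaky extract call with exponential backoff:

```python
# Rough sketch of retry-with-exponential-backoff around a flaky extract call.
# Delays and exception types would be tuned per connector in practice.
import logging
import time

log = logging.getLogger("pipeline")

def with_retries(fn, max_attempts: int = 5, base_delay: float = 2.0):
    """Call fn(); on failure wait 2s, 4s, 8s, ... before retrying."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception as exc:                      # narrow this in real code
            if attempt == max_attempts:
                log.error("giving up after %d attempts: %s", attempt, exc)
                raise
            delay = base_delay * 2 ** (attempt - 1)
            log.warning("attempt %d failed (%s); retrying in %.0fs", attempt, exc, delay)
            time.sleep(delay)
```

In Data Factory or Fabric pipelines themselves, the equivalent behavior is usually configured declaratively through activity retry policies rather than in custom code.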
Collaboration
- Work with RevOps SMEs to validate business logic for CRM fields.
- Partner with AI/ML engineers to expose clean data to agents and GenAI models.
- Collaborate with frontend/backend developers to provide APIs for RevAi Pro modules.
Required Skills & Experience
- 3+ years in Data Engineering / Integration roles.
- Strong expertise in Microsoft Azure Fabric, including OneLake, Data Factory, Synapse pipelines, and Power Query.
- Hands-on experience with CRM APIs & data models: Salesforce, Dynamics 365, HubSpot.
- Strong SQL and Python skills for data transformations.
- Experience with ETL/ELT workflows, schema mapping, and multi-tenant SaaS data handling.
- Knowledge of data governance tools (Azure Purview, RBAC, PII controls).
- Strong grasp of cloud security & compliance (GDPR, SOC 2; HIPAA optional).
Preferred (Nice to Have)
- Prior experience building integrations for Revenue Operations, Sales, or CRM platforms.
- Knowledge of middleware (MuleSoft, Boomi, Workato, Azure Logic Apps).
- Familiarity with AI/ML data pipelines.
- Experience with multi-cloud integrations (AWS, GCP).
- Understanding of business RevOps metrics (pipeline, forecast, quota, comp plans).
Soft Skills
- Strong ownership and problem-solving ability.
- Ability to translate business needs (RevOps fields) into technical data pipelines.
- Collaborative mindset with cross-functional teams.
- Comfortable working in a fast-paced startup environment.
Data Integration Engineer
Posted today
Job Description
• Strong proficiency in Python and data libraries like Pandas.
• Experience with web frameworks like Django, FastAPI, or Flask.
• Hands-on experience with MongoDB or other NoSQL databases (a minimal sketch follows this list).
• Proficiency in working with RESTful APIs and JSON.
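To make the stack above concrete, here is a minimal, hypothetical FastAPI endpoint backed by MongoDB via PyMongo; the connection string, database, and collection names are placeholders, not part of the posting.

```python
# Minimal, hypothetical example of the stack above: a FastAPI endpoint that
# serves JSON documents stored in MongoDB. Connection string, database, and
# collection names are placeholders; _id is assumed to be a plain string.
from fastapi import FastAPI, HTTPException
from pymongo import MongoClient

app = FastAPI()
collection = MongoClient("mongodb://localhost:27017")["integration"]["records"]

@app.get("/records/{record_id}")
def get_record(record_id: str) -> dict:
    """Return a single integrated record as JSON."""
    doc = collection.find_one({"_id": record_id}, {"_id": 0})
    if doc is None:
        raise HTTPException(status_code=404, detail="record not found")
    return doc
```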
Data Integration Engineer
Posted today
Job Description
Interested candidates can DM or call me.
Position: Data Integration Engineer
Experience: 5-8 years
Overview
We are seeking a skilled Data Integration Engineer to lead the integration of client data from multiple source systems, including QuickBooks, Excel, CSV files, and other legacy platforms, into Microsoft Access or SQL Server. This role focuses on designing and automating data pipelines, ensuring data accuracy, consistency, and performance across systems.
Responsibilities
- Collaborate with Implementation and technical teams to gather data integration requirements.
- Design and implement automated data pipelines to extract, transform, and load (ETL) data into Access or SQL databases (a hedged sketch follows this list).
- Analyze and map source data to target schemas, ensuring alignment with business rules and data quality standards.
- Develop and document data mapping specifications, transformation logic, and validation procedures.
- Automate data extraction and transformation using tools such as SQL, Python, or ETL platforms.
- Ensure referential integrity and optimize the performance of integrated data systems.
- Validate integrated data against source systems to ensure completeness and accuracy.
- Support testing and troubleshooting during integration and post-deployment phases.
- Maintain documentation of integration processes, mappings, and automation scripts.
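For illustration only, one way to automate the CSV-to-SQL Server load described above uses pandas and SQLAlchemy; the file path, column names, table name, and connection string are placeholders, and a real QuickBooks export would need source-specific cleanup and the documented mapping spec.

```python
# Illustrative sketch: load a CSV export (e.g. from QuickBooks) into a SQL
# Server staging table using pandas + SQLAlchemy. Paths, column names, table
# name, and the connection string are placeholders.
import pandas as pd
from sqlalchemy import create_engine

CONN = "mssql+pyodbc://user:password@myserver/mydb?driver=ODBC+Driver+17+for+SQL+Server"

def load_invoices(csv_path: str = "exports/invoices.csv") -> int:
    df = pd.read_csv(csv_path, parse_dates=["InvoiceDate"])

    # Example transformation/validation before load: trim text keys and drop
    # rows missing the business key.
    df["CustomerRef"] = df["CustomerRef"].str.strip()
    df = df.dropna(subset=["InvoiceNumber"])

    engine = create_engine(CONN)
    df.to_sql("stg_invoices", engine, if_exists="append", index=False)
    return len(df)
```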
Required Skills & Qualifications
- Strong experience with Microsoft Access and/or SQL Server (queries, schema design, performance tuning).
- Proficiency in data transformation and automation using SQL, Excel, or scripting languages.
- Experience with ETL processes and data integration best practices.
- Ability to troubleshoot data issues and resolve discrepancies independently.
- Excellent documentation and communication skills.
- Experience with ERP systems is a plus.
- Strong data mapping skills and the ability to translate business requirements into technical specifications.
- Prior experience automating data workflows and building scalable integration solutions.
Primary Skills
SQL, Microsoft Access, ETL, ADF
Data Integration Engineer
Posted today
Job Description
Key Details:
- Location: Bangalore (Onsite)
- Type: Work from Office, Bangalore E-City
We are seeking a motivated Data Integration Engineer to join our engineering team. This individual will play a critical role in integrating and transforming large-scale data to power intelligent decision-making systems.
Key Responsibilities
- Design, build, and maintain data pipelines and APIs using Python.
- Integrate data from various sources, including third-party APIs and internal systems (a hedged sketch follows this list).
- Work with large, unstructured datasets and transform them into usable formats.
- Collaborate with cross-functional teams to define data requirements and deliver timely solutions.
- Leverage cloud-based services, especially AWS (EC2, S3), Snowflake / Databricks to scale data infrastructure.
- Ensure high performance and responsiveness of applications.
- Write clean, maintainable code with a focus on craftsmanship.
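As a hedged sketch of the responsibilities above (assumed endpoint, bucket, and fields, not the team's actual pipeline), a small Python job might pull a third-party API, normalize the payload with pandas, and stage it in S3:

```python
# Hedged sketch of the responsibilities above: pull a hypothetical third-party
# API, flatten the JSON with pandas, and stage the result in S3 as CSV.
# Endpoint, bucket, and field names are assumptions, not the actual pipeline.
import io

import boto3
import pandas as pd
import requests

API_URL = "https://api.example.com/v1/events"   # placeholder endpoint
BUCKET = "my-data-staging"                      # placeholder bucket

def stage_events() -> str:
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()

    # Flatten nested JSON into a tabular frame and drop fully empty columns.
    df = pd.json_normalize(resp.json())
    df = df.dropna(how="all", axis=1)

    buf = io.StringIO()
    df.to_csv(buf, index=False)

    key = "staging/events/events.csv"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())
    return key
```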
Required Skills & Experience
- Strong proficiency in Python and data libraries like Pandas.
- Experience with web frameworks like Django, FastAPI, or Flask.
- Hands-on experience with MongoDB or other NoSQL databases.
- Proficiency in working with RESTful APIs and JSON.
- Familiarity with AWS services: EC2, S3, Snowflake / Databricks.
- Solid understanding of data mining, data exploration, and troubleshooting data issues.
- Real-world experience with large-scale data systems in cloud environments.
- Ability to thrive in a fast-paced, high-growth, deadline-driven setting.
- Self-starter with a strong sense of ownership and a passion for problem-solving.
- Comfortable working with messy or unstructured data.
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science.
- Exposure to Big Data and Machine Learning technologies is a plus.
Interview Process for selected candidates
First Round: Conducted via Google Meet.
Second Round: Technical round, face to face.
Job Type: Full-time
Pay: ₹40,000.00 - ₹120,000.00 per month
Benefits:
- Health insurance
- Paid sick time
- Provident Fund
Ability to commute/relocate:
- Electronic City, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Location:
- Electronic City, Bengaluru, Karnataka (Required)
Work Location: In person
Data integration engineer
Posted today
Job Description
Role Overview
We are seeking a motivated Data Integration Engineer to join our engineering team. This individual will play a critical role in integrating and transforming large-scale data to power intelligent decision-making systems.
Key Responsibilities
- Design, build, and maintain data pipelines and APIs using Python.
- Integrate data from various sources including third-party APIs and internal systems.
- Work with large, unstructured datasets and transform them into usable formats.
- Collaborate with cross-functional teams to define data requirements and deliver timely solutions.
- Leverage cloud-based services, especially AWS (EC2, S3), Snowflake / Databricks to scale data infrastructure.
- Ensure high performance and responsiveness of applications.
- Write clean, maintainable code with a focus on craftsmanship.
Required Skills & Experience
- Strong proficiency in Python and data libraries like Pandas.
- Experience with web frameworks like Django, FastAPI, or Flask.
- Hands-on experience with MongoDB or other NoSQL databases.
- Proficiency in working with RESTful APIs and JSON.
- Familiarity with AWS services: EC2, S3, Snowflake / Databricks.
- Solid understanding of data mining, data exploration, and troubleshooting data issues.
- Real-world experience with large-scale data systems in cloud environments.
- Ability to thrive in a fast-paced, high-growth, deadline-driven setting.
- Self-starter with a strong sense of ownership and a passion for problem-solving.
- Comfortable working with messy or unstructured data.
Preferred Qualifications
- Bachelor's or Master's degree in Computer Science.
- Exposure to Big Data and Machine Learning technologies is a plus.
Data Integration Engineer
Posted today
Job Description
Description: SSIS Conversion to Ab Initio
Responsibilities for Ab Initio developer:
- Knowledge of databases and database concepts to support the design and development of data warehouse/data lake applications.
- Analyze business requirements and work with the business and the data modeler to support data flow from source to target destination.
- Follow release and change processes: distribution of software builds and releases to development, test, and production environments.
- Adhere to the project's SDLC process (Agile), participate in team discussions and scrum calls, and work collaboratively with internal and external team members.
- Develop ETL using Ab Initio against MS SQL Server, big data sources, and text/Excel files.
- Engage in the intake, release, change, incident, and problem management processes.
- Prioritize and drive all relevant support priorities, including incident, change, problem, knowledge, and engagement with projects.
- Develop and document a high-level conceptual data process design for review by the architect and data analysts, which will serve as the basis for writing ETL code and designing test plans.
- Thoroughly unit test ETL code to ensure error-free, efficient delivery.
- Analyze several aspects of code prior to release to ensure that it will run efficiently and can be supported in the production environment.
- Provide data modeling solutions.
- Work as an independent developer, seeking support from seniors in the team to ensure smooth delivery.
Qualifications for Ab Initio developer:
- Minimum 7+ years of professional experience in ETL (Ab Initio), RDBMS (MS SQL Server), and batch processing.
- Strong in-depth knowledge of RDBMS databases and data warehouse/data lake concepts.
- Proficient in Ab Initio, MS SQL Server, Hadoop/MongoDB, and other scripting to support ETL development.
- 4+ years of experience in data warehouse/data lake development using the Ab Initio ETL tool as well as the MS SQL database.
- Experience with scripting languages such as T-SQL, Hive/HQL, batch scripting, shell scripting, and other popular scripting.
- Experience working in complex development environments with large data volumes.
- Strong communication, team, and analytical skills.
- Ability to articulate advanced technical topics to both technical and non-technical staff.
Data Integration Engineer
Posted today
Job Description
Description:
Skillsets: