62 Data Integration jobs in Delhi
APM Data Integration Analyst
Posted today
Job Description
Location - Remote
Timings - 3pm – 11pm IST
APM Data Integration Analyst
This role ensures data integrity, integration, and governance across the APM ecosystem. The Data Integration Analyst will manage the completeness and accuracy of application metadata, working across Orbus and ServiceNow and integrating with Informatica and other systems to maintain a single source of truth. They will also help design and enforce data quality rules and cross-references, enabling reliable reporting and decision-making.
Expectations:
- Ensure application metadata in Orbus is consistent and accurate with respect to ServiceNow.
- Design and implement data governance rules (validation, de-duplication, attribute mapping, normalization).
- Partner with integration teams to establish data pipelines and synchronization (Informatica, EDW, ServiceNow, Orbus, etc.).
- Map application attributes across multiple systems, ensuring cross-referencing accuracy.
- Maintain data lineage and traceability for reporting and audits.
- Gather application usage data and metrics.
- Monitor data quality proactively and maintain dashboards that highlight gaps and issues.
- Document data models, attribute definitions, and mapping rules for transparency and onboarding.
Core Competencies:
- Highly detail-oriented with a passion for data quality.
- Proactive in identifying and resolving gaps and discrepancies.
- Able to balance technical accuracy with business usability.
- Comfortable in a complex, fragmented tool landscape.
- Problem Solving: Uses rigorous logic and methods to solve difficult problems with effective solutions; probes all fruitful sources for answers; can see hidden problems; is excellent at honest analysis; looks beyond the obvious and doesn't stop at the first answers.
Qualifications
- Strong hands-on Orbus experience (data modeling, meta model configuration, attribute management).
- Hands-on ServiceNow SAM/APM experience (application and contract data, integrations).
- Proven track record of Application Portfolio Management activities.
- Experience with data mapping tools and techniques.
- Experience with Informatica, EDW integration, and Power BI reporting preferred.
- Knowledge of data governance frameworks (DAMA, ISO 8000 preferred).
- Strong documentation and requirements gathering skills.
Education:
Bachelor’s degree preferred, or equivalent experience.
Data Integration & LLM Engineer
Posted 13 days ago
Job Description
About the Role
We are seeking a highly motivated Software Engineer with a strong foundation in Java (Spring Boot), data integration, and a growing expertise in Large Language Models (LLMs). This role is ideal for engineers who enjoy working at the intersection of scalable data systems and AI-driven applications, building robust pipelines while also exploring cutting-edge generative AI solutions.
Key Responsibilities
- Design and implement data integrations including APIs, SaaS connectors, and ETL/ELT pipelines to ensure reliable and scalable data flows.
- Build and maintain backend services and applications using Java (Spring Boot or equivalent frameworks).
- Develop Python-based workflows for AI/ML pipelines, experimentation, and automation scripting.
- Integrate and experiment with LLMs (OpenAI, Anthropic, LLaMA, Mistral, etc.) for use cases such as retrieval-augmented generation (RAG), summarization, and intelligent data insights.
- Implement vector search solutions using Pinecone, Weaviate, Milvus, or FAISS for LLM-backed applications.
- Collaborate with product, data, and ML teams to design end-to-end solutions that combine data engineering with AI capabilities.
- Ensure systems meet high standards of performance, scalability, security, and compliance.
Required Qualifications
- Strong programming experience in Java (Spring Boot or equivalent frameworks).
- Familiarity with Python, particularly for AI/ML workflows and scripting.
- Proven experience with data integrations: APIs, SaaS connectors, ETL/ELT pipelines.
- Exposure to LLMs (OpenAI, Anthropic, LLaMA, Mistral, etc.) and associated frameworks (LangChain, LlamaIndex, Hugging Face Transformers).
- Experience working with databases (SQL/NoSQL) and vector search technologies (Pinecone, Weaviate, Milvus, FAISS).
Preferred Skills
- Knowledge of cloud platforms (AWS, GCP, or Azure) for deploying scalable systems and ML workloads.
- Familiarity with containerization and orchestration (Docker, Kubernetes).
- Understanding of data governance, observability, and security best practices.
- Interest in generative AI advancements and a passion for building practical applications on top of them.
Data Integration & Modeling Specialist
Posted today
Job Description
Job Title: Data Integration & Modeling Specialist
Job Type: Contract
Location: Remote
Duration: 6 Months
Job Summary:
We are seeking a highly skilled Data Integration & Modeling Specialist with hands-on experience in developing common metamodels, defining integration specifications, and working with semantic web technologies and various data formats. The ideal candidate will bring deep technical expertise and a collaborative mindset to support enterprise-level data integration and standardization initiatives.
Key Responsibilities:
Develop common metamodels by integrating requirements across diverse systems and organizations.
Define integration specifications, establish data standards, and develop logical and physical data models.
Collaborate with stakeholders to align data architectures with organizational needs and industry best practices.
Implement and govern semantic data solutions using RDF and SPARQL.
Perform data transformations and scripting using TCL, Python, and Java.
Work with multiple data formats including FRL, VRL, HRL, XML, and JSON to support integration and processing pipelines.
Document technical specifications and provide guidance on data standards and modeling best practices.
Required Qualifications:
3+ years of experience (within the last 8 years) in developing common metamodels, preferably using NIEM standards.
3+ years of experience (within the last 8 years) in:
Defining integration specifications
Developing data models
Governing data standards
2+ years of recent experience with:
Tool Command Language (TCL)
Python
Java
2+ years of experience with:
Resource Description Framework (RDF)
SPARQL Query Language
2+ years of experience working with:
Fixed Record Layout (FRL)
Variable Record Layout (VRL)
Hierarchical Record Layout (HRL)
XML
JSON
Data Engineering Manager
Posted today
Job Description
About Us:
YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B. Every day, our proprietary technology analyzes billions of alternative data points to uncover actionable insights across sectors like software, AI, cloud, e-commerce, ridesharing, and payments.
Our data and research teams transform raw data into strategic intelligence, delivering accurate, timely, and deeply contextualized analysis that our customers—ranging from the world's top investment funds to Fortune 500 companies—depend on to drive high-stakes decisions. From sourcing and licensing novel datasets to rigorous analysis and expert narrative framing, our teams ensure clients get not just data, but clarity and confidence.
We operate globally with offices in the US (NYC, Austin, Miami, Mountain View), APAC (Hong Kong, Shanghai, Beijing, Guangzhou, Singapore), and India. Our award-winning, people-centric culture—recognized by Inc. as a Best Workplace for three consecutive years—emphasizes transparency, ownership, and continuous mastery.
What It's Like to Work at YipitData:
YipitData isn't a place for coasting—it's a launchpad for ambitious, impact-driven professionals. From day one, you'll take the lead on meaningful work, accelerate your growth, and gain exposure that shapes careers.
Why Top Talent Chooses YipitData:
- Ownership That Matters: You'll lead high-impact projects with real business outcomes
- Rapid Growth: We compress years of learning into months
- Merit Over Titles: Trust and responsibility are earned through execution, not tenure
- Velocity with Purpose: We move fast, support each other, and aim high—always with purpose and intention
If your ambition is matched by your work ethic—and you're hungry for a place where growth, impact, and ownership are the norm—YipitData might be the opportunity you've been waiting for.
This is a remote opportunity based in India.
- Standard IST working hours apply, except for 2-3 days per week when you will join meetings with the US and LatAm teams; on those days, work hours will be 2:30 - 10:30pm IST. (We allow flexibility on the following days to make up for the previous day's late schedule.)
Why You Should Apply NOW:
We're scaling fast and need a hands-on Data Engineering Manager who can both lead people and shape data architecture on our dynamic Data Engineering team. The ideal candidate has 3+ years of experience managing data engineers and 5+ years of hands-on work with PySpark (Python is a must), along with experience in Databricks, Snowflake, Apache Iceberg, Apache Flink, various orchestration tools, ETL pipelines, and data modeling.
As our Data Engineering Manager, you will own the data-orchestration strategy end-to-end. You'll lead and mentor a team of engineers while researching, planning, and institutionalizing best practices that boost our pipeline performance, reliability, and cost-efficiency. This is a hands-on leadership role for someone who thrives on deep technical challenges, enjoys rolling up their sleeves to debug or design, and can chart a clear, forward-looking roadmap for various data engineering projects.
As Our Data Engineer Manager, You Will:
- Report directly to the Director of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques.
- Hire, onboard, and develop a high-performing team—1-on-1s, growth plans, and performance reviews.
- Manage a team of 3-5 Data Engineers.
- Serve as the team's technical north star—review PRs, pair program, and set engineering standards.
- Architect and evolve our data platform (batch & streaming) for scale, cost, and reliability.
- Own the end-to-end vision and strategic roadmap for various projects.
- Create documentation, architecture diagrams, and other training materials.
- Translate product and analytics needs into a clear data engineering roadmap and OKRs.
You Are Likely To Succeed If:
- You hold a Bachelor's or Master's degree in Computer Science, STEM, or a related technical discipline.
- 7+ years in data engineering (or adjacent), including 2-3+ years formally managing 1-3 engineers.
- Experience in PySpark and Python is a must.
- Experience with Databricks, Snowflake, Apache Iceberg, Apache Flink, or Microsoft Fabric.
- Proven experience designing and operating large-scale orchestration and ETL/ELT pipelines.
- A track record of mentoring engineers, elevating team productivity, and hiring bar-raising talent.
- The ability to distill complex technical topics into crisp updates for non-technical partners.
- You are eager to constantly learn new technologies.
- You are a self-starter who enjoys working with both internal and external stakeholders.
- You have exceptional verbal and written communication skills.
- Nice to have: Experience with Airflow, Docker, or equivalent.
What We Offer:
Our compensation package includes comprehensive benefits, perks, and a competitive salary:
- We care about your personal life, and we mean it. We offer flexible work hours, flexible vacation, a generous 401K match, parental leave, team events, wellness budget, learning reimbursement, and more
- Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. See more on our high-impact, high-opportunity work environment above
- The final offer may be determined by a number of factors, including, but not limited to, the applicant's experience, knowledge, skills, abilities, as well as internal team benchmarks.
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer.