4,022 Research Data jobs in India
Clinical data management specialist
Job Description
Responsibilities:
- Define project specifications for Data Management services, including Protocol Conversion, Database Build, CRF Design, Data Review, and Data Reconciliation tools.
- Understand external data collection, its integration into the clinical trial, and the management and reconciliation processes required to ensure its accuracy and relevance.
- Execute data cleaning strategies to accelerate the time to achieve subject data cleanliness and ensure high-quality, timely deliverables.
- Perform holistic data review and trending analysis via reporting and elluminate analytics to proactively identify issues and risks and develop mitigation strategies.
- Utilize artificial intelligence (AI) and machine learning (ML) for anomaly and outlier detection to enhance the efficiency and quality of trial data.
- Monitor and interpret key performance indicators (KPIs), metrics, dashboards, Clinical Trial Operational Analytics (CTOA), and reports to provide actionable recommendations to study lead(s)/project manager.
- Perform query management.
- Define specifications and collaborate with the technical team on configuration of the centralized data management platform, elluminate Data Central, for data cleaning strategy and oversight activities.
- Prepare and maintain data management documentation (e.g., DMP, CCGs, Help Text, DVS) and update it throughout the trial lifecycle.
- Review and ensure the quality control of team-developed deliverables, covering eCRFs, study documents, program/report specifications, outputs, and elluminate Data Central with analytics modules.
- Actively evaluate and contribute to process enhancements to increase efficiency and effectiveness.
- Collaborate and work as a team to ensure deliverables are completed on time and with high quality.
- Ensure compliance with industry quality standards, regulations, guidelines, and procedures.
- Other duties as assigned.
Job No Longer Available
This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.
However, we have similar jobs available for you below.
Research Data Engineer
Posted 6 days ago
Job Description
Position Overview
Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data
Position Purpose
The Research Engineer – Data will play a pivotal role in advancing Tookitaki’s AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance Tookitaki’s competitive edge in fraud prevention, AML compliance, and data intelligence.
The role exists to bridge research and engineering by:
- Designing and executing experiments on large, complex datasets.
- Prototyping new data-driven algorithms for financial crime detection and compliance automation.
- Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
- Ensuring the robustness, fairness, and explainability of AI models within Tookitaki’s compliance platform.
Key Responsibilities
Applied Research & Prototyping
- Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model development.
- Build experimental frameworks to test hypotheses using real-world financial datasets.
- Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.
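As a minimal sketch of the kind of prototyping named above, the snippet below flags anomalous transactions with scikit-learn's IsolationForest (a library listed later under Technical Expertise). The features and values are synthetic, hypothetical placeholders, not the platform's real data model.

```python
# Illustrative anomaly-detection prototype on synthetic transaction features.
# Feature columns (amount, hour of day) are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 500 "normal" transactions: modest amounts, daytime hours
normal = np.column_stack([rng.normal(50, 10, 500), rng.normal(14, 3, 500)])
# Two gross outliers: large amounts at odd hours
outliers = np.array([[900.0, 3.0], [1200.0, 2.0]])
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)            # -1 = anomaly, 1 = normal
flagged = np.where(labels == -1)[0]  # row indices the model flags
```

In a real compliance workflow the flagged rows would feed a review queue rather than an automated decision, which is where the robustness and explainability requirements above come in.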
Data Engineering for Research
- Develop data ingestion, transformation, and exploration pipelines to support experimentation.
- Work with structured, semi-structured, and unstructured datasets at scale.
- Ensure reproducibility and traceability of experiments.
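Reproducibility and traceability, as called for above, amount to recording each run's configuration, seed, and input-data fingerprint so any result can be regenerated. The stdlib-only sketch below illustrates the idea under those assumptions; in practice a tracker such as MLflow (named under Tools below) would manage these records.

```python
# Minimal experiment-traceability sketch: log config, seed, and data hash
# with every run so identical inputs provably yield identical results.
import hashlib
import json
import random

def run_experiment(config: dict, data: list) -> dict:
    random.seed(config["seed"])  # fixed seed => deterministic "experiment"
    noisy = [x + random.gauss(0, config["noise"]) for x in data]
    return {
        "config": config,
        "data_sha256": hashlib.sha256(json.dumps(data).encode()).hexdigest(),
        "result_mean": sum(noisy) / len(noisy),
    }

# Two runs with the same config and data must produce the same record.
run1 = run_experiment({"seed": 7, "noise": 0.1}, [1.0, 2.0, 3.0])
run2 = run_experiment({"seed": 7, "noise": 0.1}, [1.0, 2.0, 3.0])
```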
Algorithm Evaluation & Optimization
- Evaluate research prototypes using statistical, ML, and domain-specific metrics.
- Optimize algorithms for accuracy, latency, and scalability.
- Conduct robustness, fairness, and bias evaluations on models.
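Evaluating a prototype against labeled outcomes, as the bullets above describe, can be sketched with standard precision/recall metrics. The labels below are toy values for illustration, not real compliance outcomes.

```python
# Toy evaluation of a prototype detector with standard ML metrics.
from sklearn.metrics import precision_score, recall_score

y_true = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]  # 1 = confirmed suspicious case
y_pred = [0, 1, 1, 1, 0, 0, 0, 0, 1, 0]  # cases the prototype flagged

precision = precision_score(y_true, y_pred)  # share of flags that were real
recall = recall_score(y_true, y_pred)        # share of real cases caught
```

In AML settings recall is often weighted heavily (a missed case is costly), while precision controls investigator workload, so a prototype is typically judged on both, not on accuracy alone.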
Collaboration & Integration
- Partner with data scientists to transition validated research outcomes into production-ready code.
- Work closely with product managers to align research priorities with business goals.
- Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.
Documentation & Knowledge Sharing
- Document experimental designs, results, and lessons learned.
- Share best practices across engineering and data science teams to accelerate innovation.
Qualifications and Skills
Education
- Required: Bachelor’s degree in Computer Science, Data Science, Applied Mathematics, or a related field.
- Preferred: Master’s or PhD in Machine Learning, Data Engineering, or a related research-intensive field.
Experience
- Minimum 4–7 years in data-centric engineering or applied research roles.
- Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
- Experience in financial services, compliance, or fraud detection is a strong plus.
Technical Expertise
- Programming: Proficiency in Scala, Java, or Python.
- Data Processing: Experience with Spark, Hadoop, and Flink.
- ML/Research Frameworks: Hands-on with TensorFlow, PyTorch, or Scikit-learn.
- Databases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch).
- Cloud Platforms: Experience with AWS (preferred) or GCP for research and data pipelines.
- Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
- Application Deployment: Strong experience with CI/CD practices and containerized deployments through Kubernetes, Docker, etc.
- Streaming Frameworks: Strong experience creating highly performant and scalable real-time streaming applications with Kafka at the core.
- Data Lakehouse: Experience with a modern data lakehouse platform/format such as Apache Hudi, Iceberg, or Paimon is a very strong plus.
Soft Skills
- Strong analytical and problem-solving abilities.
- Clear, concise communication skills for cross-functional collaboration.
- Adaptability in fast-paced, evolving environments.
- Curiosity-driven with a bias towards experimentation and iteration.
Key Competencies
- Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics.
- Collaboration: Works effectively with researchers, engineers, and business stakeholders.
- Technical Depth: Strong grasp of advanced algorithms and data engineering principles.
- Problem Solving: Dives deep into logs, metrics, and code to identify problems and opportunities for performance tuning and optimization.
- Ownership: Drives research projects from concept to prototype to production.
- Adaptability: Thrives in ambiguity and rapidly changing priorities.
- Preferred: Certifications in AWS Big Data, Apache Spark, or similar technologies.
- Preferred: Experience in compliance or financial services domains.
Success Metrics
- Research-to-Production Conversion: % of validated research projects integrated into Tookitaki’s platform.
- Model Performance Gains: Documented improvements in accuracy, speed, or robustness from research initiatives.
- Efficiency of Research Pipelines: Reduced time from ideation to prototype completion.
- Collaboration Impact: Positive feedback from cross-functional teams on research integration.
Benefits
- Competitive Salary: Aligned with industry standards and experience.
- Professional Development: Access to training in big data, cloud computing, and data integration tools.
- Comprehensive Benefits: Health insurance and flexible working options.
- Growth Opportunities: Career progression within Tookitaki’s rapidly expanding Services Delivery team.
Introducing Tookitaki: The Trust Layer for Financial Services
Tookitaki is transforming financial services by building a robust trust layer that focuses on two crucial pillars: preventing fraud to build consumer trust and combating money laundering to secure institutional trust. Our trust layer leverages collaborative intelligence and a federated AI approach, delivering powerful, AI-driven solutions for real-time fraud detection and AML (Anti-Money Laundering) compliance.
How We Build Trust: Our Unique Value Propositions
- AFC Ecosystem – Community-Driven Financial Crime Protection: The Anti-Financial Crime (AFC) Ecosystem is a community-driven platform that continuously updates financial crime patterns with real-time intelligence from industry experts. This enables our clients to stay ahead of the latest money laundering and fraud tactics. Leading digital banks and payment platforms rely on Tookitaki to protect them against evolving financial crime threats. By joining this ecosystem, institutions benefit from the collective intelligence of top industry players, ensuring robust protection.
- FinCense – End-to-End Compliance Platform: Our FinCense platform is a comprehensive compliance solution that covers all aspects of AML and fraud prevention—from name screening and customer due diligence (CDD) to transaction monitoring and fraud detection. This ensures financial institutions not only meet regulatory requirements but also mitigate risks of non-compliance, providing the peace of mind they need as they scale.
Industry Recognition and Global Impact
Tookitaki’s innovative approach has been recognized by some of the leading financial entities in Asia. We have also earned accolades from key industry bodies such as FATF and received prestigious awards like the World Economic Forum Technology Pioneer, Forbes Asia 100 to Watch, and Chartis RiskTech100.
Serving some of the world’s most prominent banks and fintech companies, Tookitaki is continuously redefining the standards of financial crime detection and prevention, creating a safer and more trustworthy financial ecosystem for everyone.
Research Data Engineer
Posted 3 days ago
Job Viewed
Job Description
Position Overview
Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data
Position Purpose
The Research Engineer – Data will play a pivotal role in advancing TookiTaki’s AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance TookiTaki’s competitive edge in fraud prevention, AML compliance, and data intelligence.
The role exists to bridge research and engineering by
- Designing and executing experiments on large, complex datasets
- Prototyping new data-driven algorithms for financial crime detection and compliance automation.
- Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
- Ensuring the robustness, fairness, and explainability of AI models within TookiTaki’s compliance platform.
Key Responsibilities.
Applied Research & Prototyping.
- Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model developments.
- Build experimental frameworks to test hypotheses using real-world financial datase
- Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.
Data Engineering for Research
- Develop data ingestion, transformation, and exploration pipelines to support experimentation.
- Work with structured, semi-structured, and unstructured datasets at scale.
- Ensure reproducibility and traceability of experiments
Algorithm Evaluation & Optimization.
- Evaluate research prototypes using statistical, ML, and domain-specific metrics.
- Optimize algorithms for accuracy, latency, and scalability.
- Conduct robustness, fairness, and bias evaluations on mode.
Collaboration & Integration
- Partner with data scientists to transition validated research outcomes into production-ready to code.
- Work closely with product managers to align research priorities with business goals
- Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.
Documentation & Knowledge Sharing
- Document experimental designs, results, and lessons learned
- Share best practices across engineering and data science teams to accelerate innovation
Qualifications and Skills
EducationRequired:
- Bachelor’s degree in Computer Science, Data Science, Applied Mathematics, or related field
- Preferred: Master’s or PhD in Machine Learning, Data Engineering, or a related research intensive field
Experience
- Minimum 4–7 years in data-centric engineering or applied research roles.
- Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
- Experience in financial services, compliance, or fraud detection is a strong plus.
Technical Expertise.
- Programming: Proficiency in Scala, Java, or Python
- Data Processing: Experience with Spark, Hadoop, and Flink
- ML/Research Frameworks: Hands-on with TensorFlow, PyTorch, or Scikit-learn
- Databases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch).
- Cloud Platforms: Experience with AWS (preferred) or GCP for research and data pipelines.
- Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
- Application Deployment: Strong experience with CI/CD practices, Containerized Deployments through Kubernetes, Docker Etc.
- Streaming frameworks: Strong experience in creating highly performant and scalable real time streaming applications with Kafka at the core
- Data Lakehouse: Experience with one of the modern data lakehouse platforms/formats such as Apache Hudi, Iceberg, Paimon is a very strong Plus.
Soft Skils
- SkillsStrong analytical and problem-solving abilities.
- Clear concise communication skills for cross-functional collaboration.
- Adaptability in fast-paced, evolving environments.
- Curiosity-driven with a bias towards experimentation and iteration.
Key Competencies
- Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics.
- Collaboration: Works effectively with researchers, engineers, and business stakeholders.
- Technical Depth: Strong grasp of advanced algorithms and data engineering principles.
- Problem Solving: Dives deep into the logs, metrics and code and identifying problems opportunities for performance tuning and optimization.
- Ownership: Drives research projects from concept to prototype to production.
- Adaptability: Thrives in ambiguity and rapidly changing priorities.
- Preferred Certifications in AWS Big Data, Apache Spark, or similar technologies.
- Experience in compliance or financial services domains.
Success Metrics
- Research to Production Conversion: % of validated research projects integrated into TookiTaki’s platform
- Model Performance Gains: Documented improvements in accuracy, speed, or robustness from research initiatives.
- Efficiency of Research Pipelines: Reduced time from ideation to prototype completion.
- Collaboration Impact: Positive feedback from cross-functional teams on research integration.
Benefits
- Competitive Salary: Aligned with industry standards and experience.
- Professional Development: Access to training in big data, cloud computing, and data integration tools.
- Comprehensive Benefits: Health insurance and flexible working options.
- Growth Opportunities: Career progression within Tookitaki’s rapidly expanding Services Delivery team.
Introducing:
TookitakiTookitaki: The Trust Layer for Financial
Services Tookitaki is transforming financial services by building a robust trust layer that focuses on two crucial pillars: preventing fraud to build consumer trust and combating money laundering to secure institutional trust. Our trust layer leverages collaborative intelligence and a federated AI approach, delivering powerful, AI-driven solutions for real-time fraud detection and AML (Anti-Money Laundering)
compliance.How We Build Trust: Our Unique Value Propositions
- AFC Ecosystem – Community-Driven Financial Crime Protection
- The Anti-Financial Crime (AFC) Ecosystem is a community-driven platform that continuously updates financial crime patterns with real-time intelligence from industry experts. This enables our clients to stay ahead of the latest money laundering and fraud tactics. Leading digital banks and payment platforms rely on Tookitaki to protect them against evolving financial crime threats. By joining this ecosystem, institutions benefit from the collective intelligence of top industry players, ensuring robust
- protection.FinCense – End-to-End Compliance.
- PlatformOur FinCense platform is a comprehensive compliance solution that covers all aspects of AML and fraud prevention—from name screening and customer due diligence (CDD) to transaction monitoring and fraud detection. This ensures financial institutions not only meet regulatory requirements but also mitigate risks of non-compliance, providing the peace of mind they need as they scale.
Industry Recognition and Global Impact
Tookitaki’s innovative approach has been recognized by some of the leading financial entities in Asia. We have also earned accolades from key industry bodies such as FATF and received prestigious awards like the World Economic Forum Technology Pioneer, Forbes Asia 100 to Watch, and Chartis
RiskTech100.Serving some of the world’s most prominent banks and fintech companies, Tookitaki is continuously redefining the standards of financial crime detection and prevention, creating a safer and more trustworthy financial ecosystem for everyone.
Research Data Engineer
Posted 6 days ago
Job Viewed
Job Description
Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data
Position Purpos e
The Research Engineer – Data will play a pivotal role in advancing TookiTaki’s AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance TookiTaki’s competitive edge in fraud prevention, AML compliance, and data intelligence .
The role exists to bridge research and engineering by
Designing and executing experiments on large, complex datasets
Prototyping new data-driven algorithms for financial crime detection and compliance automation.
Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
Ensuring the robustness, fairness, and explainability of AI models within TookiTaki’s compliance platfor m.
Key Responsibilities.
Applied Research & Prototyping.
Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model developments.
Build experimental frameworks to test hypotheses using real-world financial datase
Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.
Data Engineering for Research
Develop data ingestion, transformation, and exploration pipelines to support experimentation.
Work with structured, semi-structured, and unstructured datasets at scale.
Ensure reproducibility and traceability of experiments
Algorithm Evaluation & Optimization.
Evaluate research prototypes using statistical, ML, and domain-specific metrics.
Optimize algorithms for accuracy, latency, and scalability.
Conduct robustness, fairness, and bias evaluations on mode.
Collaboration & Integration
Partner with data scientists to transition validated research outcomes into production-ready to code.
Work closely with product managers to align research priorities with business goals
Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.
Documentation & Knowledge Sharing
Document experimental designs, results, and lessons learned
Share best practices across engineering and data science teams to accelerate innovation
Qualifications and Skills
EducationRequired:
Bachelor’s degree in Computer Science, Data Science, Applied Mathematics, or related field
Preferred: Master’s or PhD in Machine Learning, Data Engineering, or a related research intensive field
Experience
Minimum 4–7 years in data-centric engineering or applied research roles.
Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
Experience in financial services, compliance, or fraud detection is a strong plus.
Technical Expertise.
Progra mming: Proficiency in Scala, Java, or Python
Data Proce ssing: Experience with Spark, Hadoop, and Flink
ML/Research Frame works: Hands-on with TensorFlow, PyTorch, or Scikit-learn
Data bases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch).
Cloud Plat forms: Experience with AWS (preferred) or GCP for research and data pipelines.
Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
Application Deploy ment: Strong experience with CI/CD practices, Containerized Deployments through Kubernetes, Docker Etc.
Streaming framew orks: Strong experience in creating highly performant and scalable real time streaming applications with Kafka at the core
Data Lakeh ouse: Experience with one of the modern data lakehouse platforms/formats such as Apache Hudi, Iceberg, Paimon is a very strong Plus.
Soft Skils
SkillsStrong analytical and problem-solving abilities.
Clear concise communication skills for cross-functional collaboration.
Adaptability in fast-paced, evolving environments.
Curiosity-driven with a bias towards experimentation and iteration.
Key Competencies
Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics.
Collab oration: Works effectively with researchers, engineers, and business stakeholders.
Technical Depth: Strong grasp of advanced algorithms and data engineering principles.
Problem Solving: Dives deep into the logs, metrics and code and identifying problems opportunities for performance tuning and optimization.
Own ership: Drives research projects from concept to prototype to production.
Adapta bility: Thrives in ambiguity and rapidly changing priorities.
Preferred Certifications in AWS Big Data, Apache Spark, or similar technologies.
Experience in compliance or financial services domains.
Success Metrics
Research to Production Co nversion: % of validated research projects integrated into TookiTaki’s platform
Model Performan ce Gains: Documented improvements in accuracy, speed, or robustness from research initiatives.
Efficiency of Research P ipelines: Reduced time from ideation to prototype completion.
Collaboratio n Impact: Positive feedback from cross-functional teams on research integration.
Benefits
Competiti ve Salary: Aligned with industry standards and experience.
Professional De velopment: Access to training in big data, cloud computing, and data integration tools.
Comprehensive Benefits: Health insurance and flexible working options.
Growth Oppo rtunities: Career progression within Tookitaki’s rapidly expanding Services Deliv e ry team.
Introducing:
TookitakiTookitaki: The Trust Layer for Financial
Services Tookitaki is transforming financial services by building a robust trust layer that focuses on two crucial pillars: preventing fraud to build consumer trust and combating money laundering to secure institutional trust. Our trust layer leverages collaborative intelligence and a federated AI approach, delivering powerful, AI-driven solutions for real-time fraud detection and AML (Anti-Money Laundering)
compliance.How We Build Trust: Our Unique Value Propositions
AFC Ecosystem – Community-Driven Financial Crime Protection
The Anti-Financial Crime (AFC) Ecosystem is a community-driven platform that continuously updates financial crime patterns with real-time intelligence from industry experts. This enables our clients to stay ahead of the latest money laundering and fraud tactics. Leading digital banks and payment platforms rely on Tookitaki to protect them against evolving financial crime threats. By joining this ecosystem, institutions benefit from the collective intelligence of top industry players, ensuring robust
protection.FinCense – End-to-End Compliance.
PlatformOur FinCense platform is a comprehensive compliance solution that covers all aspects of AML and fraud prevention—from name screening and customer due diligence (CDD) to transaction monitoring and fraud detection. This ensures financial institutions not only meet regulatory requirements but also mitigate risks of non-compliance, providing the peace of mind they need as they scale.
Industry Recognition and Global Impact
Tookitaki’s innovative approach has been recognized by some of the leading financial entities in Asia. We have also earned accolades from key industry bodies such as FATF and received prestigious awards like the World Economic Forum Technology Pioneer, Forbes Asia 100 to Watch, and Chartis
RiskTech100.Serving some of the world’s most prominent banks and fintech companies, Tookitaki is continuously redefining the standards of financial crime detection and prevention, creating a safer and more trustworthy financial ecosystem for everyone.
Research Data Engineer
Posted today
Job Viewed
Job Description
Position Overview
Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data
Position Purpos e
The Research Engineer – Data will play a pivotal role in advancing TookiTaki’s AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance TookiTaki’s competitive edge in fraud prevention, AML compliance, and data intelligence.
The role exists to bridge research and engineering by
- Designing and executing experiments on large, complex datasets
- Prototyping new data-driven algorithms for financial crime detection and compliance automation.
- Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
- Ensuring the robustness, fairness, and explainability of AI models within TookiTaki’s compliance platform.
Key Responsibilities.
Applied Research & Prototyping.
- Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model developments.
- Build experimental frameworks to test hypotheses using real-world financial datase
- Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.
Data Engineering for Research
- Develop data ingestion, transformation, and exploration pipelines to support experimentation.
- Work with structured, semi-structured, and unstructured datasets at scale.
- Ensure reproducibility and traceability of experiments
Algorithm Evaluation & Optimization.
- Evaluate research prototypes using statistical, ML, and domain-specific metrics.
- Optimize algorithms for accuracy, latency, and scalability.
- Conduct robustness, fairness, and bias evaluations on mode.
Collaboration & Integration
- Partner with data scientists to transition validated research outcomes into production-ready to code.
- Work closely with product managers to align research priorities with business goals
- Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.
Documentation & Knowledge Sharing
- Document experimental designs, results, and lessons learned
- Share best practices across engineering and data science teams to accelerate innovation
Qualifications and Skills
EducationRequired:
- Bachelor’s degree in Computer Science, Data Science, Applied Mathematics, or related field
- Preferred: Master’s or PhD in Machine Learning, Data Engineering, or a related research intensive field
Experience
- Minimum 4–7 years in data-centric engineering or applied research roles.
- Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
- Experience in financial services, compliance, or fraud detection is a strong plus.
Technical Expertise.
- Progra mming: Proficiency in Scala, Java, or Python
- Data Proce ssing: Experience with Spark, Hadoop, and Flink
- ML/Research Frame works: Hands-on with TensorFlow, PyTorch, or Scikit-learn
- Data bases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch).
- Cloud Plat forms: Experience with AWS (preferred) or GCP for research and data pipelines.
- Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
- Application Deploy ment: Strong experience with CI/CD practices, Containerized Deployments through Kubernetes, Docker Etc.
- Streaming framew orks: Strong experience in creating highly performant and scalable real time streaming applications with Kafka at the core
- Data Lakeh ouse: Experience with one of the modern data lakehouse platforms/formats such as Apache Hudi, Iceberg, Paimon is a very strong Plus.
Soft Skils
- SkillsStrong analytical and problem-solving abilities.
- Clear concise communication skills for cross-functional collaboration.
- Adaptability in fast-paced, evolving environments.
- Curiosity-driven with a bias towards experimentation and iteration.
Research Data Engineer
Posted 5 days ago
Job Description
Position Overview
Job Title: Software Development Engineer 2
Department: Technology
Location: Bangalore, India
Reporting To: Senior Research Manager - Data
Position Purpose
The Research Engineer – Data will play a pivotal role in advancing TookiTaki’s AI-driven compliance and financial crime prevention platforms through applied research, experimentation, and data innovation. This role is ideal for professionals who thrive at the intersection of research and engineering, turning cutting-edge data science concepts into production-ready capabilities that enhance TookiTaki’s competitive edge in fraud prevention, AML compliance, and data intelligence.
The role exists to bridge research and engineering by:
- Designing and executing experiments on large, complex datasets.
- Prototyping new data-driven algorithms for financial crime detection and compliance automation.
- Collaborating across product, data science, and engineering teams to transition research outcomes into scalable, real-world solutions.
- Ensuring the robustness, fairness, and explainability of AI models within TookiTaki’s compliance platform.
Key Responsibilities
Applied Research & Prototyping
- Conduct literature reviews and competitive analysis to identify innovative approaches for data processing, analytics, and model development.
- Build experimental frameworks to test hypotheses using real-world financial datasets.
- Prototype algorithms in areas such as anomaly detection, graph-based analytics, and natural language processing for compliance workflows.
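As an illustration of the anomaly-detection prototyping described above (a generic sketch, not Tookitaki's actual method), a minimal robust z-score outlier check over hypothetical transaction amounts:

```python
import numpy as np

# Hypothetical transaction amounts: routine values plus one injected anomaly.
rng = np.random.default_rng(seed=42)
amounts = rng.normal(loc=100.0, scale=15.0, size=200)
amounts[0] = 10_000.0  # injected outlier

# Robust z-score built on median and MAD, so the outlier itself
# does not distort the baseline it is measured against.
median = np.median(amounts)
mad = np.median(np.abs(amounts - median))
robust_z = 0.6745 * (amounts - median) / mad

# Indices whose robust z-score exceeds the conventional 3.5 cutoff.
flagged = np.flatnonzero(np.abs(robust_z) > 3.5)
print(flagged.tolist())  # index 0 (the injected outlier) should appear here
```

Heavier ML approaches such as isolation forests or autoencoders follow the same shape: fit a baseline on the bulk of the data, then flag points that deviate from it.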
Data Engineering for Research
- Develop data ingestion, transformation, and exploration pipelines to support experimentation.
- Work with structured, semi-structured, and unstructured datasets at scale.
- Ensure reproducibility and traceability of experiments.
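Reproducibility of experiments often starts with deterministic run identity. A dependency-free sketch (the `run_id` helper and config fields are hypothetical, not any particular tracker's API):

```python
import hashlib
import json

def run_id(config: dict) -> str:
    """Derive a deterministic run ID from the experiment config, so the
    same config always maps to the same ID across machines and reruns."""
    # sort_keys makes the ID independent of dict key order.
    blob = json.dumps(config, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:12]

config = {"model": "isolation_forest", "contamination": 0.01, "seed": 42}
print(run_id(config))  # stable across key order, machines, and reruns
```

Dedicated trackers such as MLflow or Weights & Biases layer metric logging and artifact storage on top of this basic idea.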
Algorithm Evaluation & Optimization
- Evaluate research prototypes using statistical, ML, and domain-specific metrics.
- Optimize algorithms for accuracy, latency, and scalability.
- Conduct robustness, fairness, and bias evaluations on models.
Collaboration & Integration
- Partner with data scientists to transition validated research outcomes into production-ready code.
- Work closely with product managers to align research priorities with business goals.
- Collaborate with cloud engineering teams to deploy research pipelines in hybrid environments.
Documentation & Knowledge Sharing
- Document experimental designs, results, and lessons learned.
- Share best practices across engineering and data science teams to accelerate innovation.
Qualifications and Skills
Education
Required:
- Bachelor’s degree in Computer Science, Data Science, Applied Mathematics, or a related field
- Preferred: Master’s or PhD in Machine Learning, Data Engineering, or a related research-intensive field
Experience
- Minimum 4–7 years in data-centric engineering or applied research roles.
- Proven track record of developing and validating algorithms for large-scale data processing or machine learning applications.
- Experience in financial services, compliance, or fraud detection is a strong plus.
Technical Expertise
- Programming: Proficiency in Scala, Java, or Python.
- Data Processing: Experience with Spark, Hadoop, and Flink.
- ML/Research Frameworks: Hands-on with TensorFlow, PyTorch, or Scikit-learn.
- Databases: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, ElasticSearch).
- Cloud Platforms: Experience with AWS (preferred) or GCP for research and data pipelines.
- Tools: Familiarity with experiment tracking tools like MLflow or Weights & Biases.
- Application Deployment: Strong experience with CI/CD practices and containerized deployments through Kubernetes, Docker, etc.
- Streaming Frameworks: Strong experience in creating highly performant and scalable real-time streaming applications with Kafka at the core.
- Data Lakehouse: Experience with a modern data lakehouse platform/format such as Apache Hudi, Iceberg, or Paimon is a very strong plus.
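The streaming expertise above centers on Kafka; the core aggregation pattern behind many real-time monitoring applications can be sketched without a broker as a tumbling-window count over hypothetical (timestamp, account) events:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, account) events into fixed-size windows and
    count events per account per window -- the core of many real-time
    transaction-monitoring aggregations."""
    counts = defaultdict(int)
    for ts, account in events:
        # Align each timestamp to the start of its window.
        window = ts - (ts % window_secs)
        counts[(window, account)] += 1
    return dict(counts)

# Hypothetical events: timestamps in seconds, account identifiers.
events = [(5, "acct1"), (30, "acct1"), (70, "acct1"), (65, "acct2")]
print(tumbling_window_counts(events))
# {(0, 'acct1'): 2, (60, 'acct1'): 1, (60, 'acct2'): 1}
```

In a Kafka-based system the same logic runs continuously over a consumed topic, with the framework handling partitioning, state, and late-arriving events.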
Soft Skills
- Strong analytical and problem-solving abilities.
- Clear, concise communication skills for cross-functional collaboration.
- Adaptability in fast-paced, evolving environments.
- Curiosity-driven with a bias towards experimentation and iteration.
Key Competencies
- Innovation Mindset: Ability to explore and test novel approaches that push boundaries in data analytics.
- Collaboration: Works effectively with researchers, engineers, and business stakeholders.
- Technical Depth: Strong grasp of advanced algorithms and data engineering principles.
- Problem Solving: Dives deep into logs, metrics, and code to identify problems and opportunities for performance tuning and optimization.
- Ownership: Drives research projects from concept to prototype to production.
- Adaptability: Thrives in ambiguity and rapidly changing priorities.
- Preferred: certifications in AWS Big Data, Apache Spark, or similar technologies.
- Experience in compliance or financial services domains.
Success Metrics
- Research-to-Production Conversion: % of validated research projects integrated into TookiTaki’s platform.
- Model Performance Gains: Documented improvements in accuracy, speed, or robustness from research initiatives.
- Efficiency of Research Pipelines: Reduced time from ideation to prototype completion.
- Collaboration Impact: Positive feedback from cross-functional teams on research integration.
Benefits
- Competitive Salary: Aligned with industry standards and experience.
- Professional Development: Access to training in big data, cloud computing, and data integration tools.
- Comprehensive Benefits: Health insurance and flexible working options.
- Growth Opportunities: Career progression within Tookitaki’s rapidly expanding Services Delivery team.
Introducing Tookitaki: The Trust Layer for Financial Services
Tookitaki is transforming financial services by building a robust trust layer that focuses on two crucial pillars: preventing fraud to build consumer trust and combating money laundering to secure institutional trust. Our trust layer leverages collaborative intelligence and a federated AI approach, delivering powerful, AI-driven solutions for real-time fraud detection and AML (Anti-Money Laundering) compliance.
How We Build Trust: Our Unique Value Propositions
- AFC Ecosystem – Community-Driven Financial Crime Protection
- The Anti-Financial Crime (AFC) Ecosystem is a community-driven platform that continuously updates financial crime patterns with real-time intelligence from industry experts. This enables our clients to stay ahead of the latest money laundering and fraud tactics. Leading digital banks and payment platforms rely on Tookitaki to protect them against evolving financial crime threats. By joining this ecosystem, institutions benefit from the collective intelligence of top industry players, ensuring robust protection.
- FinCense – End-to-End Compliance Platform
- Our FinCense platform is a comprehensive compliance solution that covers all aspects of AML and fraud prevention—from name screening and customer due diligence (CDD) to transaction monitoring and fraud detection. This ensures financial institutions not only meet regulatory requirements but also mitigate risks of non-compliance, providing the peace of mind they need as they scale.
Industry Recognition and Global Impact
Tookitaki’s innovative approach has been recognized by some of the leading financial entities in Asia. We have also earned accolades from key industry bodies such as FATF and received prestigious awards like the World Economic Forum Technology Pioneer, Forbes Asia 100 to Watch, and Chartis RiskTech100.
Serving some of the world’s most prominent banks and fintech companies, Tookitaki is continuously redefining the standards of financial crime detection and prevention, creating a safer and more trustworthy financial ecosystem for everyone.
Research Data Audit
Posted today
Job Description
**Primary role**
- Audit/cross-comparison of data sets from various data providers for completeness and accuracy (data sets include company fundamentals such as annual/interim financial statements, stock price/index data, governance-related data, etc.)
- Liaising with the technical teams of various data providers to report observations found during the audit and to ensure their prompt resolution
- Contributing to the development of data audit techniques that find outliers/errors in large data sets
- Suggesting the most appropriate database fields for developing various financial/non-financial ratios
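The cross-comparison described above can be sketched as a field-by-field check with a relative tolerance. The helper, field names, and values here are hypothetical, not the team's actual tooling:

```python
def audit_datasets(provider_a: dict, provider_b: dict, tolerance=0.01):
    """Cross-compare two providers' values for the same fields and
    report mismatches beyond a relative tolerance, plus missing fields."""
    issues = []
    for field, a_val in provider_a.items():
        if field not in provider_b:
            issues.append((field, "missing in provider B"))
        elif abs(a_val - provider_b[field]) > tolerance * abs(a_val):
            issues.append((field, "value mismatch"))
    return issues

# Hypothetical fundamentals from two providers for the same company.
a = {"revenue": 1200.0, "net_income": 150.0, "total_assets": 5000.0}
b = {"revenue": 1201.0, "net_income": 190.0}
print(audit_datasets(a, b))
# [('net_income', 'value mismatch'), ('total_assets', 'missing in provider B')]
```

Real audits add per-field tolerances (price data tolerates less drift than estimates) and a symmetric pass to catch fields present only in provider B.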
Job Overview (8082)
**Experience**
0 Month(s).
**City**
Surat.
**Qualification**
M.COM
**Area of Expertise**
Accounting Finance
**Preferred Gender**
Male
**Function**
INVESTMENT
**Audio / Video Profile**
NA
Senior Team Leader, Research & Data Analysis
Posted today
Job Description
Quantitative Research Data Analysis and Model
Posted today
Job Description
Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity for you to work in our team to partner with the Business to provide a comprehensive view.
As a Quantitative Research Production Analyst within our IBQR Data and Production team, you will have the unique opportunity to gain exposure in areas such as Credit Risk, Basel regulations, Allowance determination, and stress forecasting. We work across the Wholesale bank and closely collaborate with firm-wide partners including Quantitative Research, Finance, Model Risk & Development, Technology, and the Regulatory Capital Management Office. This role provides an exciting opportunity to contribute to our dynamic team and grow your career.
Be part of a team focused on delivering data processing and forecasting results production for the wholesale loans and commitments portfolio to meet the FRB CCAR requirements for Y-14A stress testing, IFRS-9 Expected Credit Loss, and US-GAAP CECL.
**Job responsibilities**
- Execute and deliver the quarterly stress forecasting results with focus on data needs
- Perform portfolio analysis, data segmentation, data transformation and database interaction to prepare various datasets
- Work in an Agile framework to write business requirements in the form of JIRA epics & user stories to develop data and system requirements for credit risk modelling platform
- Define data models, metadata and data dictionary that will enable data analysis and analytical explorations
- Understand the implementation of the forecasting architecture, systems and dataflow
- Identify issues and streamline data flow from acquisition to results production
- Support the strategic build-out of stress testing workflow/dataflow for future initiatives
- Liaise with technology partners and LOBs to improve and enhance the forecasting process
- Help to plan and perform for all production/testing needs
- Perform control and reconciliation against deliveries in addition to creating/enhancing formal governance
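A minimal sketch of the segmentation and database-interaction work described above, using Python's built-in sqlite3 with a hypothetical loans table (the schema and figures are illustrative only):

```python
import sqlite3

# Hypothetical wholesale-loan records: (loan_id, rating, exposure).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (loan_id TEXT, rating TEXT, exposure REAL)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("L1", "AAA", 100.0), ("L2", "BB", 250.0), ("L3", "BB", 50.0)],
)

# Segment the portfolio by rating and total the exposure per segment --
# the kind of dataset preparation that feeds a forecasting run.
rows = conn.execute(
    "SELECT rating, SUM(exposure) FROM loans GROUP BY rating ORDER BY rating"
).fetchall()
print(rows)  # [('AAA', 100.0), ('BB', 300.0)]
```

Production datasets would live in an enterprise database rather than an in-memory SQLite file, but the SQL segmentation pattern carries over directly.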
**Required qualifications, capabilities, and skills**
- Minimum 3 years of experience working in Wholesale Risk Management, Corporate Finance, Data Analytics
- Familiarity with the software development lifecycle
- Data analysis and data manipulation skills using SQL, Python, and MS Excel are required
- Coding knowledge and experience in Python/SQL is required
- Understanding of big-data analytics and strategic data synthesis
- Ability to organize work and solve problems independently and excel in a high-pressure, deadline-oriented environment
- Excellent written and verbal communication skills, ability to engage and work with internal and external stakeholders
- Tableau and data visualization familiarity
- Familiarity with Traditional Credit Products (Loans, Letter of Credit, Stand by Letter of Credit, Syndications)
- Experience with wholesale loans and commitments product and systems, as well as its data flows
- BA/BS degree required; Master's degree or CFA
**ABOUT US**
JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
**ABOUT THE TEAM**
As part of Risk Management and Compliance, you are at the center of keeping JPMorgan Chase strong and resilient. You help the firm grow its business in a responsible way by anticipating new and emerging risks, and using your expert judgement to solve real-world challenges that impact our company, customers and communities. Our culture in Risk Management and Compliance is all about thinking outside the box, challenging the status quo and striving to be best-in-class. We also take great pride in our commitment to operating with integrity and discipline in all that we do. If you are a team player, are solutions-oriented and have an appetite for learning, you’ll
Healthcare Research & Data Analyst
Posted today
Job Description
About Clarivate
Who are you?
What will you do?
What do you know?
Requirements:
Skills:
Education:
Preferred (Good to have) skills:
Work Mode: Hybrid, Monday to Friday 12:00 pm to 8:00 pm
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
Research & Data Entry Executive
Posted today
Job Description
**Responsibilities**:
- Browse the internet to quickly look up specific information as required by the Client
- Verifies and logs receipt of data as per client's guidelines/ instructions
- Performs high-volume data entry using word processing, spreadsheet, Google Docs, or other computer software/tools as required.
- Verifies integrity of data by comparing it to source data
- Reviews data for errors, missing pages, or missing information and resolves any discrepancies
- Maintains a filing system and protects confidential customer information.
- Performs regular backups to ensure data preservation
- Maintains a satisfactory level of quality and productivity as per department standards
**Data Entry Operator Qualifications/Skills**:
- Excellent attention to detail
- Ability to multitask effectively
- Written and verbal communication skills
- Ability to perform repetitive tasks with a high degree of accuracy
- Comfortable working independently with minimal supervision
- Eager to learn new tools and upgrade skills
**Education and Experience Requirements**:
- Graduation degree preferred but not required
- 1-3 years of experience in data entry or equivalent training
- Ability to type a minimum of 30 WPM
- Experience with Microsoft Office (Microsoft Excel, Microsoft Word, PowerPoint), Google docs, etc.
**Salary**: ₹10,000.00 - ₹13,000.00 per month
**Benefits**:
- Leave encashment
Schedule:
- Day shift
Supplemental pay types:
- Overtime pay
COVID-19 considerations:
We encourage use of hand sanitizer in our office and maintain other Covid safety measures.
Ability to commute/relocate:
- Chandni Market, Kolkata - 700072, West Bengal: Reliably commute or planning to relocate before starting work (required)
Application Question(s):
- What is your expected monthly salary?
- What is your last drawn salary with Date?
**Education**:
- Diploma (preferred)
**Experience**:
- Microsoft Office: 1 year (preferred)
- total work: 1 year (preferred)
- Internet research: 1 year (preferred)
**Language**:
- English (preferred)