20,305 Data Engineer jobs in India

Data Engineer- Senior Data Engineer

Bengaluru, Karnataka Paytm

Posted today


Job Description

The Role

We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.

What We Need

Agentic AI & LLM Engineering

You should have hands-on experience with:

Multi-agent systems: Building agents that coordinate, communicate, and work together on complex workflows

Agent orchestration: Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions

LLMOps: End-to-end LLM lifecycle management - hands-on experience managing the complete LLM workflow, from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines. Production LLM infrastructure - experience building and maintaining production LLM serving infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.

AI observability: Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.

Evaluation frameworks: Creating comprehensive testing for agent performance, safety, and goal achievement

LLM inference optimization: Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)

Systems Engineering

Strong backend development skills, including:

Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows

Distributed systems: Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination

Database strategy: Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management

Web-scale design: Systems handling millions of requests with proper load balancing and fault tolerance

DevOps (Non-negotiable)

Kubernetes: Working knowledge required - deployments, services, cluster management

Containerization: Docker with production optimization and security best practices

CI/CD: Automated testing and deployment pipelines

Infrastructure as Code: Terraform, Helm charts

Monitoring: Prometheus, Grafana for tracking complex agent behaviors

Programming languages: Java, Python

What You'll Build

You'll architect the infrastructure that powers our autonomous AI systems:

Agent Orchestration Platform: Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.

Evaluation Infrastructure: Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.

Production AI Services: High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.

Training Systems: Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.

Who You Are

You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.

You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.

You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.
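To make the "retry gracefully when an API call fails" point concrete, here is a minimal, illustrative sketch of a backoff-and-retry wrapper for agent tool calls; the function names and retry policy are assumptions for illustration, not the actual Paytm stack.

```python
import logging
import random
import time

log = logging.getLogger("agent.tools")

def call_with_retry(fn, *args, max_attempts=4, base_delay=0.5, **kwargs):
    """Call an external tool/API, retrying transient failures with
    exponential backoff plus jitter. Raises after the final attempt."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as exc:  # in practice, catch only transient error types
            if attempt == max_attempts:
                log.error("tool call failed permanently: %r", exc)
                raise
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            log.warning("attempt %d/%d failed (%r); retrying in %.2fs",
                        attempt, max_attempts, exc, delay)
            time.sleep(delay)
```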

Senior Data Engineer / Data Engineer

Kochi, Kerala Invokhr

Posted today


Job Description

LOOKING FOR IMMEDIATE JOINERS OR CANDIDATES WITH A 15-DAY NOTICE PERIOD. THIS IS A WORK-FROM-HOME OPPORTUNITY.

Position: Senior Data Engineer / Data Engineer

Desired Experience: 3-8 years

Salary: Best-in-industry

You will act as a key member of the Data consulting team, working directly with the partners and senior stakeholders of the clients, designing and implementing big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

What is in it for you:

Opportunity to work with a world-class team of business consultants and engineers, solving some of the most complex business problems by applying data and analytics techniques

Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

Essential Technical Skills:

Technical expertise with emerging Big Data technologies such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and with visualization tools such as Tableau and Power BI

Experience with cloud, container and micro service infrastructures

Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Hands-on experience with data modelling, query techniques and complexity analysis

Desirable Skills:

Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications in any of the above areas will be preferred.

Your duties will include:

Develop data solutions within Big Data Azure and/or other cloud environments

Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design Data Architectures using Azure Data Factory, Databricks, Data Lake, Synapse

Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur.

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

Data Integration, Transformation, Modelling

Maintaining all relevant documentation and knowledge bases

Research and suggest new database products, services and protocols

Essential Personal Traits:

You should be able to work independently and communicate effectively with remote teams.

Timely communication/escalation of issues/dependencies to higher management.

Curiosity to learn and apply emerging technologies to solve business problems


Data Engineer / Senior Data Engineer

Madhapur, Telangana Thomson Reuters

Posted today


Job Description

Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer / Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role:

In this role as a Data Engineer / Senior Data Engineer, you will:

  • Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.

  • Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.

  • Build pipelines that source, transform, and load data that’s both structured and unstructured keeping in mind data security and access controls.

  • Explore large volumes of data with curiosity and conviction.

  • Contribute to the strategy and architecture of data management systems and solutions.

  • Proactively troubleshoot and resolve data-related issues and performance bottlenecks in a timely manner.

  • Be open to learning and working on emerging technologies in the data engineering, data science and cloud computing space.

  • Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

  • Shift Timings: 12 PM to 9 PM (IST)

  • Work from office 2 days a week (Mandatory)

About You

You’re a fit for the role of Data Engineer if your background includes:

  • Must have at least 4 years of total work experience, with at least 2 years in data engineering or analytics domains.

  • Graduates in data analytics, data science, computer science, software engineering or other data-centric disciplines.

  • SQL proficiency is a must.

  • Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx or similar solutions.

  • Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, Azure.

  • Experience with orchestration tools like Airflow or Dagster (see the sketch after this list).

  • Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight).

  • Data modelling knowledge of schemas such as snowflake and star.

  • Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.

  • Knowledge of building ETL workflows, database design, and query optimization.

  • Has experience with a scripting language like Python.

  • Works well within a team and collaborates with colleagues across domains and geographies.

  • Excellent oral, written, and visual communication skills.

  • Has a demonstrable ability to assimilate new information thoroughly and quickly.

  • Strong logical and scientific approach to problem-solving.

  • Can articulate complex results in a simple and concise manner to all levels within the organization.

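For concreteness, here is a minimal, hypothetical Airflow DAG of the kind the orchestration item above refers to (Airflow 2.x API); the DAG id, schedule, and task bodies are placeholder assumptions, not part of the actual role.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    """Pull raw records from a source system (placeholder)."""

def transform_and_load(**context):
    """Clean, model, and load the data into the warehouse (placeholder)."""

with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",              # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    # Two-step pipeline: extract first, then transform and load downstream.
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_and_load",
                               python_callable=transform_and_load)
    extract_task >> load_task
```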

    What’s in it For You?

  • Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

  • Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

  • Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

  • Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

  • Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

  • Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

  • Making a Real-World Impact:  We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

    Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.

    We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

    As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace.

We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation.

Learn more about how to protect yourself from fraudulent job postings.

    More information about Thomson Reuters can be found on


    Data Engineer- Lead Data Engineer

    Bengaluru, Karnataka Paytm

    Posted today


    Job Description

    Role Overview



    We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.

    As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements . Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.


    This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.


    Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments infrastructure.


    Key Responsibilities


Data Stream Architecture & Development

Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls

Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics

Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities

Implement comprehensive data quality frameworks covering the 4 C's - Completeness, Consistency, Conformity, and Correctness - to ensure data reliability that supports sound business decisions (see the sketch after this section)

Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities

Infrastructure & Platform Management

Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery

Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance

Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures

Ensure high availability and disaster recovery for critical data systems to maintain business continuity
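Purely as illustration of what checks for the 4 C's can look like in code, a toy sketch over a hypothetical pandas transactions feed; the column names, value domains, and thresholds are invented for the example and are not Paytm's actual rules.

```python
import pandas as pd

def quality_report(txns: pd.DataFrame) -> dict:
    """Toy checks mapping to the 4 C's for a hypothetical transactions feed."""
    return {
        # Completeness: no missing keys or amounts
        "completeness": bool(txns[["txn_id", "amount"]].notna().all().all()),
        # Consistency: no duplicate transaction ids
        "consistency": not txns["txn_id"].duplicated().any(),
        # Conformity: currency codes drawn from the expected domain
        "conformity": bool(txns["currency"].isin({"INR", "USD"}).all()),
        # Correctness: amounts within a plausible business range
        "correctness": bool(txns["amount"].between(0, 10_000_000).all()),
    }

# Example: flag the feed if any dimension fails
report = quality_report(pd.DataFrame(
    {"txn_id": [1, 2], "amount": [250.0, 990.5], "currency": ["INR", "INR"]}
))
assert all(report.values()), f"data quality violation: {report}"
```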


Performance & Optimization

Monitor and optimize streaming performance with focus on latency reduction and operational efficiency

Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations

Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols

Conduct proactive capacity planning and performance tuning to support business growth and data volume increases


Collaboration & Leadership

Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations

Mentor junior data engineers with emphasis on data quality best practices and customer-focused approach

Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy

Document technical designs, processes, and operational procedures with focus on maintainability and knowledge sharing


    Required Qualifications


Experience & Education

Bachelor's or Master's degree in Computer Science, Engineering, or related technical field

    7+ years (Senior) of hands-on data engineering experience

    Proven experience with large-scale data processing systems (preferably in fintech/payments domain)

    Experience building and maintaining production data streams processing TB/PB scale data with strong performance and reliability standards


Technical Skills & Requirements

Programming Languages: Expert-level proficiency in both Python and Java; experience with Scala preferred


    Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow

    Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services

    Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)

    Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness

    Data Governance: Experience with data lineage tracking, metadata management, and data cataloging

Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL

Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience

    Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools

    Data Modeling: Dimensional modeling, data vault, or similar methodologies

    Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus

    Infrastructure as Code: Terraform, CloudFormation (preferred)

    Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services


    Preferred Qualifications


    Domain Expertise

Previous experience in fintech, payments, or banking industry with solid understanding of regulatory compliance and financial data requirements

Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations

Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection


    Advanced Technical Skills (Preferred)


Experience building automated data quality frameworks covering all 4 C's dimensions

Knowledge of machine learning stream orchestration (MLflow, Kubeflow)

Familiarity with data mesh or federated data architecture patterns

Experience with change data capture (CDC) tools and techniques


Leadership & Soft Skills

Strong problem-solving abilities with experience debugging complex distributed systems in production environments

Excellent communication skills with ability to explain technical concepts to diverse stakeholders while highlighting business value

Experience mentoring team members and leading technical initiatives with focus on building a quality-oriented culture

Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments


    What We Offer


Opportunity to work with cutting-edge technology at scale

Competitive salary and equity compensation

    Comprehensive health and wellness benefits

Professional development opportunities and conference attendance

Flexible working arrangements

    Chance to impact millions of users across India's digital payments ecosystem


    Application Process


    Interested candidates should submit:

    Updated resume highlighting relevant data engineering experience with emphasis on real-time systems and data quality

    Portfolio or GitHub profile showcasing data engineering projects, particularly those involving high-throughput streaming systems

    Cover letter explaining interest in fintech/payments domain and understanding of data criticality in financial services

    References from previous technical managers or senior colleagues who can attest to your data quality standards








    Senior Data Engineer / Data Engineer

Gurugram, Haryana Invokhr

    Posted today


    Job Description

    Desired Experience: 3-8 years

    Salary: Best-in-industry

Location: Gurgaon (5 days onsite)


    Overview:

You will act as a key member of the Data consulting team, working directly with the partners and senior stakeholders of the clients, designing and implementing big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

    What is in it for you:

    Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

    Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

    Essential Technical Skills:

Technical expertise with emerging Big Data technologies such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and with visualization tools such as Tableau and Power BI

    Experience with cloud, container and micro service infrastructures

    Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

    Hands-on experience with data modelling, query techniques and complexity analysis

    Desirable Skills:

    Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications in any of the above areas will be preferred.

    Your duties will include:

Develop data solutions within Big Data Azure and/or other cloud environments

    Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design Data Architectures using Azure Data Factory, Databricks, Data Lake, Synapse

    Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur.

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

    Data Integration, Transformation, Modelling

    Maintaining all relevant documentation and knowledge bases

    Research and suggest new database products, services and protocols

    Essential Personal Traits:

    You should be able to work independently and communicate effectively with remote teams.

    Timely communication/escalation of issues/dependencies to higher management.

    Curiosity to learn and apply emerging technologies to solve business problems


** Interested candidates, please send their resume on - and **


    Data Engineer / Senior Data Engineer

Karnataka Thomson Reuters

    Posted today


    Job Description

Are you excited by the prospect of wrangling data, helping develop information systems/sources/tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer / Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

    About the Role:

    In this role as a Data Engineer / Senior Data Engineer, you will:

  • Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.

  • Work closely with data scientists and data visualization teams to understand data requirements to ensure the availability of high-quality data for analytics, modelling, and reporting.

  • Build pipelines that source, transform, and load data that’s both structured and unstructured keeping in mind data security and access controls.

  • Explore large volumes of data with curiosity and conviction.

  • Contribute to the strategy and architecture of data management systems and solutions.

  • Proactively troubleshoot and resolve data-related issues and performance bottlenecks in a timely manner.

  • Be open to learning and working on emerging technologies in the data engineering, data science and cloud computing space.

  • Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

  • Shift Timings: 12 PM to 9 PM (IST)

  • Work from office 2 days a week (Mandatory)

About You

You’re a fit for the role of Data Engineer if your background includes:

  • Must have at least 4 years of total work experience, with at least 2 years in data engineering or analytics domains.

  • Graduates in data analytics, data science, computer science, software engineering or other data-centric disciplines.

  • SQL proficiency is a must.

  • Experience with data pipeline and transformation tools such as dbt, Glue, FiveTran, Alteryx or similar solutions.

  • Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, Azure.

  • Experience with orchestration tools like Airflow or Dagster.

  • Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight).

  • Data modelling knowledge of schemas such as snowflake and star.

  • Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.

  • Knowledge of building ETL workflows, database design, and query optimization.

  • Has experience with a scripting language like Python.

  • Works well within a team and collaborates with colleagues across domains and geographies.

  • Excellent oral, written, and visual communication skills.

  • Has a demonstrable ability to assimilate new information thoroughly and quickly.

  • Strong logical and scientific approach to problem-solving.

  • Can articulate complex results in a simple and concise manner to all levels within the organization.


    What’s in it For You?

  • Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

  • Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

  • Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

  • Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

  • Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

  • Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

  • Making a Real-World Impact:  We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

    Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.

    We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

    As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace.

We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation.

Learn more about how to protect yourself from fraudulent job postings.

    More information about Thomson Reuters can be found on


    Data Engineer- Senior Data Engineer

    Bengaluru, Karnataka Paytm

    Posted today


    Job Description

    The Role


    We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.


    What We Need


    Agentic AI & LLM Engineering

    You should have hands-on experience with:

    Multi-agent systems : Building agents that coordinate, communicate, and work together on complex workflows

    Agent orchestration : Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions

    LLMOps Experience : End-to-End LLM Lifecycle Management - hands-on experience managing the complete LLM workflow from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines. Production LLM Infrastructure - experience building and maintaining production LLM serving infrastructure including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real-time.


    AI Observability : Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.

    Evaluation frameworks : Creating comprehensive testing for agent performance, safety, and goal achievement

    LLM inference optimization : Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)
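For concreteness, a minimal sketch of batched offline inference with vLLM, one of the frameworks named above; the model choice and prompts are illustrative assumptions, and in practice a GPU-backed serving deployment would sit behind this.

```python
from vllm import LLM, SamplingParams

# Continuous batching is handled by the engine; the caller simply submits
# a batch of prompts and lets vLLM schedule them onto the GPU.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # illustrative model choice
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = [
    "Summarize the transaction dispute below: ...",
    "Classify the user intent: ...",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```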

    Systems Engineering

    Strong backend development skills including:

Python expertise : FastAPI, Django, or Flask for building robust APIs that handle agent workflows (see the sketch after this list)

    Distributed systems : Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination

    Database strategy : Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management

    Web-scale design : Systems handling millions of requests with proper load balancing and fault tolerance
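A minimal sketch, assuming FastAPI from the list above, of an endpoint that accepts an agent task; the route, schemas, and handler logic are hypothetical, and a production handler would enqueue long-running work rather than complete it inline.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class AgentTask(BaseModel):
    goal: str
    max_steps: int = 5

class AgentResult(BaseModel):
    status: str
    steps_taken: int

@app.post("/v1/agent/tasks", response_model=AgentResult)
async def run_agent_task(task: AgentTask) -> AgentResult:
    # Placeholder for the planner/tool-use loop; a real handler would
    # persist agent state between steps and hand off to a worker queue.
    return AgentResult(status="accepted", steps_taken=0)
```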


    DevOps (Non-negotiable)

    Kubernetes : Working knowledge required - deployments, services, cluster management

    Containerization : Docker with production optimization and security best practices

    CI/CD : Automated testing and deployment pipelines

    Infrastructure as Code : Terraform, Helm charts

    Monitoring : Prometheus, Grafana for tracking complex agent behaviors

Programming Languages: Java, Python


    What You'll Build

    You'll architect the infrastructure that powers our autonomous AI systems:

    Agent Orchestration Platform : Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.

    Evaluation Infrastructure : Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.

    Production AI Services : High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.

    Training Systems : Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.


    Who You Are

    You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.

    You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.

    You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.
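As one concrete reading of the observability point above, a small, hypothetical structured-tracing decorator for agent steps; the event schema and the print-based exporter are stand-ins for a real tracing backend, not an actual Paytm component.

```python
import functools
import json
import time
import uuid

def traced_step(step_name):
    """Emit one structured event per agent step so dashboards can catch
    slow, failing, or unexpectedly chatty agents."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            event = {"step": step_name, "trace_id": str(uuid.uuid4())}
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                event["status"] = "ok"
                return result
            except Exception as exc:
                event["status"] = "error"
                event["error"] = repr(exc)
                raise
            finally:
                event["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
                print(json.dumps(event))  # stand-in for a real trace exporter
        return wrapper
    return decorator

@traced_step("plan")
def plan(goal: str) -> list:
    # Toy planner used only to exercise the decorator.
    return [f"research {goal}", f"draft answer for {goal}"]
```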









    Data Engineer

    Pune, Maharashtra Cummins Inc.

    Posted 2 days ago

    Job Viewed

    Tap Again To Close

    Job Description

    **DESCRIPTION**
**Note: Although the role category specified in the GPP is Remote, the requirement is for a hybrid working model from the Cummins Pune office.**
    **Job Summary:**
Supports, develops and maintains a data and analytics platform. Effectively and efficiently processes, stores and makes data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale.
    **Key Responsibilities:**
Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
Implements methods to continuously monitor and troubleshoot data quality and data integrity issues.
Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users.
Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
Develops physical data models and implements data storage architectures as per design guidelines.
Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models.
Participates in testing and troubleshooting of data pipelines.
Develops and operates large scale data storage and processing solutions using different distributed and cloud-based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
Uses agile development technologies, such as DevOps, Scrum, Kanban and the continuous improvement cycle, for data-driven applications.
    **RESPONSIBILITIES**
    **Competencies:**
    System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
    Collaborates - Building partnerships and working collaboratively with others to meet shared objectives.
    Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
    Customer focus - Building strong customer relationships and delivering customer-centric solutions.
    Decision quality - Making good and timely decisions that keep the organization moving forward.
    Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
    Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
    Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
    Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
    Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
    Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making.
    Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented.
    Values differences - Recognizing the value that different perspectives and cultures bring to an organization.
    **Education, Licenses, Certifications:**
    College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations.
    **Experience:**
    4-5 Years of experience.
    Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities.
    Knowledge of the latest technologies in data engineering is highly preferred and includes:
    - Exposure to Big Data open source
    - SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework
    - SQL query language
    - Clustered compute cloud-based implementation experience
    - Familiarity developing applications requiring large file movement for a Cloud-based environment
    - Exposure to Agile software development
    - Exposure to building analytical solutions
    - Exposure to IoT technology
    **QUALIFICATIONS**
    1) Work closely with business Product Owner to understand product vision.
2) Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure Data Lake, Snowflake).
    3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards.
    4) Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake.
    5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers.
    6) Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers.
    7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision.
8) Assist in resolving issues that compromise data accuracy and usability.
    1. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
    2. Database Management: Intermediate level expertise in SQL and NoSQL databases.
    3. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
    4. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
    5. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
    6. API: Working knowledge of API to consume data from ERP, CRM
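A minimal sketch of the kind of API-to-landing-zone consumption described in item 6 above; the endpoint, auth token, and field names are placeholders, not a real Cummins, ERP, or CRM API.

```python
import pandas as pd
import requests

# Illustrative only: URL, token, and response shape are assumptions.
resp = requests.get(
    "https://erp.example.com/api/v1/purchase-orders",
    headers={"Authorization": "Bearer <token>"},
    params={"updated_since": "2024-01-01"},
    timeout=30,
)
resp.raise_for_status()

# Flatten the JSON payload and land it as Parquet for downstream pipelines.
orders = pd.json_normalize(resp.json()["items"])
orders.to_parquet("landing/purchase_orders.parquet", index=False)
```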
    **Job** Systems/Information Technology
    **Organization** Cummins Inc.
    **Role Category** Remote
    **Job Type** Exempt - Experienced
    **ReqID**
    **Relocation Package** Yes

    Data Engineer

    Bangalore, Karnataka Celonis

    Posted 2 days ago


    Job Description

    We're Celonis, the global leader in Process Mining technology and one of the world's fastest-growing SaaS firms. We believe there is a massive opportunity to unlock productivity by placing data and intelligence at the core of business processes - and for that, we need you to join us.
    **The Team**
    You will be joining the Catalog team within the Business Apps department. Our mission is to build end-to-end solutions on the Celonis platform, including data models and end-user applications, to accelerate time to value for our customers and partners. The Catalog team within Business Apps specializes in three aspects: defining the data ontology of the most common business processes, building prebuilt transformations for such ontologies for major source systems like SAP, Oracle etc, and lastly, collaborating with various teams in both the Product and Go-to-market organizations to drive adoption at scale.
    As a **Data Engineer** , you will own and focus on primarily two aspects: On the one hand, refining prebuilt transformations for existing ontologies (for processes like Order to Cash, Procure to Pay, Inventory Management) for SAP, Oracle etc and validating them across our early adopters in our customer base. On the other, defining and extending the existing ontologies with additional processes and extending to additional systems. In addition, you would also be responsible for maintaining the quality of content that we produce, and write documentation on the ontology definitions. This will ensure both internal and external application developers will be able to leverage the data foundation to develop their solutions
    **The work you'll do:**
+ Build data models for the defined ontologies and mappings using the object-centric process mining methodologies with performant SQL transformations (see the sketch after this list).
    + Design and implement business objects, process events, and data models in the Celonis platform.
    + Research and design:
    + ontologies for new business processes, improve and extend capabilities of existing ones
    + the source system transformations to map them with the defined ontologies.
    + Facilitate cross-functional interactions with product managers, domain experts, engineers, and consultants.
    + Test and validate the models in development environments and customer environments to gather early feedback
    + Document the data model governing principles and development.
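For illustration only, a sketch of the kind of prebuilt SQL transformation described above, run here via PySpark for concreteness: it maps a SAP-style sales item table onto a hypothetical object-centric event. The table and column names are assumptions for the example, not Celonis's actual content, and the `vbap` table is assumed to be already registered in the catalog.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical mapping of SAP sales document items (VBAP) onto a generic
# Order-to-Cash "SalesOrderItem" object with a creation event.
spark.sql("""
    SELECT
        v.VBELN  AS order_id,
        v.POSNR  AS item_id,
        v.ERDAT  AS event_time,
        'Create Order Item' AS event_name
    FROM vbap v
    WHERE v.ERDAT IS NOT NULL
""").write.mode("overwrite").saveAsTable("o2c_sales_order_item_events")
```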
    **The qualifications you need:**
    + You have that rare combination-a strong technical expertise and business acumen. You'll use this to build a system-agnostic data model for various business processes.
    + 3-6+ years of experience working in the data field as a Data Engineer, Data Analyst or similar.
    + **Must-have:**Experience working with data from at least one of the following system types:
    + ERP (e.g. SAP ECC or S/4, Oracle EBS or Fusion)
    + Supply Chain Management (e.g. BlueYonder, SAP or Oracle Transportation Management)
    + CRM (e.g. Salesforce, Microsoft Dynamics)
    + IT (e.g. ServiceNow)
    + Strong solution designing skills with solid understanding of business processes (supply chain, financial, CRM or IT-related processes) and data beneath the IT systems that run these processes.
    + Experience with databases and data modeling, and hands-on experience with SQL.
    + Ability to work independently and own a part of the team's goals
    + Very good knowledge of spoken and written English
    + Ability to communicate effectively and build a good rapport with team members.
    **What Celonis Can Offer You:**
    + **Pioneer Innovation:** Work with the leading, award-winning process mining technology, shaping the future of business.
    + **Accelerate Your Growth:** Benefit from clear career paths, internal mobility, a dedicated learning program, and mentorship opportunities.
+ **Receive Exceptional Benefits:** Including generous PTO, hybrid working options, company equity (RSUs), comprehensive benefits, extensive parental leave, dedicated volunteer days, and much more. Interns and working students, explore your benefits here.
    + **Prioritize Your Well-being:** Access to resources such as gym subsidies, counseling, and well-being programs.
    + **Connect and Belong:** Find community and support through dedicated inclusion and belonging programs.
+ **Make Meaningful Impact:** Be part of a company driven by strong values that guide everything we do: Live for Customer Value, The Best Team Wins, We Own It, and Earth Is Our Future.
    + **Collaborate Globally:** Join a dynamic, international team of talented individuals.
    + **Empowered Environment:** Contribute your ideas in an open culture with autonomous teams.
    **About Us:**
    Celonis makes processes work for people, companies and the planet. The Celonis Process Intelligence Platform uses industry-leading process mining and AI technology and augments it with business context to give customers a living digital twin of their business operation. It's system-agnostic and without bias, and provides everyone with a common language for understanding and improving businesses. Celonis enables its customers to continuously realize significant value across the top, bottom, and green line. Celonis is headquartered in Munich, Germany, and New York City, USA, with more than 20 offices worldwide.
Get familiar with the Celonis Process Intelligence Platform by watching this video.
    **Celonis Inclusion Statement:**
    At Celonis, we believe our people make us who we are and that "The Best Team Wins". We know that the best teams are made up of people who bring different perspectives to the table. And when everyone feels included, able to speak up and knows their voice is heard - that's when creativity and innovation happen.
    **Your Privacy:**
Any information you submit to Celonis as part of your application will be processed in accordance with Celonis' Accessibility and Candidate Notices. By submitting this application, you confirm that you agree to the storing and processing of your personal data by Celonis as described in our Privacy Notice for the Application and Hiring Process.
Please be aware of common job offer scams, impersonators and frauds. Learn more here.

    Data Engineer

Hyderabad, Telangana Amgen

    Posted 4 days ago

    Job Viewed

    Tap Again To Close

    Job Description

    **About Amgen**
    Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what's known today.
Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas - Oncology, Inflammation, General Medicine, and Rare Disease - we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives.
    Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
    **What you will do**
    **Role Description:**
We are seeking an experienced Data Engineer for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.
    **Roles & Responsibilities:**
    + Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
+ Deliver data pipeline projects from development to deployment, managing timelines and risks.
    + Ensure data quality and integrity through rigorous testing and monitoring.
    + Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
    + Work closely with product team, and stakeholders to understand data requirements.
    + Adhere to data engineering best practices and standards.
    + Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies.
    + Familiarity with code versioning using GIT, Jenkins and code migration tools.
    + Exposure to Jira or Rally.
    + Stay up to date with the latest data technologies and trends.
    **What we expect of you**
    **Basic Qualifications:**
+ Master's or bachelor's degree in computer science or other STEM majors, with 5 to 9 years of Information Systems experience
**Must-Have Skills:**
    + Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP)
+ Proficiency in Python, PySpark, SQL. Hands-on experience with ETL performance tuning (see the sketch after this list).
    + Development knowledge in Databricks.
    + Good analytical and problem-solving skills to address complex data challenges.
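A brief, illustrative PySpark sketch of the ETL performance tuning mentioned in the must-have skills (early filtering, broadcast join on the small side, partitioned writes); the S3 paths and column names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = spark.read.parquet("s3://bucket/raw/events/")  # paths are placeholders
sites = spark.read.parquet("s3://bucket/dim/sites/")

daily = (
    events
    .filter(F.col("event_date") >= "2024-01-01")   # prune rows early
    .join(F.broadcast(sites), "site_id")           # avoid shuffling the small dimension
    .groupBy("event_date", "site_name")
    .agg(F.count("*").alias("event_count"))
)

# Partitioned output keeps downstream reads selective.
daily.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://bucket/curated/daily_events/")
```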
    **Good-to-Have Skills:**
    + Experienced with data modeling and performance tuning for both OLAP and OLTP databases
    + Experienced working with Apache Spark, Apache Airflow
    + Experienced with software engineering best-practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven etc.), automated unit testing, and Dev Ops
+ Familiarity with SQL/NoSQL databases, and vector databases for large language models
+ Familiarity with prompt engineering and model fine-tuning
    **Professional Certifications (please mention if the certification is preferred or mandatory for the role):**
    + AWS Certified Data Engineer (preferred)
    + Databricks Certification (preferred)
    + Any SAFe Agile certification (preferred)
    **Soft Skills:**
    + Skilled in breaking down problems, documenting problem statements, and estimating efforts.
    + Effective communication and interpersonal skills to collaborate with cross-functional teams.
    + Excellent analytical and troubleshooting skills.
    + Strong verbal and written communication skills
    + Ability to work effectively with global, virtual teams
    + High degree of initiative and self-motivation.
    + Team-oriented, with a focus on achieving team goals
    **What you can expect of us**
    As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.
    **Apply now and make a lasting impact with the Amgen team.**
    **careers.amgen.com**
    **EQUAL OPPORTUNITY STATEMENT**
    Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.
    We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
     
