69,547 Data Professionals jobs in India

Data Scientist/Data Engineer/Data Analyst

Bengaluru, Karnataka ₹500000 - ₹1500000 Y UST

Posted today


Job Description

3 - 5 Years

1 Opening

Bangalore

Role description

Role Proficiency:

Independently interprets data and analyses results using statistical techniques

Outcomes:

  • Independently mine and acquire data from primary and secondary sources and reorganize it into a format easily read by either a machine or a person, generating insights that help clients make better decisions.

  • Develop reports and analyses that effectively communicate trends, patterns, and predictions using relevant data.

  • Utilize historical data sets and planned changes to business models to forecast business trends.

  • Work alongside teams within the business or the management team to establish business needs.

  • Create visualizations, including dashboards, flowcharts, and graphs, to relay business concepts to colleagues and other relevant stakeholders.

  • Set FAST goals
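The mine-reorganize-analyse flow in the outcomes above can be sketched in a few lines of Python; the field names, sample records, and per-region summary here are hypothetical illustrations, not a prescribed implementation.

```python
import csv
import io
import statistics

# Hypothetical raw extract: mined data often mixes relevant and irrelevant fields.
raw = io.StringIO(
    "region,month,revenue,notes\n"
    "South,Jan,120,ok\n"
    "South,Feb,150,ok\n"
    "North,Jan,90,missing receipt\n"
)

# Reorganize: keep only the fields needed for analysis, with consistent types.
rows = [
    {"region": r["region"], "month": r["month"], "revenue": float(r["revenue"])}
    for r in csv.DictReader(raw)
]

# Analyse: a simple statistical summary (mean revenue) per region.
by_region = {}
for r in rows:
    by_region.setdefault(r["region"], []).append(r["revenue"])

summary = {region: statistics.mean(vals) for region, vals in by_region.items()}
print(summary)  # → {'South': 135.0, 'North': 90.0}
```

The same shape scales up: swap the in-memory CSV for a database or API source, and the dict comprehension for a proper aggregation layer.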

Measures of Outcomes:

  • Schedule adherence to tasks

  • Quality: errors in data interpretation and modelling

  • Number of business processes changed due to vital analysis.

  • Number of insights generated for business decisions

  • Number of stakeholder appreciations/escalations

  • Number of customer appreciations

  • Number of mandatory trainings completed

Outputs Expected:

Data Mining:

  • Acquiring data from various sources

Reorganizing/Filtering data:

  • Consider only relevant data from the mined data and convert it into a consistent, analysable format.

Analysis:

  • Use statistical methods to analyse data and generate useful results.

Create Data Models:

  • Use data to create models that depict trends in the customer base and the consumer population as a whole

Create Reports:

  • Create reports depicting the trends and behaviours from the analysed data

Document:

  • Create documentation for own work as well as perform peer review of documentation of others' work

Manage knowledge:

  • Consume and contribute to project-related documents, SharePoint libraries, and client universities

Status Reporting:

  • Report status of tasks assigned

  • Comply with project related reporting standards and process

Code:

  • Create efficient and reusable code, following coding best practices.

Code Versioning:

  • Organize and manage changes and revisions to code using a version control tool such as Git or Bitbucket.

Quality:

  • Provide quality assurance of imported data, working with a quality assurance analyst if necessary.

Performance Management:

  • Set FAST Goals and seek feedback from supervisor

Skill Examples:

  • Analytical Skills: Ability to work with large amounts of data: facts, figures, and number crunching.

  • Communication Skills: Ability to present findings or translate the data into an understandable document

  • Critical Thinking: Ability to look at the numbers, trends, and data and come up with new conclusions based on the findings.

  • Attention to Detail: Being vigilant in the analysis to come to accurate conclusions.

  • Quantitative skills - knowledge of statistical methods and data analysis software

  • Presentation Skills - reports and oral presentations to senior colleagues

  • Mathematical skills to estimate numerical data.

  • Work in a team environment

  • Proactively ask for and offer help

Knowledge Examples:

  • Proficient in mathematics and calculations.

  • Spreadsheet tools such as Microsoft Excel or Google Sheets

  • Advanced knowledge of Tableau or PowerBI

  • SQL

  • Python

  • DBMS

  • Operating Systems and software platforms

  • Knowledge of the customer domain and the sub-domain in which the problem is solved

  • Code version control, e.g. Git, Bitbucket, etc.

Additional Comments:

NA

Skills

Data Analyst, Data Science, Teamwork

About UST

UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.


Data Engineer _ Data

Bengaluru, Karnataka ₹600000 - ₹1800000 Y Launch It Consulting

Posted today


Job Description

Be a part of our success story. Launch offers talented and motivated people the opportunity to do the best work of their lives in a dynamic and growing company. Through competitive salaries, outstanding benefits, internal advancement opportunities, and recognized community involvement, you will have the chance to create a career you can be proud of. Your new trajectory starts here at Launch.

Summary: The Data Engineer in the Data & AI division is responsible for designing, developing, and maintaining robust data pipelines, ensuring the efficient and secure movement, transformation, and storage of data across business systems. The ideal candidate will support analytics and AI initiatives, enabling data-driven decision-making within the organisation.

Role: Data & AI Data Engineer

Location: Bangalore

Shift timings: General Shift

Roles & Responsibilities:

  • Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and AI-driven solutions.
  • Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions.
  • Optimise data extraction, transformation, and loading (ETL) processes for performance, scalability, and data quality.
  • Implement data models, build and maintain data warehouses and lakes, and ensure data security and compliance.
  • Monitor data pipeline performance and troubleshoot issues in a timely manner.
  • Document data processes, pipelines, and architecture for knowledge sharing and audit purposes.
  • Stay updated with industry trends and recommend best practices in data engineering and AI integration.
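The extract-transform-load responsibilities above can be made concrete with a minimal sketch using only Python's standard library; the source records, quality rule, and `orders` table are hypothetical assumptions, not the employer's actual pipeline.

```python
import sqlite3

# Extract: pretend these rows came from an upstream source system.
source_rows = [
    {"order_id": 1, "amount": "250.00", "country": "in"},
    {"order_id": 2, "amount": "99.50", "country": "IN"},
    {"order_id": 3, "amount": None, "country": "in"},  # fails a quality check
]

# Transform: enforce types, normalise casing, and drop rows with missing amounts.
clean = [
    (r["order_id"], float(r["amount"]), r["country"].upper())
    for r in source_rows
    if r["amount"] is not None
]

# Load: write the cleaned rows into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL, country TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)

total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # → 349.5
```

Production ETL replaces the in-memory list with connectors (Data Factory, Glue, etc.) and the quality check with a full validation framework, but the extract/transform/load separation is the same.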

Must-Have Skills:

  • Demonstrated proficiency in SQL and at least one programming language (Python, Java, or Scala).
  • Experience with cloud platforms such as Azure, AWS, or Google Cloud (Data Factory, Databricks, Glue, BigQuery, etc.).
  • Expertise in building and managing ETL pipelines and workflows.
  • Strong understanding of relational and non-relational databases.
  • Knowledge of data modelling, data warehousing, and data lake architectures.
  • Experience with version control systems (e.g., Git) and CI/CD principles.
  • Excellent problem-solving and communication skills.

Preferred skills:

  • Experience with big data frameworks (Spark, Hadoop, Kafka, etc.).
  • Familiarity with containerisation and orchestration tools (Docker, Kubernetes, Airflow).
  • Understanding of data privacy regulations (GDPR, etc.) and data governance practices.
  • Exposure to machine learning or AI model deployment pipelines.
  • Hands-on experience with reporting and visualisation tools (Power BI, Tableau, etc.).

We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients into the digital age, but our top priority is our positive impact on human experience. We ease anxiety and fear around digital transformation and replace it with opportunity. Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.

About Company: Launch IT India, a wholly owned subsidiary of The Planet Group (a US company), offers attractive compensation and a great work environment for prospective employees. Launch is an entrepreneurial business and technology consultancy. We help businesses and people navigate from current state to future state. Technology, tenacity, and creativity fuel our solutions, with offices in Bellevue, Sacramento, Dallas, San Francisco, Hyderabad & Washington D.C.


Data Analyst/Data Engineer

Navi Mumbai, Maharashtra ₹900000 - ₹1200000 Y QualityKiosk

Posted today


Job Description

Performance Assurance, Navi Mumbai

Posted On

01 Sep 2025

End Date

31 Oct 2025

Required Experience

2 - 3 Years

Basic Section

No. Of Openings

1

Designation

Data Analyst/Data Engineer

Closing Date

31 Oct 2025

Organisational

MainBU

PT

Sub BU

Performance Assurance

Country

India

Region

India 1

State

Maharashtra

City

Navi Mumbai

Working Location

Ghansoli

Client Location

NA

Skills

Skill

EXCEL ANALYTICS

DATA ANALYSIS AND COORDINATION

Highest Education

No data available

CERTIFICATION

No data available

Working Language

No data available

JOB DESCRIPTION

  • Advanced Excel, PPT creation, proposal creation and tracking
  • Current-month revenue tracking with Finance
  • Maintaining Quest data
  • Sharing PCW case reports to Finance and Sales with aging
  • Sharing invoicing status to Finance and Sales with aging
  • Analysis / dashboard creation or update
  • Daily resource - project mapping (sync with RMG)
  • RAS status: MM with bench bifurcation
  • Raising RRF as per the forecast/requirement and tracking it till closure
  • Per-project GP/OM update and GP/OM consolidation for the account
  • Updating AOP (daily)
  • Updating the leave tracker
  • Follow-ups on Quest/PCW allocations in the Quest/PCW approval cycle
  • Updating the Delivery team on BU RMG updates


Data Analyst/ Data Engineer

Gurugram, Haryana ₹500000 - ₹1200000 Y Three Across

Posted today


Job Description

Job Title: Data Analyst/ Data Engineer

Experience: 4B: 5+ years; 4C: 7+ years

Location: Badshahpur, Sector 69, Gurugram

Shift Timings: 6AM IST - 4PM IST

5AM IST - 3PM IST (daylight saving)

Exception for critical delivery:

Shift timing 4:30AM IST - 2:30PM IST

3:30AM IST - 1:30PM IST (DLS)

Skills Required:

  • Advanced SQL knowledge, BigQuery, Profisee
  • Experience with Python & Alteryx
  • Data visualization (Power BI / Looker Studio)

Roles and Responsibilities:

  • Proficient in IBM Cognos TM1 in the areas below:
  • Support: maintain existing reports and coordinate with the OEM in case of any issues/bugs.
  • Development: creation of new reports and changes to existing ones; should have worked on requirement gathering and BRDs through to developing, testing, and documenting the model.
  • Analyze large data sets
  • MDM management. MDM tool: Profisee.
  • Manage / maintain dimensions and master data
  • Data visualization tools: expertise in Looker Studio
  • Other tools knowledge required: Alteryx; Python
  • Data Hierarchy Management: Managing data hierarchies and data mapping.
  • Business Process Knowledge: Knowledgeable in business processes, especially regarding revenue recognition rules.
  • Governance: Ensuring rules and processes are followed, particularly concerning revenue allocation.
  • Communication: Acting as a liaison between sales teams and management.
  • Hierarchy Management: Managing data hierarchies and mapping, particularly in the sales environment.
  • Data Environment Understanding: A strong understanding of the data environment, including data layers and data tables.
  • Troubleshooting: Ability to troubleshoot data issues, reporting issues and identify root causes.
  • Collaboration: Ability to collaborate with different teams, including sales, finance, and tech teams.
  • Adapt to changing business needs and new systems.
  • FP&A tool: TM1. Manage TM1 finance data with data warehouse

Qualifications:

Bachelor's degree in Engineering

If interested, please share your resume to


Data Engineer- Lead Data Engineer

Bengaluru, Karnataka ₹1500000 - ₹2000000 Y Paytm

Posted today


Job Description

Role Overview

We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customers.

As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.

This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.

Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments ecosystem.

Responsibilities

Data Stream Architecture & Development

  • Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
  • Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
  • Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
  • Implement comprehensive data quality frameworks, ensuring the 4 C's: Completeness, Consistency, Conformity, and Correctness, so that data reliability supports sound business decisions
  • Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities

Infrastructure & Platform Management

  • Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
  • Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
  • Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
  • Ensure high availability and disaster recovery for critical data systems to maintain business continuity

Performance & Optimization

  • Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
  • Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations
  • Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
  • Conduct proactive capacity planning and performance tuning to support business growth and data volume increases

Collaboration & Leadership

  • Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
  • Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
  • Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
  • Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing
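The 4 C's named in the responsibilities above (Completeness, Consistency, Conformity, Correctness) can be illustrated with simple row-level and dataset-level checks; the sample records and validation rules below are hypothetical sketches, not Paytm's actual quality framework.

```python
from collections import Counter

# Hypothetical transaction records to validate.
records = [
    {"txn_id": "T1", "amount": 100.0, "currency": "INR", "status": "SUCCESS"},
    {"txn_id": "T2", "amount": -5.0, "currency": "INR", "status": "SUCCESS"},
    {"txn_id": "T3", "amount": 40.0, "currency": "usd", "status": "SUCCESS"},
    {"txn_id": None, "amount": 10.0, "currency": "INR", "status": "SUCCESS"},
]

def row_issues(rec):
    """Return the per-row 4 C's violated by one record."""
    issues = []
    # Completeness: required fields must be present.
    if any(rec.get(f) is None for f in ("txn_id", "amount", "currency")):
        issues.append("completeness")
    # Conformity: values must match the expected format (ISO currency codes).
    if rec.get("currency") not in ("INR", "USD"):
        issues.append("conformity")
    # Correctness: business rule - successful transactions have positive amounts.
    if rec.get("status") == "SUCCESS" and (rec.get("amount") or 0) <= 0:
        issues.append("correctness")
    return issues

# Consistency: the same transaction id must not appear twice across the dataset.
dupes = {k for k, n in Counter(r["txn_id"] for r in records).items() if n > 1}

report = {
    r["txn_id"]: row_issues(r) + (["consistency"] if r["txn_id"] in dupes else [])
    for r in records
}
print(report)
```

A production framework would attach these checks to every pipeline stage and raise alerts instead of printing, but the four categories of rule are the same.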

Required Qualifications

Experience & Education

Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field

7+ years (Senior) of hands-on data engineering experience

Proven experience with large-scale data processing systems (preferably in fintech/payments domain)

Experience building and maintaining production data streams processing TB/PB scale data with strong performance and reliability standards

Technical Skills & Requirements

Programming Languages:

Expert-level proficiency in both Python and Java; experience with Scala preferred

Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow

Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services

Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)

Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness

Data Governance: Experience with data lineage tracking, metadata management, and data cataloging

Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL

Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience

Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools

Data Modeling: Dimensional modeling, data vault, or similar methodologies

Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus

Infrastructure as Code: Terraform, CloudFormation (preferred)

Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services

Preferred Qualifications

Domain Expertise

  • Previous experience in fintech, payments, or banking with a solid understanding of regulatory compliance and financial data requirements
  • Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
  • Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection

Advanced Technical Skills (Preferred)

  • Experience building automated data quality frameworks covering all 4 C's dimensions
  • Knowledge of machine learning workflow orchestration (MLflow, Kubeflow)
  • Familiarity with data mesh or federated data architecture patterns
  • Experience with change data capture (CDC) tools and techniques

Leadership & Soft Skills

  • Strong problem-solving abilities with experience debugging complex distributed systems in production environments
  • Excellent communication skills with the ability to explain technical concepts to diverse stakeholders while highlighting business value
  • Experience mentoring team members and leading technical initiatives with a focus on building a quality-oriented culture
  • Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments


Data Engineer- Senior Data Engineer

Bengaluru, Karnataka ₹2000000 - ₹2500000 Y Paytm

Posted today


Job Description

The Role

We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.

What We Need

Agentic AI & LLM Engineering

You should have hands-on experience with:

Multi-agent systems: Building agents that coordinate, communicate, and work together on complex workflows

Agent orchestration: Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions

LLMOps Experience: End-to-End LLM Lifecycle Management - hands-on experience managing the complete LLM workflow from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines. Production LLM Infrastructure - experience building and maintaining production LLM serving infrastructure including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real-time.

AI Observability: Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.

Evaluation frameworks: Creating comprehensive testing for agent performance, safety, and goal achievement

LLM inference optimization: Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)
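Of the optimization techniques listed above, batching is the simplest to sketch in plain Python: group incoming prompts so each model call amortises its fixed overhead. The `fake_model` stand-in and the batch size of 4 are illustrative assumptions; real serving stacks (vLLM, TensorRT-LLM) do this continuously and asynchronously.

```python
def fake_model(batch):
    """Stand-in for an LLM forward pass; one call processes a whole batch."""
    return [f"completion for: {prompt}" for prompt in batch]

def batched_generate(prompts, batch_size=4):
    """Group prompts into fixed-size batches before invoking the model."""
    outputs = []
    for i in range(0, len(prompts), batch_size):
        outputs.extend(fake_model(prompts[i:i + batch_size]))
    return outputs

# 10 prompts become 3 model calls (4 + 4 + 2) instead of 10.
results = batched_generate([f"q{n}" for n in range(10)], batch_size=4)
print(len(results))  # → 10
```

Caching is the complementary trick: memoise completions for repeated prompts so they never reach the model at all.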

Systems Engineering

Strong backend development skills including:

Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows

Distributed systems: Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination

Database strategy: Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management

Web-scale design: Systems handling millions of requests with proper load balancing and fault tolerance

DevOps (Non-negotiable)

Kubernetes: Working knowledge required - deployments, services, cluster management

Containerization: Docker with production optimization and security best practices

CI/CD: Automated testing and deployment pipelines

Infrastructure as Code: Terraform, Helm charts

Monitoring: Prometheus, Grafana for tracking complex agent behaviors

Programming Languages: Java, Python

What You'll Build

You'll architect the infrastructure that powers our autonomous AI systems:

Agent Orchestration Platform: Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.

Evaluation Infrastructure: Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.

Production AI Services: High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.

Training Systems: Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.

Who You Are

You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.

You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.

You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.


Data Engineer - Senior Data Engineer

Bengaluru, Karnataka Paytm

Posted today


Job Description

About Us: Paytm is India’s largest digital payments and financial services platform, leading the mobile QR revolution. We power millions of businesses and individuals, and we’re building scalable, resilient systems to serve half a billion Indians and beyond. Here at Paytm, technology isn't just about keeping up; it's about building the future. We believe the next generation of engineers must not only scale systems to billions but also leverage AI and GPT tools to accelerate innovation.

About the Role: We're looking for a Senior AI engineer with 3-6 years of experience who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.

What We're Looking For:

Agentic AI & LLM Engineering. You should have hands-on experience with:

1) Multi-agent systems: building agents that coordinate, communicate, and work together on complex workflows.
2) Agent orchestration: designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions.
3) LLMOps: end-to-end LLM lifecycle management, from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment, including versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines; plus production LLM infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.
4) AI observability: implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.
5) Evaluation frameworks: creating comprehensive testing for agent performance, safety, and goal achievement.
6) LLM inference optimization: scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM).

Systems Engineering. Strong backend development skills, including:

1) Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows.
2) Distributed systems: microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination.
3) Database strategy: vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management.
4) Web-scale design: systems handling millions of requests with proper load balancing and fault tolerance.

DevOps (Non-negotiable):

1) Kubernetes: working knowledge required, covering deployments, services, and cluster management.
2) Containerization: Docker with production optimization and security best practices.
3) CI/CD: automated testing and deployment pipelines.
4) Infrastructure as Code: Terraform, Helm charts.
5) Monitoring: Prometheus, Grafana for tracking complex agent behaviors.

Programming Languages: Java, Python

What You'll Build: You'll architect the infrastructure that powers our autonomous AI systems:

1) Agent Orchestration Platform: multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.
2) Evaluation Infrastructure: comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.
3) Production AI Services: high-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.
4) Training Systems: scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.

Ideal Profile:

1) You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI: managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.
2) You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.
3) You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.

Preferred Qualifications: Bachelor's/Master's Degree in Computer Science or equivalent

Data Engineer - Lead Data Engineer

Bengaluru, Karnataka Paytm

Posted today


Job Description

About Us: Paytm is India's largest digital payments and financial services platform, leading the mobile QR revolution. We power millions of businesses and individuals, and we're building scalable, resilient systems to serve half a billion Indians and beyond. Here at Paytm, technology isn't just about keeping up; it's about building the future. We believe the next generation of engineers must not only scale systems to billions but also leverage AI and GPT tools to accelerate innovation.

About the Role: We're looking for a Lead AI Engineer with 6-10 years of experience who can build production-grade agentic AI systems. You'll work at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.

What We're Looking For:

Agentic AI & LLM Engineering. You should have hands-on experience with:

1) Multi-agent systems: building agents that coordinate, communicate, and work together on complex workflows.

2) Agent orchestration: designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions.

3) LLMOps: end-to-end LLM lifecycle management - hands-on experience managing the complete LLM workflow, from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines. Also production LLM infrastructure: experience building and maintaining production LLM serving infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.

4) AI observability: experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.

5) Evaluation frameworks: creating comprehensive testing for agent performance, safety, and goal achievement.

6) LLM inference optimization: scaling model serving with techniques such as batching, caching, and efficient frameworks (vLLM, TensorRT-LLM).

Systems Engineering. Strong backend development skills, including:

1) Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows.

2) Distributed systems: microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination.

3) Database strategy: vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management.

4) Web-scale design: systems handling millions of requests with proper load balancing and fault tolerance.

DevOps (non-negotiable):

1) Kubernetes: working knowledge required - deployments, services, cluster management.

2) Containerization: Docker, with production optimization and security best practices.

3) CI/CD: automated testing and deployment pipelines.

4) Infrastructure as Code: Terraform, Helm charts.

5) Monitoring: Prometheus and Grafana for tracking complex agent behaviours.

Programming Languages: Java, Python

What You'll Build: You'll architect the infrastructure that powers our autonomous AI systems:

1) Agent orchestration platform: multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.

2) Evaluation infrastructure: comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.

3) Production AI services: high-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.

4) Training systems: scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.

Ideal Profile:

1) You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI: managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.

2) You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.

3) You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.

Preferred Qualifications: Bachelor's/Master's degree in Computer Science or equivalent
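The "retry gracefully when an API call fails" point in the Ideal Profile above is a small but representative piece of agent engineering. A minimal sketch, assuming an agent tool is just a Python callable and that transient failures surface as TimeoutError/ConnectionError (function and parameter names here are illustrative, not part of any real stack):

```python
import random
import time

def call_with_retry(tool_fn, *args, max_attempts=4, base_delay=0.5, **kwargs):
    """Call an agent tool, retrying transient failures with jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return tool_fn(*args, **kwargs)
        except (TimeoutError, ConnectionError):
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the orchestrator
            # backoff grows 0.5s, 1s, 2s, ... with small random jitter
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In a real orchestrator, the final `raise` would typically feed a fallback path or a compensating step rather than crash the whole multi-step workflow.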
This advertiser has chosen not to accept applicants from your region.

Senior Data Engineer / Data Engineer

Gurugram, Haryana Invokhr

Posted today

Job Viewed

Tap Again To Close

Job Description

Desired Experience: 3-8 years

Salary: Best-in-industry

Location: Gurgaon ( 5 days onsite)


Overview:

You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders of the clients to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

What is in it for you:

Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

Essential Technical Skills:

Technical expertise with emerging Big Data technologies, such as: Python, Spark, Hadoop, Clojure, Git, SQL and Databricks; and visualization tools: Tableau and PowerBI

Experience with cloud, container and microservice infrastructures

Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Hands-on experience with data modelling, query techniques and complexity analysis

Desirable Skills:

Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications on any of the above areas will be preferred.

Your duties will include:

Develop data solutions within a Big Data Azure and/or other cloud environments

Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design Data Architectures using Azure Data Factory, Databricks, Data Lake, Synapse

Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

Data Integration, Transformation, Modelling

Maintaining all relevant documentation and knowledge bases

Research and suggest new database products, services and protocols
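A data-mapping deliverable like the one described in the duties above is often captured as a small declarative source-to-target spec before it is implemented in a tool such as Databricks. A minimal sketch in plain Python, with purely hypothetical field names and transformation rules:

```python
# Each entry maps a target field to its source field and a transformation.
# Field names and rules are illustrative, not from any real schema.
MAPPING = {
    "customer_id": {"source": "cust_no",    "transform": str.strip},
    "full_name":   {"source": "name",       "transform": str.title},
    "order_total": {"source": "amount_raw", "transform": lambda v: round(float(v), 2)},
}

def apply_mapping(source_row: dict) -> dict:
    """Apply the source-to-target mapping to one source record."""
    return {
        target: spec["transform"](source_row[spec["source"]])
        for target, spec in MAPPING.items()
    }
```

Keeping the mapping as data rather than code makes it easy to review with stakeholders and to generate the detailed-transformation documentation the role calls for.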

Essential Personal Traits:

You should be able to work independently and communicate effectively with remote teams.

Timely communication/escalation of issues/dependencies to higher management.

Curiosity to learn and apply emerging technologies to solve business problems


** Interested candidates, please send their resume to - and - **

This advertiser has chosen not to accept applicants from your region.

Senior Data Engineer / Data Engineer

Kochi, Kerala Invokhr

Posted today

Job Viewed

Tap Again To Close

Job Description

LOOKING FOR IMMEDIATE JOINERS OR CANDIDATES WITH A 15-DAY NOTICE PERIOD. THIS IS A WORK-FROM-HOME OPPORTUNITY.

Position: Senior Data Engineer / Data Engineer

Desired Experience: 3-8 years

Salary: Best-in-industry

You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders of the clients to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

What is in it for you:

Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

Essential Technical Skills:

Technical expertise with emerging Big Data technologies, such as: Python, Spark, Hadoop, Clojure, Git, SQL and Databricks; and visualization tools: Tableau and PowerBI

Experience with cloud, container and micro service infrastructures

Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Hands-on experience with data modelling, query techniques and complexity analysis

Desirable Skills:

Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications on any of the above areas will be preferred.

Your duties will include:

Develop data solutions within a Big Data Azure and/or other cloud environments

Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design Data Architectures using Azure Data Factory, Databricks, Data Lake, Synapse

Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

Data Integration, Transformation, Modelling

Maintaining all relevant documentation and knowledge bases

Research and suggest new database products, services and protocols

Essential Personal Traits:

You should be able to work independently and communicate effectively with remote teams.

Timely communication/escalation of issues/dependencies to higher management.

Curiosity to learn and apply emerging technologies to solve business problems

This advertiser has chosen not to accept applicants from your region.
 
