38,572 Database Developers jobs in India
MySQL Database Developers
Posted today
Job Description
We are looking to hire Senior MySQL Database Developers (2 positions) with 6–10 years of experience for our Chennai location.
The ideal candidates must bring strong expertise in MySQL and PostgreSQL with hands-on experience in:
- Advanced database design, query tuning, indexing, and performance optimization (see the sketch after this list).
- Development of stored procedures, triggers, and functions.
- Replication, clustering, PITR, and high-availability solutions.
- Backup/recovery, monitoring, and security implementation.
- Compliance with GDPR, HIPAA, SOX, and database governance.
- Leading migration, modernization, and automation initiatives.
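As a rough illustration of the query-tuning and indexing work listed above (not part of the original posting; the connection details, table, and column names are placeholder assumptions), a minimal Python sketch using the mysql-connector-python driver might look like this:

```python
# Hypothetical sketch: inspect a slow query's plan, then add an index.
# Host, credentials, table, and column names are placeholder assumptions.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app_user", password="secret", database="orders_db"
)
cur = conn.cursor()

# Check how MySQL executes a frequent lookup before tuning.
cur.execute("EXPLAIN SELECT order_id, total FROM orders WHERE customer_id = 42")
for row in cur.fetchall():
    print(row)  # look for 'ALL' (full table scan) and large row estimates

# Add an index so the lookup becomes an index range scan instead of a full scan.
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.commit()

cur.close()
conn.close()
```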
Work Location: Guindy, Chennai
Contact: Jeyaraj – HR
Senior MySQL Database Developers
Posted today
Job Description
Experience: 7+ years
Work Location: Chennai (work from office)
The ideal candidates must bring strong expertise in MySQL and PostgreSQL with hands-on experience in:
- Advanced database design, query tuning, indexing, and performance optimization.
- Development of stored procedures, triggers, and functions.
- Replication, clustering, PITR, and high-availability solutions.
- Backup/recovery, monitoring, and security implementation.
- Compliance with GDPR, HIPAA, SOX, and database governance.
- Leading migration, modernization, and automation initiatives.
- Additionally, candidates should have excellent problem-solving and leadership skills and be able to mentor junior DBAs while contributing to enterprise database strategies.
- We are seeking candidates with proven MySQL and PostgreSQL optimization experience for these critical roles.
Senior MySQL Database Developers
Posted today
Job Description
We are looking to hire Senior MySQL Database Developers with years of experience for our Chennai location.
The ideal candidates must bring strong expertise in MySQL and PostgreSQL with hands-on experience in:
- Advanced database design, query tuning, indexing, and performance optimization.
- Development of stored procedures, triggers, and functions.
- Replication, clustering, PITR, and high-availability solutions.
- Backup/recovery, monitoring, and security implementation.
- Compliance with GDPR, HIPAA, SOX, and database governance.
- Leading migration, modernization, and automation initiatives.
SQL Developer/Data Engineer
Posted today
Job Description
Candidates ready to join immediately can share their details via email for quick processing.
CCTC | ECTC | Notice Period | Location Preference
Act fast for immediate attention! ⏳
Key Responsibilities
- Design, develop, and maintain applications and systems for internal business functions.
- Analyze existing programs and design logic for new systems and enhancements.
- Develop system logic, process flow diagrams, and comprehensive technical documentation.
- Write, test, debug, and optimize T-SQL stored procedures, functions, and triggers (see the sketch after this list).
- Design and implement ETL workflows and data warehouse solutions using SSIS, SSRS, and SSAS.
- Develop reports and dashboards to support business decision-making.
- Perform data modeling, database design, and performance tuning.
- Collaborate with cross-functional teams to gather requirements and ensure high-quality deliverables.
- Prepare conversion and implementation plans for new systems.
- Train users during system rollouts and ensure smooth adoption.
- Recommend improvements to development processes, maintenance procedures, and system standards.
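As a hedged, minimal example of the T-SQL development work referenced in the responsibilities above (the connection string, procedure name, parameter, and result columns are hypothetical), calling a stored procedure from Python with pyodbc might look like:

```python
# Hypothetical sketch: execute a T-SQL stored procedure and read its result set.
# Server, database, procedure, and column names are placeholder assumptions.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=SalesDW;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Run a parameterized stored procedure, e.g. monthly sales for one region.
cur.execute("EXEC dbo.usp_GetMonthlySales @Region = ?", "South")
for row in cur.fetchall():
    # Column names depend on the procedure's SELECT list.
    print(row.Month, row.TotalSales)

conn.close()
```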
Core Competencies / Required Skill Set
SQL Server Development
- T-SQL, Stored Procedures, Functions, Triggers
Data Warehousing & ETL
- SSIS (SQL Server Integration Services)
- SSRS (SQL Server Reporting Services)
- SSAS (SQL Server Analysis Services)
Data Management & Design
- Data Modeling and Database Design
- Data Analysis and Visualization
Performance & Optimization
- Performance Tuning and Query Optimization
- Troubleshooting complex SQL queries and system performance issues
Technical Proficiency
- Hands-on experience with MS SQL Server 2012, 2016, and 2019
Data Engineer- Lead Data Engineer
Posted today
Job Description
Role Overview
We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.
As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.
This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.
Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments ecosystem.
Responsibilities
Data Stream Architecture & Development
- Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
- Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
- Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
- Implement comprehensive data quality frameworks covering the 4 C's - Completeness, Consistency, Conformity, and Correctness - ensuring data reliability that supports sound business decisions (see the sketch below)
- Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities
Infrastructure & Platform Management
- Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
- Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
- Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
- Ensure high availability and disaster recovery for critical data systems to maintain business continuity
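A minimal sketch of what a "4 C's" validation step could look like in practice (purely illustrative and not Paytm's actual framework; the field names, sample records, and rules are assumptions):

```python
# Illustrative data-quality checks for a batch of transaction events.
# Field names, sample values, and rules are hypothetical assumptions.
import pandas as pd

df = pd.DataFrame([
    {"txn_id": "T1", "amount": 250.0, "currency": "INR", "status": "SUCCESS"},
    {"txn_id": "T2", "amount": -10.0, "currency": "inr", "status": "SUCCESS"},
    {"txn_id": None, "amount": 99.0, "currency": "INR", "status": "FAILED"},
])

checks = {
    # Completeness: mandatory fields must be present.
    "completeness": df["txn_id"].notna().mean(),
    # Consistency: currency codes use one canonical representation.
    "consistency": (df["currency"] == df["currency"].str.upper()).mean(),
    # Conformity: status values come from the allowed vocabulary.
    "conformity": df["status"].isin({"SUCCESS", "FAILED", "PENDING"}).mean(),
    # Correctness: business rule - amounts must be positive.
    "correctness": (df["amount"] > 0).mean(),
}

for name, score in checks.items():
    print(f"{name}: {score:.0%}")
    if score < 1.0:
        print(f"  ALERT: {name} check failed for some records")
```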
Performance & Optimization
- Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
- Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations
- Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
- Conduct proactive capacity planning and performance tuning to support business growth and data volume increases
Collaboration & Leadership
- Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
- Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
- Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
- Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing
Required Qualifications
Experience & Education
Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
7+ years (Senior) of hands-on data engineering experience
Proven experience with large-scale data processing systems (preferably in fintech/payments domain)
Experience building and maintaining production data streams processing TB/PB scale data with strong performance and reliability standards
Technical Skills & Requirements
Programming Languages:
Expert-level proficiency in both Python and Java; experience with Scala preferred
Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow (a streaming sketch follows this requirements list)
Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services
Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)
Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness
Data Governance: Experience with data lineage tracking, metadata management, and data cataloging
Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL
Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience
Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools
Data Modeling: Dimensional modeling, data vault, or similar methodologies
Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus
Infrastructure as Code: Terraform, CloudFormation (preferred)
Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services
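As a rough, hedged sketch of the kind of streaming ingestion the requirements above describe (the topic name, broker address, and event schema are assumptions, using the kafka-python client):

```python
# Hypothetical sketch: consume transaction events from Kafka and validate them
# before handing them downstream. Topic, broker, and fields are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payments.transactions",
    bootstrap_servers=["broker1:9092"],
    group_id="txn-quality-checker",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

REQUIRED_FIELDS = {"txn_id", "amount", "currency", "timestamp"}

for message in consumer:
    event = message.value
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        # In a real pipeline this would go to a dead-letter topic and raise an alert.
        print(f"Rejected event {event.get('txn_id')}: missing {missing}")
        continue
    print(f"Accepted txn {event['txn_id']} for {event['amount']} {event['currency']}")
```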
Preferred Qualifications
Domain Expertise
- Previous experience in fintech, payments, or banking industry with solid understanding of regulatory compliance and financial data requirements
- Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
- Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection
Advanced Technical Skills (Preferred)
- Experience building automated data quality frameworks covering all 4 C's dimensions
- Knowledge of machine learning stream orchestration (MLflow, Kubeflow)
- Familiarity with data mesh or federated data architecture patterns
- Experience with change data capture (CDC) tools and techniques
Leadership & Soft Skills
- Strong problem-solving abilities with experience debugging complex distributed systems in production environments
- Excellent communication skills with ability to explain technical concepts to diverse stakeholders while highlighting business value
- Experience mentoring team members and leading technical initiatives with focus on building a quality-oriented culture
- Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments
Data Engineer- Senior Data Engineer
Posted today
Job Description
The Role
We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.
What We Need
Agentic AI & LLM Engineering
You should have hands-on experience with:
Multi-agent systems: Building agents that coordinate, communicate, and work together on complex workflows
Agent orchestration: Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions
LLMOps Experience:
- End-to-end LLM lifecycle management: hands-on experience managing the complete LLM workflow from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines.
- Production LLM infrastructure: experience building and maintaining production LLM serving infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.
AI Observability: Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.
Evaluation frameworks: Creating comprehensive testing for agent performance, safety, and goal achievement
LLM inference optimization: Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)
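A minimal sketch of batched offline inference with vLLM, as referenced in the point above (the model identifier and prompts are placeholder assumptions, not part of the original posting):

```python
# Hypothetical sketch: batched LLM inference with vLLM's offline API.
# The model identifier and prompts are placeholders.
from vllm import LLM, SamplingParams

prompts = [
    "Summarize the customer's last three support tickets.",
    "Draft a plan for migrating the billing service to Kubernetes.",
]
params = SamplingParams(temperature=0.2, max_tokens=128)

# vLLM batches these prompts internally (continuous batching) for throughput.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
outputs = llm.generate(prompts, params)

for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```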
Systems Engineering
Strong backend development skills including:
Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows
Distributed systems: Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination
Database strategy: Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management
Web-scale design: Systems handling millions of requests with proper load balancing and fault tolerance
DevOps (Non-negotiable)
Kubernetes: Working knowledge required - deployments, services, cluster management
Containerization: Docker with production optimization and security best practices
CI/CD: Automated testing and deployment pipelines
Infrastructure as Code: Terraform, Helm charts
Monitoring: Prometheus, Grafana for tracking complex agent behaviors
Programming Languages: Java, Python
What You'll Build
You'll architect the infrastructure that powers our autonomous AI systems:
Agent Orchestration Platform: Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.
Evaluation Infrastructure: Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.
Production AI Services: High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.
Training Systems: Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.
Who You Are
You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.
You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.
You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.
Data Engineer – Data & AI
Posted today
Job Description
Summary: The Data Engineer in the Data & AI division is responsible for designing, developing, and maintaining robust data pipelines, ensuring the efficient and secure movement, transformation, and storage of data across business systems. The ideal candidate will support analytics and AI initiatives, enabling data-driven decision-making within the organisation.
Role: Data & AI Data Engineer
Location: Bangalore
Shift timings: General Shift
Roles & Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and AI-driven solutions.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions.
- Optimise data extraction, transformation, and loading (ETL) processes for performance, scalability, and data quality.
- Implement data models, build and maintain data warehouses and lakes, and ensure data security and compliance.
- Monitor data pipeline performance and troubleshoot issues in a timely manner.
- Document data processes, pipelines, and architecture for knowledge sharing and audit purposes.
- Stay updated with industry trends and recommend best practices in data engineering and AI integration.
Must-Have Skills:
- Demonstrated proficiency in SQL and at least one programming language (Python, Java, or Scala).
- Experience with cloud platforms such as Azure, AWS, or Google Cloud (Data Factory, Databricks, Glue, BigQuery, etc.).
- Expertise in building and managing ETL pipelines and workflows.
- Strong understanding of relational and non-relational databases.
- Knowledge of data modelling, data warehousing, and data lake architectures.
- Experience with version control systems (e.g., Git) and CI/CD principles.
- Excellent problem-solving and communication skills.
Preferred skills:
- Experience with big data frameworks (Spark, Hadoop, Kafka, etc.).
- Familiarity with containerisation and orchestration tools (Docker, Kubernetes, Airflow).
- Understanding of data privacy regulations (GDPR, etc.) and data governance practices.
- Exposure to machine learning or AI model deployment pipelines.
- Hands-on experience with reporting and visualisation tools (Power BI, Tableau, etc.).
We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients into the digital age, but our top priority is our positive impact on human experience. We ease anxiety and fear around digital transformation and replace it with opportunity. Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.
About Company: Launch IT India, a wholly owned subsidiary of The Planet Group (a US company), offers attractive compensation and a strong work environment for prospective employees. Launch is an entrepreneurial business and technology consultancy. We help businesses and people navigate from current state to future state. Technology, tenacity, and creativity fuel our solutions, with offices in Bellevue, Sacramento, Dallas, San Francisco, Hyderabad, and Washington D.C.
Data Engineer
Posted today
Job Description
Job Title: Data Engineer
Department: Business Intelligence
Location: Chennai
Position Overview
Develops effective business intelligence solutions for OEC by designing, developing, and deploying systems to support business intelligence, reporting, data warehousing, and integration with enterprise applications. Implements user access controls and data security measures. Assists in the evaluation of the business to support delivery of effective business intelligence and reporting solutions. Maintains documentation of processes, reports, applications, and procedures.
Key Responsibilities
Develops and deploys systems to support business intelligence, reporting, and data warehouse solutions using both Azure and AWS cloud platforms; implements robust user access controls and data security measures across cloud environments.
Owns the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards using tools like Amazon QuickSight and Microsoft Azure BI solutions to drive key business decisions.
Design, build, and maintain scalable data pipelines and ETL processes using Azure and AWS services such as Azure Data Factory, AWS Glue, Amazon Redshift, and S3, ensuring efficient data extraction, transformation, and loading across cloud environments.
Anticipates opportunities for improvement in cross-cloud analytics solutions; outlines and identifies alternate solutions leveraging the strengths of both AWS and Azure platforms and presents to management for final approval and implementation.
Develops and performs system testing across Azure and AWS environments, fixes defects identified during testing; re-executes unit tests to validate results and ensure compatibility between cloud platforms.
Assists in coordination efforts with management and end-users to evaluate business needs and support delivery of effective business intelligence and reporting solutions that leverage the cost-optimization benefits of both Microsoft workloads on AWS and native Azure services.
Develops and tests database scripts, stored procedures, triggers, functions, Azure Data Factory pipelines, AWS Glue jobs, and other back-end processes to support seamless system integration across cloud environments.
Assists with tabular and multi-dimensional modelling in addition to the development of the Enterprise Data Warehouse using both AWS and Azure cloud data warehouse solutions.
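As a hedged illustration of the cross-cloud ETL work described in the responsibilities above (the Glue job name, region, and argument are assumptions), kicking off and monitoring an AWS Glue job from Python might look like this:

```python
# Hypothetical sketch: start an AWS Glue ETL job and poll until it finishes.
# Job name, region, and arguments are placeholder assumptions.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(JobName="oec_sales_etl", Arguments={"--load_date": "2024-01-01"})
run_id = run["JobRunId"]

while True:
    status = glue.get_job_run(JobName="oec_sales_etl", RunId=run_id)["JobRun"]["JobRunState"]
    print("Glue job state:", status)
    if status in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
```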
Skills & Qualifications
- At least 3 years of experience in business intelligence, data engineering and reporting required.
- A bachelor's degree from an accredited college or university is required, with a focus in Information Technology, Computer Science, or related discipline.
- Strong proven track record of developing and testing database scripts, stored procedures, triggers, functions, SSIS Packages, AWS, Azure cloud services and other back-end processes to support system integration.
- Experience using BI/Analytics/Statistical tools like Power BI, Tableau, Business Objects, SSRS, SSAS, and Excel.
- Strong SQL skills, business intelligence tool expertise, AWS Cloud services, and report development skills.
- Knowledge of data modelling, data quality, ETL/SQL server, and SSAS (DAX/MDX) database queries.
- Understanding of configuration, deployment, and database servers.
- Strong writing and verbal communication skills.
- Ability to work collaboratively in a functional team environment.
- Refined analytical thinking and problem-solving skills.
Data Engineer
Posted today
Job Description
We at BEACON Consulting are expanding our core tech team with 2 engineers to build and scale data-driven platforms that process, analyze, and visualize large-scale digital data streams. You'll work at the intersection of data engineering, machine learning, and real-time analytics, creating systems that transform raw digital data into structured, actionable intelligence.
Please note: Immediate joiners are preferred for this role.
Responsibilities
-Build and maintain data pipelines to fetch and process data from multiple APIs (Twitter, YouTube, Meta, etc.)
-Implement and integrate NLP/ML models (Transformers, VADER, OpenAI, Hugging Face, etc.)
-Design and manage interactive dashboards (Looker Studio, Power BI, Streamlit, Tableau)
-Automate reporting and analysis workflows using Python, SQL, and Excel/Google Sheets
-Conduct data validation, anomaly detection, and trend analysis
-Collaborate with internal teams to convert outputs into strategic insights
Required Skills
-Strong in Python (pandas, numpy, requests, matplotlib/plotly)
-Proficient with APIs (REST, OAuth, rate-limit handling); a short sketch follows this list
-Familiar with NLP & ML libraries
-Experience in SQL and database management (BigQuery, MySQL, Postgres)
-Comfortable with Excel/Google Sheets automation
-Skilled in data visualization & dashboarding (Looker Studio, Power BI, Tableau, or Streamlit)
-Good grasp of statistics and ability to interpret trends
Eligibility
-Minimum 1 year of relevant experience in data engineering, analytics, or similar role
-Hands-on experience with APIs (fetching, cleaning, structuring data) is mandatory
-Proficiency in Excel and dashboarding (Looker Studio / Power BI / Tableau) is a must
-Prior exposure to NLP or social media data will be preferred
-Proficiency in Telugu mandatory
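As a small, hedged example of the API-handling skills listed above (the endpoint, token, and response shape are hypothetical), paging through a REST API while respecting rate limits might look like this:

```python
# Hypothetical sketch: page through a REST API while honouring 429 rate limits.
# The URL, token, and response shape are placeholder assumptions.
import time
import requests

BASE_URL = "https://api.example.com/v1/posts"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

def fetch_all(max_pages=5):
    results, page = [], 1
    while page <= max_pages:
        resp = requests.get(BASE_URL, headers=HEADERS, params={"page": page}, timeout=30)
        if resp.status_code == 429:
            # Back off for the period the server asks for, then retry the same page.
            wait = int(resp.headers.get("Retry-After", "5"))
            time.sleep(wait)
            continue
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break
        results.extend(batch)
        page += 1
    return results

if __name__ == "__main__":
    print(f"Fetched {len(fetch_all())} records")
```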
Why Join Us
-Best-in-class pay with performance-linked growth opportunities
-Work on high-impact, large-scale data products in a fast-moving environment
-Continuous learning across data engineering, AI, and applied analytics
-Ownership & visibility: work closely with leadership and see your ideas implemented at scale
-Exposure to cutting-edge tech stacks and real-time problem-solving
Data Engineer
Posted today
Job Description
About the Role
- Your days are dynamic and impactful. You will spearhead GTM programs aimed at driving significant pipeline and revenue growth. Collaborating closely with the Front End, Inside Sales, and Demand Gen teams, you'll harness extensive knowledge of regional execution performance to identify trends and craft strategies.
- Your expertise will support the sales organization in smashing their quarterly and yearly pipeline targets, through meticulous project management and strategy execution.
A Day in the Life
Data Engineering & Warehousing
- Design, build, and optimize ETL/ELT pipelines leveraging Snowflake, Python/SQL, dbt, and Airflow (see the sketch after this list).
- Develop and maintain dimensional data models with an emphasis on quality, governance, and time-series performance tracking.
- Implement real-time monitoring and observability tools to ensure system reliability and alerting for mission-critical data pipelines.
Salesforce & Platform Integrations
- Architect and manage data integrations with Salesforce (SFDC), Jira, HRIS, and various third-party APIs to centralize and operationalize data across platforms.
- Enable efficient data exchange and automation across core operational tools to support reporting, compliance, and analytics needs.
AI Workflows & Agent Platform Engineering
- Design and implement AI-driven workflows using micro-agent platforms such as n8n, Relevance AI, or similar.
- Integrate these platforms with internal systems for automated task execution, decision support, and self-service AI capabilities across operational teams.
- Support development and deployment of AI co-pilots, compliance automation, and intelligent alerting systems.
Collaboration, Enablement & Best Practices
- Collaborate closely with Central Ops, Legal, IT, and Engineering teams to drive automation, compliance, and cross-functional enablement
- Champion documentation, self-service data tools, and training resources to empower internal teams with easy access to data and automation solutions.
- Establish and maintain best practices for scalable, maintainable, and secure data and AI workflow engineering.
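A minimal sketch of the Airflow-plus-dbt orchestration mentioned in the first bullet above (the DAG id, schedule, script path, and dbt project directory are assumptions; `schedule=` assumes Airflow 2.4 or later):

```python
# Hypothetical sketch: a daily Airflow DAG that loads raw data and runs dbt models.
# The dag_id, schedule, and paths are placeholder assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",
    catchup=False,
) as dag:
    load_raw = BashOperator(
        task_id="load_raw_to_snowflake",
        bash_command="python /opt/pipelines/load_sfdc_to_snowflake.py",
    )
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    # dbt models run only after the raw load succeeds.
    load_raw >> run_dbt
```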
What You Need
- 3-5 years of hands-on experience in technical roles involving system integration, automation, or data engineering in SaaS/B2B environments.
- Proven experience with Salesforce (SFDC), including data integration, workflow automation, and API-based solutions.
- Strong proficiency in Python, with practical experience in developing automation scripts, data workflows, and operational tooling.
- Familiarity with data platforms and databases (e.g., Snowflake, Redshift, BigQuery) to support reliable data flow and integration.
- Experience designing or deploying AI workflows using micro-agent platforms such as n8n, Relevance AI, or similar tools.
- Solid understanding of REST APIs, and experience with real-time data orchestration and system integrations.
- Bonus: Exposure to SuperAGI, Slack integrations, Jira, or observability and alerting tools is a plus.
- A proactive, problem-solving mindset, with the ability to work effectively in fast-paced, cross-functional environments.