3772 Database Developers jobs in Bengaluru
SQL Developer/Data Engineer
Posted today
Job Description
Candidates ready to join immediately can share their details via email for quick processing.
CCTC | ECTC | Notice Period | Location Preference
Act fast for immediate attention! ⏳
Key Responsibilities
- Design, develop, and maintain applications and systems for internal business functions.
- Analyze existing programs and design logic for new systems and enhancements.
- Develop system logic, process flow diagrams, and comprehensive technical documentation.
- Write, test, debug, and optimize T-SQL stored procedures, functions, and triggers.
- Design and implement ETL workflows and data warehouse solutions using SSIS, SSRS, and SSAS.
- Develop reports and dashboards to support business decision-making.
- Perform data modeling, database design, and performance tuning.
- Collaborate with cross-functional teams to gather requirements and ensure high-quality deliverables.
- Prepare conversion and implementation plans for new systems.
- Train users during system rollouts and ensure smooth adoption.
- Recommend improvements to development processes, maintenance procedures, and system standards.
Core Competencies / Required Skill Set
SQL Server Development
- T-SQL, Stored Procedures, Functions, Triggers (see the sketch at the end of this section)
Data Warehousing & ETL
- SSIS (SQL Server Integration Services)
- SSRS (SQL Server Reporting Services)
- SSAS (SQL Server Analysis Services)
Data Management & Design
- Data Modeling and Database Design
- Data Analysis and Visualization
Performance & Optimization
- Performance Tuning and Query Optimization
- Troubleshooting complex SQL queries and system performance issues
Technical Proficiency
- Hands-on experience with MS SQL Server 2012, 2016, and 2019
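To ground the T-SQL and stored-procedure items above, here is a minimal, hypothetical sketch of deploying and invoking a parameterised stored procedure from Python via pyodbc. All object names and the connection string are invented for illustration, and CREATE OR ALTER requires SQL Server 2016 SP1 or later.

```python
# Minimal sketch: deploy and call a T-SQL stored procedure via pyodbc.
# All object names and the connection string are hypothetical.
import pyodbc

DDL = """
CREATE OR ALTER PROCEDURE dbo.usp_GetOrderTotals  -- needs SQL Server 2016 SP1+
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT o.OrderId,
           SUM(l.Quantity * l.UnitPrice) AS OrderTotal
    FROM dbo.Orders o
    JOIN dbo.OrderLines l ON l.OrderId = o.OrderId
    WHERE o.CustomerId = @CustomerId
    GROUP BY o.OrderId;
END
"""

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(DDL)          # deploy (or update) the procedure
conn.commit()

cur.execute("{CALL dbo.usp_GetOrderTotals (?)}", 42)  # parameterised call
for row in cur.fetchall():
    print(row.OrderId, row.OrderTotal)
```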
Data Engineer- Lead Data Engineer
Posted today
Job Description
Role Overview
We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.
As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.
This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.
Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments ecosystem.
Responsibilities
Data Stream Architecture & Development
- Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
- Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
- Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
- Implement comprehensive data quality frameworks, ensuring the 4 C's - Completeness, Consistency, Conformity, and Correctness - for data reliability that supports sound business decisions (see the sketch at the end of this section)
- Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities
Infrastructure & Platform Management
- Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
- Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
- Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
- Ensure high availability and disaster recovery for critical data systems to maintain business continuity
Performance & Optimization
- Monitor and optimize streaming performance with focus on latency reduction and operational efficiency
- Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations
- Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
- Conduct proactive capacity planning and performance tuning to support business growth and data volume increases
Collaboration & Leadership
- Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
- Mentor junior data engineers with emphasis on data quality best practices and customer-focused approach
- Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
- Document technical designs, processes, and operational procedures with focus on maintainability and knowledge sharing
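A minimal sketch of the kind of stream described above, assuming a PySpark/Kafka stack (not necessarily Paytm's actual one); the broker, topic, schema, and paths are invented for illustration. Valid records flow to the lake while records failing basic completeness/correctness checks would be quarantined.

```python
# Minimal sketch (illustrative stack): consume transaction events from
# Kafka, enforce simple completeness/correctness checks in-stream, and
# land valid records in the data lake.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
       .option("subscribe", "payments.transactions")       # assumed topic
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema)
                      .alias("e"))
             .select("e.*"))

# Completeness and correctness: keys must be present, amounts positive.
is_valid = (F.col("txn_id").isNotNull()
            & F.col("user_id").isNotNull()
            & (F.col("amount") > 0))

valid = events.filter(is_valid)
# events.filter(~is_valid) would be written to a quarantine path the same way.

(valid.writeStream.format("parquet")
      .option("path", "s3://lake/payments/valid/")         # assumed paths
      .option("checkpointLocation", "s3://lake/_chk/payments/")
      .trigger(processingTime="30 seconds")
      .start()
      .awaitTermination())
```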
Required Qualifications
Experience & Education
- Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
- 7+ years (Senior) of hands-on data engineering experience
- Proven experience with large-scale data processing systems (preferably in fintech/payments domain)
- Experience building and maintaining production data streams processing TB/PB scale data with strong performance and reliability standards
Technical Skills & Requirements
- Programming Languages: Expert-level proficiency in both Python and Java; experience with Scala preferred
- Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow
- Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services
- Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)
- Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness
- Data Governance: Experience with data lineage tracking, metadata management, and data cataloging
- Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL
- Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience
- Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools
- Data Modeling: Dimensional modeling, data vault, or similar methodologies
- Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus
- Infrastructure as Code: Terraform, CloudFormation (preferred)
- Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services
Preferred Qualifications
Domain Expertise
- Previous experience in fintech, payments, or banking industry with solid understanding of regulatory compliance and financial data requirements
- Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
- Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection
Advanced Technical Skills (Preferred)
- Experience building automated data quality frameworks covering all 4 C's dimensions
- Knowledge of machine learning stream orchestration (MLflow, Kubeflow)
- Familiarity with data mesh or federated data architecture patterns
- Experience with change data capture (CDC) tools and techniques
Leadership & Soft Skills
- Strong problem-solving abilities with experience debugging complex distributed systems in production environments
- Excellent communication skills with ability to explain technical concepts to diverse stakeholders while highlighting business value
- Experience mentoring team members and leading technical initiatives with focus on building a quality-oriented culture
- Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments
Data Engineer- Senior Data Engineer
Posted today
Job Description
The Role
We're looking for a senior AI engineer who can build production-grade agentic AI systems. You'll be working at the intersection of cutting-edge AI research and scalable engineering, creating autonomous agents that can reason, plan, and execute complex tasks reliably at scale.
What We Need
Agentic AI & LLM Engineering
You should have hands-on experience with:
Multi-agent systems: Building agents that coordinate, communicate, and work together on complex workflows
Agent orchestration: Designing systems where AI agents can plan multi-step tasks, use tools, and make autonomous decisions
LLMOps experience: End-to-end LLM lifecycle management - hands-on experience managing the complete LLM workflow from prompt engineering and dataset curation through model fine-tuning, evaluation, and deployment. This includes versioning prompts, managing training datasets, orchestrating distributed training jobs, and implementing automated model validation pipelines.
Production LLM infrastructure: Experience building and maintaining production LLM serving infrastructure, including model registries, A/B testing frameworks for comparing model versions, automated rollback mechanisms, and monitoring systems that track model performance, latency, and cost metrics in real time.
AI Observability: Experience implementing comprehensive monitoring and tracing for AI systems, including prompt tracking, model output analysis, cost monitoring, and agent decision-making visibility across complex workflows.
Evaluation frameworks: Creating comprehensive testing for agent performance, safety, and goal achievement
LLM inference optimization: Scaling model serving with techniques like batching, caching, and efficient frameworks (vLLM, TensorRT-LLM)
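As one concrete example of the last point, here is a minimal vLLM sketch; vLLM performs continuous batching and paged KV-cache management internally, and the model name, prompts, and sampling settings are assumptions for illustration.

```python
# Minimal sketch of batched offline inference with vLLM. The model and
# prompts are illustrative; vLLM batches requests internally for throughput.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")   # any HF-format model
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = [
    "Plan the next step for the refund-processing agent: ...",
    "Summarize this tool-call result in one sentence: ...",
]

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```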
Systems Engineering
Strong backend development skills including:
Python expertise: FastAPI, Django, or Flask for building robust APIs that handle agent workflows
Distributed systems: Microservices, event-driven architectures, and message queues (Kafka, RabbitMQ) for agent coordination
Database strategy: Vector databases, traditional SQL/NoSQL, and caching layers optimized for agent state management
Web-scale design: Systems handling millions of requests with proper load balancing and fault tolerance
DevOps (Non-negotiable)
Kubernetes: Working knowledge required - deployments, services, cluster management
Containerization: Docker with production optimization and security best practices
CI/CD: Automated testing and deployment pipelines
Infrastructure as Code: Terraform, Helm charts
Monitoring: Prometheus, Grafana for tracking complex agent behaviors (see the sketch after this list)
Programming Languages: Java, Python
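A minimal sketch of the monitoring item above, using the official Prometheus Python client; the metric names, labels, and port are illustrative assumptions rather than a prescribed setup.

```python
# Minimal sketch: expose custom agent metrics for Prometheus to scrape.
# Metric names, labels, and the port are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

AGENT_STEPS = Counter("agent_steps_total", "Agent steps executed", ["agent"])
STEP_LATENCY = Histogram("agent_step_seconds", "Per-step latency", ["agent"])

start_http_server(9000)   # metrics served at http://localhost:9000/metrics

while True:
    with STEP_LATENCY.labels(agent="planner").time():
        time.sleep(random.uniform(0.05, 0.2))   # stand-in for real agent work
    AGENT_STEPS.labels(agent="planner").inc()
```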
What You'll Build
You'll architect the infrastructure that powers our autonomous AI systems:
Agent Orchestration Platform: Multi-agent coordination systems that handle complex, long-running workflows with proper state management and failure recovery.
Evaluation Infrastructure: Comprehensive frameworks that assess agent performance across goal achievement, efficiency, safety, and decision-making quality.
Production AI Services: High-throughput systems serving millions of users with intelligent resource management and robust fallback mechanisms.
Training Systems: Scalable pipelines for SFT and DPO that continuously improve agent capabilities based on real-world performance and human feedback.
Who You Are
You've spent serious time in production environments building AI systems that actually work. You understand the unique challenges of agentic AI - managing state across long conversations, handling partial failures in multi-step processes, and ensuring agents stay aligned with their intended goals.
You've dealt with the reality that the hardest problems aren't always algorithmic. Sometimes it's about making an agent retry gracefully when an API call fails, or designing an observability layer that catches when an agent starts behaving unexpectedly, or building systems that can scale from handling dozens of agent interactions to millions.
You're excited about the potential of AI agents but pragmatic about the engineering work required to make them reliable in production.
Data Engineer - Data & AI
Posted today
Job Description
Summary: The Data Engineer in the Data & AI division is responsible for designing, developing, and maintaining robust data pipelines, ensuring the efficient and secure movement, transformation, and storage of data across business systems. The ideal candidate will support analytics and AI initiatives, enabling data-driven decision-making within the organisation.
Role: Data & AI Data Engineer
Location: Bangalore
Shift timings: General Shift
Roles & Responsibilities:
- Design, develop, and maintain scalable and reliable data pipelines to support analytics, reporting, and AI-driven solutions.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver appropriate data solutions.
- Optimise data extraction, transformation, and loading (ETL) processes for performance, scalability, and data quality.
- Implement data models, build and maintain data warehouses and lakes, and ensure data security and compliance.
- Monitor data pipeline performance and troubleshoot issues in a timely manner.
- Document data processes, pipelines, and architecture for knowledge sharing and audit purposes.
- Stay updated with industry trends and recommend best practices in data engineering and AI integration.
Must-Have Skills:
- Demonstrated proficiency in SQL and at least one programming language (Python, Java, or Scala).
- Experience with cloud platforms such as Azure, AWS, or Google Cloud (Data Factory, Databricks, Glue, BigQuery, etc.).
- Expertise in building and managing ETL pipelines and workflows.
- Strong understanding of relational and non-relational databases.
- Knowledge of data modelling, data warehousing, and data lake architectures.
- Experience with version control systems (e.g., Git) and CI/CD principles.
- Excellent problem-solving and communication skills.
Preferred skills:
- Experience with big data frameworks (Spark, Hadoop, Kafka, etc.).
- Familiarity with containerisation and orchestration tools (Docker, Kubernetes, Airflow).
- Understanding of data privacy regulations (GDPR, etc.) and data governance practices.
- Exposure to machine learning or AI model deployment pipelines.
- Hands-on experience with reporting and visualisation tools (Power BI, Tableau, etc.).
We are Navigators in the Age of Transformation: We use sophisticated technology to transform clients into the digital age, but our top priority is our positive impact on human experience. We ease anxiety and fear around digital transformation and replace it with opportunity. Launch IT is an equal opportunity employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Launch IT is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation.
About Company: Launch IT India, a wholly owned subsidiary of The Planet Group (a US company), offers attractive compensation and work environment for prospective employees. Launch is an entrepreneurial business and technology consultancy. We help businesses and people navigate from current state to future state. Technology, tenacity, and creativity fuel our solutions, with offices in Bellevue, Sacramento, Dallas, San Francisco, Hyderabad & Washington D.C.
Data Engineer
Posted today
Job Description
Overview
Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, as part of being a distributed-first company.
Responsibilities
Atlassian is looking for a Data Engineer to join our Data Engineering team, responsible for building our data lake, maintaining big data pipelines / services and facilitating the movement of billions of messages each day. We work directly with the business stakeholders, platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services/pipelines that scale.
On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or even come up with ideas to help instigate self-serve data engineering within the company. You will be involved in strategizing measurement, collecting data, and generating insights.
Qualifications
Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you, your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more.
About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together.
We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.
To provide you with the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them.
Data Engineer
Posted today
Job Description
About Position:
We are looking for an experienced and talented Data Engineer- Databricks to join our growing data engineering team.
Role: Data Engineer- Databricks
• Location: Pune and Bangalore
• Experience: 3 Years- 12 Years
• Job Type: Full Time Employment
What You'll Do:
• Design and maintain scalable data pipelines using Databricks and Apache Spark (see the sketch after this list).
• Develop and optimize ETL/ELT processes for structured and unstructured data.
• Implement Lakehouse architecture for efficient data storage, processing, and analytics.
• Manage Databricks Workflows, Jobs API, and AWS/Azure/GCP data services.
• Optimize queries using pushdown capabilities and indexing strategies.
• Ensure data governance with Unity Catalog, security policies, and access controls.
• Troubleshoot and improve Databricks jobs and clusters.
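A minimal sketch of the first bullet above, assuming a Databricks runtime with Delta Lake and Unity Catalog; the catalog, schema, table, column names, and paths are invented for illustration.

```python
# Minimal sketch (assumes a Databricks cluster, where a SparkSession with
# Delta Lake support is available): ingest raw JSON, clean it, and write a
# partitioned, Unity Catalog-governed Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("/Volumes/main/raw/events/")        # assumed landing zone

clean = (raw
         .withColumn("event_date", F.to_date("event_ts")) # assumed column
         .dropDuplicates(["event_id"])
         .filter(F.col("event_id").isNotNull()))

(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("main.analytics.events"))              # Unity Catalog name

# Compact small files for faster downstream queries.
spark.sql("OPTIMIZE main.analytics.events")
```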
Expertise You'll Bring:
• 3+ years of experience in Big Data technologies like Apache Spark, Databricks.
• Strong proficiency in Python or Scala.
• Experience with cloud platforms (AWS, Azure, GCP).
• Knowledge of data warehousing, ETL processes, and SQL.
• Familiarity with CI/CD pipelines, GitHub, and containerization (Docker, Kubernetes).
Benefits:
• Competitive salary and benefits package
• Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
• Opportunity to work with cutting-edge technologies
• Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
• Annual health check-ups
• Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment:
Our client is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
• We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
• Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.
• If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
• Accelerate growth, both professionally and personally
• Impact the world in powerful, positive ways, using the latest technologies
• Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
• Unlock global opportunities to work and learn with the industry's best
For more detail, please contact Vivek Kumar.
Data Engineer
Posted today
Job Description
You will join the team as a Data Engineer, responsible for building and maintaining data pipelines, as well as developing data visualizations that drive company-wide analytics.
A strong understanding of good API design, data pipelining, and a commitment to data security is essential for this role. Proficiency in SQL is a must, as you'll regularly work with complex queries, data transformations, and optimizations to support scalable, high-performance data workflows.
Responsibilities:
Data Pipeline Development
- Design and implement scalable, production-grade data pipelines using PySpark and Python
- Develop ETL/ELT workflows within the AWS ecosystem, leveraging services like AWS Glue, Lambda, and Step Functions.
- Ingest and transform data from a variety of sources, including files, APIs, SQL databases (e.g., PostgreSQL, MySQL), NoSQL databases (e.g., DynamoDB, Cassandra), events, and streaming data
- Write robust unit and integration tests to validate pipeline logic and ensure data quality.
- Monitor, optimize, and troubleshoot pipeline performance, with logging and alerting for failures and delays
Data Lake Table Management (Hudi & Iceberg)
- Work with modern table formats like Apache Hudi and Apache Iceberg to enable incremental processing, upserts, and time-travel queries (sketched below).
- Implement efficient data modelling strategies using these formats to support both batch and streaming data needs
- Optimize table partitioning, compaction, and schema evolution in large-scale data lake environments
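A minimal sketch of a Hudi upsert with PySpark, under the assumption that the cluster includes the Hudi Spark bundle; the table name, key fields, and paths are invented for illustration.

```python
# Minimal sketch: upsert changed rows into an Apache Hudi table.
# Requires the Hudi Spark bundle on the cluster; names/paths are assumed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hudi-upsert")
         .config("spark.serializer",
                 "org.apache.spark.serializer.KryoSerializer")
         .getOrCreate())

updates = spark.read.parquet("s3://lake/staging/orders/")  # new/changed rows

hudi_options = {
    "hoodie.table.name": "orders",
    "hoodie.datasource.write.recordkey.field": "order_id",      # primary key
    "hoodie.datasource.write.precombine.field": "updated_at",   # latest wins
    "hoodie.datasource.write.partitionpath.field": "order_date",
    "hoodie.datasource.write.operation": "upsert",
}

(updates.write.format("hudi")
        .options(**hudi_options)
        .mode("append")          # append mode still deduplicates via upsert
        .save("s3://lake/tables/orders/"))
```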
Data Visualization & Reporting
- Create impactful dashboards and data visualizations using QuickSight, Power BI, Tableau, or similar tools
- Translate complex data into actionable insights for business stakeholders
- Provide support and training to stakeholders on accessing and using analytics tools and data assets
Collaboration & Stakeholder Engagement
- Partner with product, data scientists, and business teams to gather data requirements and deliver integrated solutions
- Translate business logic into efficient data transformations and visual outputs
- Optimize application performance for speed and responsiveness
Data Governance & Infrastructure
- Manage cloud-based data infrastructure (e.g., AWS Glue, Redshift, S3, EMR) ensuring security, reliability, and scalability
- Ensure compliance with data governance policies, privacy regulations, and access control standards
- Maintain proper data documentation, versioning, and lineage tracking.
Deployment & Release Management
- Design, implement, and maintain the build, deployment, and release process for data pipelines using AWS CloudFormation (CFN) or Terraform.
- Collaborate with team members to integrate code changes into CI/CD pipelines and ensure smooth deployment across multiple environments
Debugging, Collaboration & Growth
- Mentor and support junior developers by guiding them on creating and maintaining pipelines.
- Debug and troubleshoot cross-platform issues efficiently
- Collaborate with developers, DevOps, and stakeholders to deliver end-to-end features
- Stay updated on industry trends in the data and generative BI world
Knowledge, Education & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (preferred)
- 3–5 years of experience in data engineering and pipeline development using PySpark and Python, SQL, and NoSQL, with an understanding of modern development patterns
- Understanding of and familiarity with modern concepts such as data lakes, lakehouses, and open table formats
- Strong knowledge of and experience working with complex SQL queries
- Hands-on with AWS or similar cloud platforms; knowledge of cloud architecture (serverless)
- Experience with Git, CI/CD tools
- Basic experience with microservice architecture and web services (RESTful APIs).
- Experience with Redshift will be an added advantage.
- Strong problem-solving, analytical, communication, and collaboration skills
- Familiarity with Agile software development methodologies.
Why Join bswift?
At bswift, we empower our employees to make a meaningful impact, innovate, and grow. Joining our team means stepping into a collaborative and dynamic environment that values creativity, initiative, and a passion for client success. We are dedicated to fostering an inclusive workplace that celebrates diversity and values each team member's unique contributions.
Benefits of Working at bswift:
- Comprehensive Health Benefits: Medical, Accidental and Term Life Insurance coverage to support your wellness and that of your family.
- Competitive Compensation: A compensation package that recognizes your skills, experience, and contributions, including performance-based incentives for most roles.
- Hybrid Work Model: With flexible working hours.
- Retirement Savings Plans: Options like Provident Fund and Gratuity to help you plan for a secure financial future, with employer contribution.
- Professional Development: Opportunities for career growth, including training and access to resources to support your career progression.
- Supportive Culture: A work environment that encourages collaboration, open communication, and creative problem-solving, where your voice and ideas are valued.
- Employee Wellbeing Initiatives: Programs focused on mental health, financial planning, and wellness resources to help you thrive inside and outside of work.
- Make an Impact: At bswift, your work directly contributes to transforming how organizations approach benefits administration and client engagement. Join us to be part of an organization that is making a meaningful difference in the lives of our clients and their employees.
Data Engineer
Posted today
Job Description
Job Summary:
We are seeking a highly skilled Data Engineer to design, build, and optimize scalable data pipelines using Snowflake, AWS services, and Apache Spark. You will be responsible for real-time and batch data ingestion, transformation, and orchestration across cloud platforms.
Notice Period - 0 to 15 days.
Key Responsibilities
- Develop and maintain data pipelines using AWS Glue, Lambda, EMR, and Snowflake.
- Implement real-time ingestion using Snowpipe and Streams for CDC (Change Data Capture), as sketched after this list.
- Write efficient PySpark or Scala Spark jobs for large-scale data processing.
- Automate workflows and orchestrate jobs using AWS Step Functions, Airflow, or similar tools.
- Optimize Snowflake queries and warehouse performance.
- Collaborate with Data Scientists, Analysts, and DevOps teams to deliver reliable data solutions.
- Monitor and troubleshoot data pipeline failures and latency issues.
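To make the Snowpipe/Streams item concrete, here is a minimal sketch using the Snowflake Python connector; the stage, table, pipe, and account identifiers are invented for illustration.

```python
# Minimal sketch: set up auto-ingest via Snowpipe and capture changes with a
# Stream. Stage, table, and account identifiers are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount", user="etl_user", password="...",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# Continuously load files landing in an external stage into a raw table.
cur.execute("""
    CREATE PIPE IF NOT EXISTS raw_events_pipe AUTO_INGEST = TRUE AS
    COPY INTO raw_events FROM @events_stage FILE_FORMAT = (TYPE = 'JSON')
""")

# Track inserts/updates/deletes on the raw table for downstream CDC merges.
cur.execute("CREATE STREAM IF NOT EXISTS raw_events_stream ON TABLE raw_events")

# Downstream jobs would consume the delta, e.g. MERGE INTO curated tables.
cur.execute("SELECT COUNT(*) FROM raw_events_stream")
print(cur.fetchone()[0], "pending change records")
```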
Required Skills
- Strong experience with Snowflake architecture, SQL, and performance tuning.
- Hands-on expertise in AWS Glue, Lambda, S3, EMR, and CloudWatch.
- Proficiency in Apache Spark (PySpark or Scala).
- Familiarity with Snowpipe, Streams, and Tasks in Snowflake.
- Knowledge of CI/CD tools and infrastructure-as-code (Terraform, CloudFormation).
- Experience with version control (Git) and agile development practices.
Data Engineer
Posted today
Job Description
The Data Engineer will be responsible for designing, building, and maintaining data pipelines and infrastructure to support data-driven decision-making. This role requires a strong understanding of data warehousing, ETL processes, and cloud technologies.
Responsibilities:
- Design, develop, and maintain data pipelines for ingesting, processing, and storing large datasets.
- Build and optimize ETL processes to ensure data quality, accuracy, and consistency.
- Develop and maintain data infrastructure, including data warehouses, data lakes, and data marts.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and provide data solutions.
- Monitor and troubleshoot data pipelines and infrastructure to ensure optimal performance and reliability.
- Implement data governance and security best practices.
- Stay up-to-date with the latest data engineering technologies and trends.
Data Engineer
Posted today
Job Description
We are hiring for:
Role: Data Engineer
Experience: 2-4 Years
Location: Bangalore
Mandatory Skills:
Data Engineer, Python or Scala, PySpark, AWS, performance tuning