1775 Data Engineer jobs in Bengaluru

Data Engineer - Lead Data Engineer

Bengaluru, Karnataka Paytm

Posted today


Job Description

Role Overview



We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.

As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.


This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.


Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments infrastructure.


Key Responsibilities


Data Stream Architecture & Development

- Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
- Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
- Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
- Implement comprehensive data quality frameworks covering the 4 C's (Completeness, Consistency, Conformity, and Correctness) to ensure data reliability that supports sound business decisions
- Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities

Infrastructure & Platform Management

- Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
- Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
- Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
- Ensure high availability and disaster recovery for critical data systems to maintain business continuity
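As an illustration of the record-level data quality frameworks described above, a check covering three of the four C's might look like the sketch below (consistency typically requires cross-system comparison, so it is omitted; the schema and field names are invented for illustration, not Paytm's actual data model):

```python
# Hypothetical record-level data-quality check; schema and field names are invented.
REQUIRED_FIELDS = {"txn_id", "amount", "currency", "timestamp"}
VALID_CURRENCIES = {"INR", "USD"}

def check_record(record: dict) -> list:
    """Return a list of data-quality violations for one transaction record."""
    issues = []
    # Completeness: every required field is present and non-null.
    missing = sorted(f for f in REQUIRED_FIELDS if record.get(f) is None)
    if missing:
        issues.append("incomplete: missing " + ", ".join(missing))
    # Conformity: values fall within the expected domain.
    if record.get("currency") not in VALID_CURRENCIES:
        issues.append("nonconforming: unknown currency")
    # Correctness: values satisfy basic business rules.
    amount = record.get("amount")
    if isinstance(amount, (int, float)) and amount <= 0:
        issues.append("incorrect: non-positive amount")
    return issues

good = {"txn_id": "t1", "amount": 250.0, "currency": "INR", "timestamp": "2025-01-01T00:00:00Z"}
bad = {"txn_id": "t2", "amount": -5, "currency": "XYZ", "timestamp": None}
print(check_record(good))  # []
print(check_record(bad))   # three violations, one per failed check
```

In production such logic would run inside the streaming layer (for example as a Spark or Flink operator) and feed the proactive alerting systems mentioned above.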


Performance & Optimization

- Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
- Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations
- Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
- Conduct proactive capacity planning and performance tuning to support business growth and data volume increases


Collaboration & Leadership

- Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
- Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
- Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
- Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing


Required Qualifications


Experience & Education

- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 7+ years of hands-on data engineering experience (senior level)
- Proven experience with large-scale data processing systems (preferably in the fintech/payments domain)
- Experience building and maintaining production data streams processing TB/PB-scale data with strong performance and reliability standards


Technical Skills & Requirements

Programming Languages:

Expert-level proficiency in both Python and Java; experience with Scala preferred


Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow

Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services

Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)

Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness

Data Governance: Experience with data lineage tracking, metadata management, and data cataloging

Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL

Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience

Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools

Data Modeling: Dimensional modeling, data vault, or similar methodologies

Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus

Infrastructure as Code: Terraform, CloudFormation (preferred)

Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services


Preferred Qualifications


Domain Expertise

- Previous experience in the fintech, payments, or banking industry with a solid understanding of regulatory compliance and financial data requirements
- Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
- Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection


Advanced Technical Skills (Preferred)


- Experience building automated data quality frameworks covering all 4 C's dimensions
- Knowledge of machine learning pipeline orchestration (MLflow, Kubeflow)
- Familiarity with data mesh or federated data architecture patterns
- Experience with change data capture (CDC) tools and techniques


Leadership & Soft Skills

- Strong problem-solving abilities with experience debugging complex distributed systems in production environments
- Excellent communication skills with the ability to explain technical concepts to diverse stakeholders while highlighting business value
- Experience mentoring team members and leading technical initiatives with a focus on building a quality-oriented culture
- Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments


What We Offer


- Opportunity to work with cutting-edge technology at scale
- Competitive salary and equity compensation
- Comprehensive health and wellness benefits
- Professional development opportunities and conference attendance
- Flexible working arrangements
- Chance to impact millions of users across India's digital payments ecosystem


Application Process


Interested candidates should submit:

- Updated resume highlighting relevant data engineering experience, with emphasis on real-time systems and data quality
- Portfolio or GitHub profile showcasing data engineering projects, particularly those involving high-throughput streaming systems
- Cover letter explaining your interest in the fintech/payments domain and your understanding of data criticality in financial services
- References from previous technical managers or senior colleagues who can attest to your data quality standards









Data Engineer

Bangalore, Karnataka S&P Global

Posted 3 days ago


Job Description

**About the Role:**
**Grade Level (for internal use):**
10
**The Team** : The Data Engineering team is responsible for architecting, building, and maintaining our evolving data infrastructure, as well as curating and governing the data assets created on our platform. We work closely with various stakeholders to acquire, process, and refine vast datasets, focusing on creating scalable and optimized data pipelines. Our team possesses broad expertise in critical data domains, technology stacks, and architectural patterns. We foster knowledge sharing and collaboration, resulting in a unified strategy and seamless data management.
**The Impact:** This role is foundational to the products we deliver. The data onboarded feeds into our products and platforms and is essential for supporting our advanced analytics and machine learning initiatives.
**What's in it for you:** Be part of a successful team that delivers top-priority projects contributing directly to the company's strategy. Drive testing initiatives, including supporting the automation strategy and performance and security testing. This is the place to enhance your testing skills while adding value to the business. As an experienced member of the team, you will have the opportunity to own and drive a project end to end and to collaborate with developers, business analysts, and product managers who are experts in their domains, helping you build multiple skill sets.
**Responsibilities:**
+ Design, develop, and maintain scalable and efficient data pipelines to process large volumes of data.
+ Implement ETL processes to acquire, validate, and process incoming data from diverse sources.
+ Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand data requirements and translate them into technical solutions.
+ Implement data ingestion, transformation, and integration processes to ensure data quality, accuracy, and consistency.
+ Optimize Spark jobs and data processing workflows for performance, scalability, and reliability.
+ Troubleshoot and resolve issues related to data pipelines, data processing, and performance bottlenecks.
+ Conduct code reviews and provide constructive feedback to junior team members to ensure code quality and best practices adherence.
+ Stay updated with the latest advancements in Spark and related technologies and evaluate their potential for enhancing existing data engineering processes.
+ Develop and maintain documentation, including technical specifications, data models, and system architecture diagrams.
+ Stay abreast of emerging trends and technologies in the data engineering and big data space and propose innovative solutions to enhance data processing capabilities.
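The ETL-focused bullets above follow the classic extract-transform-load shape. A minimal, self-contained sketch is below; the data is invented, and an in-memory SQLite table stands in for a real warehouse:

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV from an upstream source (inlined here for illustration).
RAW = "id,price,qty\n1, 10.5 ,2\n2,3.25,4\n"
rows = list(csv.DictReader(io.StringIO(RAW)))

# Transform: cast types, trim whitespace, derive a per-line total.
cleaned = [
    {
        "id": int(r["id"]),
        "price": float(r["price"].strip()),
        "qty": int(r["qty"]),
        "total": float(r["price"].strip()) * int(r["qty"]),
    }
    for r in rows
]

# Load: insert the validated rows into a warehouse table (SQLite stands in).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, price REAL, qty INTEGER, total REAL)")
db.executemany("INSERT INTO sales VALUES (:id, :price, :qty, :total)", cleaned)
grand_total = db.execute("SELECT SUM(total) FROM sales").fetchone()[0]
print(grand_total)  # 34.0
```

At production scale the same pattern runs as Spark jobs, with the transform step expressed in Spark SQL or DataFrame operations rather than a list comprehension.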
**What We're Looking For:**
+ 5+ Years of experience in Data Engineering or related field
+ Strong experience in Python programming with expertise in building data-intensive applications.
+ Proven hands-on experience with Apache Spark, including Spark Core, Spark SQL, Spark Streaming, and Spark MLlib.
+ Solid understanding of distributed computing concepts, parallel processing, and cluster computing frameworks.
+ Proficiency in data modeling, data warehousing, and ETL techniques.
+ Experience with workflow management platforms, preferably Airflow.
+ Familiarity with big data technologies such as Hadoop, Hive, or HBase.
+ Strong Knowledge of SQL and experience with relational databases.
+ Hands-on experience with the AWS cloud data platform
+ Strong problem-solving and troubleshooting skills, with the ability to analyze complex data engineering issues and provide effective solutions.
+ Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
+ Nice to have: experience with Databricks
**Preferred Qualifications:** Bachelor's degree in Information Technology, Computer Information Systems, Computer Engineering, Computer Science, or other technical discipline
**What's In It For You?**
**Our Purpose:**
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology-the right combination can unlock possibility and change the world.
Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
**Our People:**
We're more than 35,000 strong worldwide-so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.
**Our Values:**
**Integrity, Discovery, Partnership**
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of **integrity** in all we do, bring a spirit of **discovery** to our work, and collaborate in close **partnership** with each other and our customers to achieve shared goals.
**Benefits:**
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you-and your career-need to thrive at S&P Global.
Our benefits include:
+ Health & Wellness: Health care coverage designed for the mind and body.
+ Flexible Downtime: Generous time off helps keep you energized for your time on.
+ Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
+ Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
+ Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families.
+ Beyond the Basics: From retail discounts to referral incentive awards-small perks can make a big difference.
For more information on benefits by country visit:
**Hiring and Opportunity at S&P Global:**
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
**Recruitment Fraud Alert:**
If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to . S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.
---
**Equal Opportunity Employer**
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person.
**US Candidates Only:** The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision.
Middle Professional Tier I (EEO Job Group)
**Job ID:** 315442
**Posted On:** 2025-08-13
**Location:** Ahmedabad, Gujarat, India

Data Engineer

Bengaluru, Karnataka Autodesk

Posted 5 days ago


Job Description

**Job Requisition ID #**
25WD90048
**Position Overview**
We are looking for an exceptional data engineer to transform, optimize, test, and maintain architectures for enterprise analytics databases, data pipelines, and processing systems, and to optimize data flow and collection for cross-functional teams. The mission of the team is to empower decision makers and the broader data community through trusted data assets and scalable self-serve analytics. Your work will focus on engineering new pipelines, creating frameworks, and enhancing existing pipelines with new features to ensure accurate, timely data delivery to stakeholders, as well as supporting ad-hoc reporting requirements that facilitate data-driven, actionable insights and decision making at Autodesk.
**Responsibilities**
- Maintain/develop data pipelines required for the extraction, transformation, cleaning, pre-processing, aggregation and loading of data from a wide variety of data sources into Snowflake or Hive warehouses using Python, SQL, DBT, other data technologies
- Design, implement, test, and maintain data pipelines and new features based on stakeholders' requirements
- Develop/maintain scalable, available, quality assured analytical building blocks/datasets by close coordination with data analysts
- Optimize and maintain workflows and scripts on existing data warehouses and ETL processes
- Design / develop / maintain components of data processing frameworks
- Basic knowledge of LLMs and AI tools such as GitHub Copilot for development purposes
- Build and maintain data quality and durability tracking mechanisms to provide visibility into, and address, inevitable disruptions in data ingestion, processing, and storage
- Translate deeply technical designs into business appropriate representations as well as analyze business needs and requirements ensuring implementation of data services directly correlates to the strategy and growth of the business
- Focus on automation use cases, CI/CD approaches and self-service modules relevant for data domains
- Address questions and concerns from downstream data consumers through appropriate channels
- Create data tools for analytics and BI teams that assist them in building and optimizing our product into an innovative industry leader
- Stay up to date with data engineering best practices, patterns, evaluate and analyze new technologies, capabilities, open-source software in context of our data strategy to ensure we are adapting, extending, or replacing our own core technologies to stay ahead of the industry
- Contribute to Analytics engineering process
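The pipeline-dependency idea behind workflow tools such as Airflow can be sketched with the standard library alone; the task names below are invented, and Airflow itself expresses the same structure with DAG and operator objects:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each task maps to the set of upstream tasks that must finish first.
pipeline = {
    "extract_raw": set(),
    "clean": {"extract_raw"},
    "load_snowflake": {"clean"},
    "dbt_models": {"load_snowflake"},
    "quality_checks": {"dbt_models"},
}

# static_order() yields every task only after all of its dependencies.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
# ['extract_raw', 'clean', 'load_snowflake', 'dbt_models', 'quality_checks']
```

A scheduler built on this ordering runs `extract_raw` first and `quality_checks` last, which is exactly the guarantee Airflow provides per DAG run.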
**Minimum Qualifications**
- Bachelor's degree in computer science, information systems, or a related discipline
- 3+ years in the Data Engineer role
- Built processes supporting data transformation, data structures, metadata, dependency, data quality, and workload management
- Working experience with Snowflake and hands-on experience with Snowflake utilities, SnowSQL, and Snowpipe; must have worked on Snowflake cost optimization scenarios
- Overall solid programming skills, able to write modular, maintainable code, preferably Python & SQL
- Have experience with workflow management solutions like Airflow
- Have experience on Data transformation tools like DBT
- Basic understanding of how to use LLMs for data engineering purposes and tools like GitHub Copilot
- Experience working with Git
- Experience working with big data stack environment, like, Hive, Spark and Presto
- Strong analytical, problem solving and interpersonal skills
- Familiar with Scrum
- Ready to work flexible European hours
**Preferred Qualifications**
- Snowflake
- DBT
- Fivetran
- Airflow
- CI/CD (Jenkins)
- Basic understanding of Power BI
- AWS environment, for example S3, Lambda, Glue, CloudWatch
- Basic understanding of Salesforce
- Experience working with remote teams spread across multiple time-zones
- Strong organizational skills and attention to detail
- Have a hunger to learn and the ability to operate in a self-guided manner
#LI-MR2
**Learn More**
**About Autodesk**
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
**Salary transparency**
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
**Diversity & Belonging**
We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here:
**Are you an existing contractor or consultant with Autodesk?**
Please search for open jobs and apply internally (not on this external site).

Data Engineer

Bangalore, Karnataka NTT America, Inc.

Posted 8 days ago


Job Description

**Req ID:** 335681
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
**Key Responsibilities:**
- Develop data pipeline to integrate data movement tasks from multiple API data sources.
- Ensure data integrity, consistency, and normalization.
- Gather requirements from stakeholders to align with business needs.
- Collaborate with business analysts, data architects, and engineers to design solutions.
- Support ETL (Extract, Transform, Load) processes for data migration and integration.
- Ensure adherence to industry standards, security policies, and data governance frameworks.
- Keep up with industry trends in data modeling, big data, and AI/ML.
- Recommend improvements to data architecture for scalability and efficiency.
- Work with compliance teams to align data models with regulations (GDPR, HIPAA, etc.).
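As an illustration of the API-integration work described in the first responsibility above, a pipeline step that parses records from an API response and normalizes them might look like the following minimal Python sketch (stdlib only; the `normalize_records` helper and the field names are hypothetical, not part of this role's actual stack):

```python
import json

def normalize_records(raw_json: str, defaults: dict) -> list[dict]:
    """Normalize API records: lowercase keys, fill missing fields, enforce types."""
    records = json.loads(raw_json)
    out = []
    for rec in records:
        # Merge defaults under lowercased keys so every row has the same schema
        row = {**defaults, **{k.lower(): v for k, v in rec.items()}}
        row["amount"] = float(row["amount"])  # enforce a numeric amount
        out.append(row)
    return out

# A tiny example payload, as it might arrive from an upstream API
payload = '[{"ID": 1, "Amount": "10.5"}, {"ID": 2, "Amount": "3", "Region": "KA"}]'
rows = normalize_records(payload, {"region": "unknown"})
```

Real pipelines would add retry logic, pagination, and schema validation around a step like this; the point is only that normalization (consistent keys, filled defaults, enforced types) happens before load.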
**Basic Qualifications:**
- 8+ years of experience in professional services or a related field
- 3+ years of experience working with databases such as Oracle, SQL Server, and the Azure cloud data platform
- 3+ years of experience working with SQL tools
- 2+ years of experience working with Azure Data Factory and Python
- 2+ years of experience working with API data integration tasks
**Preferred Qualifications:**
- Proven work experience in Spark/PySpark development
- Knowledge of database structure systems
- Excellent analytical and problem-solving skills
- Understanding of agile methodologies
- Undergraduate or Graduate degree preferred
- Ability to travel at least 25%.
**About NTT DATA**
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form.
**_NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here._**

Data Engineer

Bangalore, Karnataka ThermoFisher Scientific

Posted 11 days ago


Job Description

**Work Schedule**
Standard (Mon-Fri)
**Environmental Conditions**
Office
**Job Description**
**Job Opportunity: Data Engineer**
Are you an ambitious Data Engineer looking to join a world-class team? At CCG IT for Fisher Scientific Research Safety Division (RSD) NA, we are seeking a dedicated individual to successfully implement and manage data solutions that will propel our mission forward!
**Key Responsibilities**
+ Design, develop, and maintain scalable data pipelines and ETL processes
+ Collaborate with cross-functional teams to determine data requirements and deliver outstanding solutions
+ Ensure data integrity and flawless performance of data systems
+ Develop and implement data models and algorithms to optimize data processing
+ Perform data analysis to support business needs and drive decision-making
+ Monitor data system performance and troubleshoot issues as they arise
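For the "ensure data integrity" responsibility above, a minimal sketch of a row-level integrity gate, assuming simple required-column and numeric-format checks (the `validate` function and the column names are hypothetical illustrations):

```python
def validate(rows: list[dict], required: list[str], numeric: list[str]):
    """Split rows into those passing basic integrity checks and rejects."""
    good, bad = [], []
    for r in rows:
        has_required = all(r.get(c) not in (None, "") for c in required)
        # Crude numeric check: digits with at most one decimal point
        is_numeric = all(
            str(r.get(c, "")).replace(".", "", 1).isdigit() for c in numeric
        )
        (good if has_required and is_numeric else bad).append(r)
    return good, bad

good, bad = validate(
    [{"id": "1", "qty": "5"}, {"id": "", "qty": "2"}, {"id": "3", "qty": "x"}],
    required=["id"],
    numeric=["qty"],
)
```

A production system would typically route the rejects to a quarantine table with a reason code rather than silently dropping them.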
**Requirements**
+ Proven experience as a Data Engineer or in a similar role
+ Over 6 years of industry experience, with 3+ years as a data engineer in the supply chain domain.
+ Strong knowledge of SQL, Python, and data warehousing solutions
+ Expertise in data integration, data quality, and data governance
+ Experience with cloud platforms such as AWS, Azure, or Google Cloud
+ Ability to work collaboratively in a team environment and communicate effectively
Why Join Us?
+ Be part of an inclusive, collaborative, and innovative team
+ Work on ambitious projects that drive real-world impact
+ Competitive compensation and benefits package
+ Opportunities for professional growth and development
We look forward to welcoming you to our team as we strive to achieve excellence in everything we do!
Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.

Data Engineer

Bangalore, Karnataka Kyndryl

Posted 11 days ago

Job Viewed

Tap Again To Close

Job Description

**Who We Are**
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
**The Role**
As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.
You will be a technical professional who designs, builds, and manages the infrastructure and systems that enable organizations to collect, process, store, and analyze large volumes of data. You will be the architect and builder of data pipelines, ensuring that data is accessible, reliable, and optimized for various uses, including analytics, machine learning, and business intelligence.
In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset-a true data alchemist.
Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made - and your lifecycle management expertise will ensure our data remains fresh and impactful.
**Key Responsibilities:**
+ **Designing and Building Data Pipelines:** Creating robust, scalable, and efficient ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) pipelines to move data from various sources into data warehouses, data lakes, or other storage systems. Ingest both structured and unstructured data.
+ **Data Storage and Management:** Selecting and managing appropriate data storage solutions (e.g., relational databases, S3, ADLS, and data warehouses such as SQL Server or Databricks).
+ **Data Architecture:** Understand target data models, schemas, and database structures that support business requirements and data analysis needs.
+ **Data Integration:** Connecting disparate data sources, ensuring data consistency and quality across different systems.
+ **Performance Optimization:** Optimizing data processing systems for speed, efficiency, and scalability, often dealing with large source systems datasets.
+ **Data Governance and Security:** Implementing measures for data quality, security, privacy, and compliance with regulations.
+ **Collaboration:** Working closely with Data Scientists, Data Analysts, Business Intelligence Developers, and other stakeholders to understand their data needs and provide them with clean, reliable data.
+ **Automation:** Automating data processes and workflows to reduce manual effort and improve reliability.
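The ETL pattern named in the first bullet above can be sketched in a few lines of Python (stdlib only; the column names and the in-memory `warehouse` sink are illustrative assumptions, not this role's actual toolchain, which centers on SSIS and Azure Data Factory):

```python
import csv
import io

def extract(src: str) -> list[dict]:
    # Extract: read raw CSV text into dict rows
    return list(csv.DictReader(io.StringIO(src)))

def transform(rows: list[dict]) -> list[dict]:
    # Transform: cleanse whitespace/casing, enforce types, drop incomplete rows
    return [
        {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
        for r in rows
        if r["amount"]
    ]

def load(rows: list[dict], sink: list) -> None:
    # Load: append clean rows to the target store
    sink.extend(rows)

warehouse: list[dict] = []
load(transform(extract("user,amount\n Alice ,10\nBob,\n")), warehouse)
```

In SSIS terms, `extract` corresponds to a source component in the Data Flow, `transform` to derived-column and conditional-split components, and `load` to a destination component.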
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
**Who You Are**
You're good at what you do and possess the required experience to prove it. However, equally as important - you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.
**Required Technical and Professional Expertise**
+ 4-6 years of experience as a Data Engineer.
+ ETL/ELT Tools: Experience with data integration tools and platforms like SSIS, Azure Data Factory
+ SSIS Package Development
+ Control Flow: Designing and managing the workflow of ETL processes, including tasks, containers, and precedence constraints.
+ Data Flow: Building pipelines for extracting data from sources, transforming it using various built-in components
+ SQL Server Management Studio (SSMS): For database administration, querying, and managing SSIS packages.
+ SQL Server Data Tools (SSDT) / Visual Studio: The primary IDE for developing SSIS packages.
+ Scripting (C# or VB.NET): For advanced transformations, custom components, or complex logic that cannot be achieved with built-in SSIS components.
+ Programming Languages: Experience with Python, Java, or Scala basics is an advantage
+ Cloud Platforms: Proficiency with cloud data services such as Microsoft Azure (Azure Data Lake, Azure Data Factory)
+ Data Warehousing: Understanding of data warehousing concepts, dimensional modelling, and schema design.
+ Version Control: Familiarity with Git and collaborative development workflows.
**Preferred Technical and Professional Experience**
+ Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
**Being You**
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
**What You Can Expect**
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.
**Get Referred!**
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Kyndryl is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Kyndryl is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

Data Engineer

Bengaluru, Karnataka People Prime Worldwide

Posted 1 day ago


Job Description

About the Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company known for its ethical reputation. They guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve digital challenges, achieving outcomes that benefit both business and society.

About Client:

Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $53.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines.

Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries.


Experience: 5-10 years


Location: Bangalore


Cloud Native Developer Requirements


The team clarified that for the cloud native developer role, hands-on Azure experience is mandatory, while Node.js is a secondary skill. AWS experience is not mandatory for this role.


Data Engineer

Bengaluru, Karnataka Live Connections

Posted 2 days ago


Job Description

Position: Data Engineer – Big Data & Azure

Location: Bangalore, India

Role Purpose:

We are seeking a highly skilled Data Engineer with expertise in Big Data, Python, and PySpark to design, develop, and optimize large-scale data solutions on Azure. The role involves working with complex structured and unstructured datasets to deliver high-quality analytics solutions.

Key Responsibilities:

  • Build and optimize data pipelines for ingestion, transformation, and visualization.
  • Collaborate with data scientists to deploy ML models.
  • Perform data wrangling, cleaning, and aggregation across multiple sources.
  • Manage relational and non-relational data in Azure SQL Data Warehouse & Cosmos DB.
  • Implement high-speed data ingestion and complex parsing (XML, JSON, NLP).
  • Work on Azure Databricks for data preparation and analytics.
  • Develop APIs for data access and integration.
  • Ensure performance tuning and efficient data processing.
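For the "complex parsing (XML, JSON, NLP)" responsibility above, a common preparation step is flattening nested JSON into tabular columns before loading. A minimal stdlib sketch, assuming a dot-notation key convention (the `flatten` helper and the sample document are illustrative, not part of this role's codebase):

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Recursively flatten nested JSON objects into dot-notation keys."""
    flat = {}
    for k, v in obj.items():
        key = f"{prefix}{k}"
        if isinstance(v, dict):
            flat.update(flatten(v, key + "."))  # recurse into nested objects
        else:
            flat[key] = v  # leaf value becomes a column
    return flat

doc = json.loads('{"txn": {"id": 7, "meta": {"city": "Bengaluru"}}, "ok": true}')
row = flatten(doc)
```

In a PySpark pipeline the same idea is usually expressed by selecting nested struct fields into top-level columns; the recursion above shows the shape of the transformation.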

Required Skills & Experience:

  • 6–10 years’ experience in Big Data technologies.
  • Strong knowledge of Python, PySpark, Azure, Hadoop ecosystem (HDFS, Hive, Spark, etc.).
  • Experience with data APIs, REST/SOAP, and web scraping (Scrapy, Beautiful Soup).
  • Expertise in relational databases (MySQL, Oracle) and complex data parsing.
  • Knowledge of NLP tools (Apache Solr, Python) and real-time data streaming.
  • Bachelor’s degree in Computer Science or related field; Azure certifications preferred.

Why Join Us:

  • Work with a diverse, skilled team in a growth-focused environment.
  • Competitive compensation, merit-based recognition, and global exposure.
  • Inclusive, learning-driven culture with emphasis on excellence and innovation.
 
