19,231 Data Engineer jobs in India

ETL Developer

Aligarh, Uttar Pradesh Insight Global


Job Description

Job Title: ETL Developer

Location: Remote (India)

Type: Contract, approved for 1 year at 40 hours per week, with extension expected beyond a year

Compensation: 18-22 LPA (paid out hourly). 40 hours per week must be dedicated to this role.

Working Hours: 2:30 PM to 10:00 PM IST

Start Date: Immediate (No Notice Period Preferred)


Why is this open?


  • ETL: getting data ready for the MicroStrategy developer
  • Report and dashboard work: modifying existing reports and dashboards and building new ones
  • Creation of metrics, dashboards, and library features



Preferred Skills & Experience

  • Experience with Redshift or other MPP (Massively Parallel Processing) data warehouse platforms.
  • Familiarity with Telecom/Cable MSO data and applications.
  • Proficiency in BI tools such as MicroStrategy and Tableau.
  • Experience with ETL workflow and scheduling tools (e.g., Informatica, One Automation, UC4, Composite).


Job Description

We are seeking a skilled ETL Developer to design, develop, and support business intelligence solutions that transform data into meaningful insights. This role is critical in enabling data-driven decision-making across the organization.


Key Responsibilities

  • Support initiatives aimed at simplifying and enhancing the customer experience.
  • Collaborate with team members and stakeholders to gather and understand business requirements.
  • Partner with IT, Architecture, Business Analysts, and Report Developers to deliver on business objectives.
  • Design, develop, implement, and maintain data integration jobs using Teradata Stored Procedures.
  • Apply best practices and adhere to development standards.
  • Generate ad hoc reports to address business inquiries efficiently.
  • Create and maintain technical documentation for production deployments and ongoing support.
  • Diagnose and resolve data quality and performance issues.
  • Communicate complex data topics to non-technical stakeholders.
  • Provide accurate estimates for development tasks.
  • Perform additional duties as assigned.
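The ad hoc reporting and data integration work described above boils down to aggregating warehouse tables into BI-ready result sets. A minimal sketch, using sqlite3 as a stand-in for the Teradata environment the role actually targets (table and column names are invented):

```python
import sqlite3

# In-memory stand-in for a warehouse table; in the real role this
# would live in Teradata and be populated by stored procedures.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE subscriber_usage (
        region TEXT, product TEXT, usage_gb REAL
    );
    INSERT INTO subscriber_usage VALUES
        ('North', 'Broadband', 120.5),
        ('North', 'TV', 40.0),
        ('South', 'Broadband', 98.25);
""")

# Ad hoc report: total usage per region, ready for a BI dashboard.
rows = conn.execute("""
    SELECT region, SUM(usage_gb) AS total_gb
    FROM subscriber_usage
    GROUP BY region
    ORDER BY region
""").fetchall()
print(rows)  # [('North', 160.5), ('South', 98.25)]
```

The same query shape (group, aggregate, order) is what a MicroStrategy metric or dashboard would consume downstream.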
This advertiser has chosen not to accept applicants from your region.

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, filled the role, or has removed the listing.

However, we have similar jobs available for you below.

Big Data Engineer, Data Modeling

Hyderabad, Telangana data.ai

Posted today


Job Description

What can you tell your friends when they ask you what you do?

We’re looking for an experienced Big Data Engineer who can create innovative new products in the analytics and data space. You will participate in the development that creates the world's #1 mobile app analytics service. Together with the team, you will build out new product features and applications using agile methodologies and open-source technologies. You will work directly with Data Scientists, Data Engineers, Product Managers, and Software Architects, and will be on the front lines of coding new and exciting analytics and data mining products. You should be passionate about what you do and excited to join an entrepreneurial start-up.

To ensure we execute on our values, we are looking for someone who has a passion for what they do.

As a Big Data Engineer, you will be in charge of model implementation and maintenance, building clean, robust, and maintainable data processing programs that can support these projects on huge amounts of data. This includes:

  • Design and implement complex data product components based on requirements, proposing possible technical solutions.
  • Write data programs using Python (e.g., pyspark) with a commitment to maintaining high-quality work while being confident in dealing with data mining challenges.
  • Explore promising new technologies in the Big Data ecosystem (for example, the Hadoop ecosystem) and share them with the team from your professional perspective.
  • Get up to speed in the data science and machine learning domain, implementing analysis components in a distributed computing environment (e.g., MapReduce implementation) with instruction from Data Scientists.
  • Be comfortable conducting detailed discussions with Data Scientists regarding specific questions related to specific data models.
  • You should be a strong problem solver with proven experience in big data.
  • You should recognize yourself in the following…

  • Hands-on experience and deep knowledge of the Hadoop ecosystem.
  • Must: PySpark, MapReduce, HDFS.
  • Plus: Storm, Kafka.
  • Must have 2+ years of Linux environment development experience.
  • Proficient in programming in Python and Scala; experience with Pandas, scikit-learn, or other data science and data analysis toolsets is a big plus.
  • Experience in data pipeline design & automation.
  • A background in data mining, analytics and data science component implementation, and the machine learning domain, with familiarity with common algorithms and libraries, is a plus.
  • Passion for cloud computing (AWS in particular) and distributed systems.
  • You must be a great problem solver with the ability to dive deeply into complex problems and emerge with clear and pragmatic solutions.
  • Good communication and cooperation skills across global teams.
  • Major in Math or Computer Science.
  • You are driven by passion for innovation that pushes us closer to our vision in everything we do. Centering around our purpose and our hunger for new innovations is the foundation that allows us to grow and unlock the potential in AI.
  • You are an Ideal Team Player: You are hungry and no, we are not talking about food here. You are humble, yet love to succeed, especially as a team! You are smart, and not just book smart, you have a great read on people.
  • This position is located in Hyderabad, India.
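The MapReduce-style processing listed in the requirements can be sketched conceptually in plain Python. This is only an illustration of the map / shuffle / reduce pattern; a real implementation would run distributed on Hadoop or Spark, and the record format is invented:

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Emit (key, 1) pairs for each word in an event-log line.
    return [(word, 1) for word in record.split()]

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate all values emitted for one key.
    return key, sum(values)

records = ["app open", "app close", "app open"]
mapped = chain.from_iterable(map_phase(r) for r in records)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'app': 3, 'open': 2, 'close': 1}
```

In PySpark the same logic collapses to `rdd.flatMap(...).reduceByKey(...)`, but the three phases above are what the framework is doing under the hood.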

    We are hiring for our engineering team at our data.ai India subsidiary entity, which is in the process of being established. While we await approval from the Indian government, new hires will be interim employees at Innova Solutions, our Global Employer of Record.


    Data Engineer- Lead Data Engineer

    Bengaluru, Karnataka Paytm

    Posted today


    Job Description

    Role Overview



    We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.

    As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.


    This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.


    Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments infrastructure.


    Key Responsibilities


    Data Stream Architecture & Development

    • Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls
    • Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics
    • Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities
    • Implement comprehensive data quality frameworks ensuring the 4 C's (Completeness, Consistency, Conformity, and Correctness), so that data reliability supports sound business decisions
    • Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities

    Infrastructure & Platform Management

    • Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery
    • Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance
    • Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures
    • Ensure high availability and disaster recovery for critical data systems to maintain business continuity
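The "4 C's" data-quality dimensions named above can be illustrated with a few toy validation checks. This is only a sketch; the transaction fields, rules, and totals are invented for the example:

```python
def check_completeness(txn):
    # Completeness: every required field is present and non-null.
    return all(txn.get(f) is not None for f in ("txn_id", "amount", "currency"))

def check_conformity(txn):
    # Conformity: values match the expected types and formats.
    return isinstance(txn["amount"], (int, float)) and len(txn["currency"]) == 3

def check_correctness(txn):
    # Correctness: values satisfy business rules (payments are positive).
    return txn["amount"] > 0

def check_consistency(txn, ledger_total, expected_total):
    # Consistency: the record agrees with a second system of record.
    return ledger_total + txn["amount"] == expected_total

txn = {"txn_id": "T1", "amount": 250.0, "currency": "INR"}
passed = (check_completeness(txn) and check_conformity(txn)
          and check_correctness(txn) and check_consistency(txn, 1000.0, 1250.0))
print(passed)  # True
```

A production framework (e.g. something like Great Expectations) would express these as declarative rules with profiling and alerting, but each rule reduces to a predicate of this shape.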


    Performance & Optimization

    • Monitor and optimize streaming performance with a focus on latency reduction and operational efficiency
    • Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations
    • Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols
    • Conduct proactive capacity planning and performance tuning to support business growth and data volume increases


    Collaboration & Leadership

    • Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations
    • Mentor junior data engineers with emphasis on data quality best practices and a customer-focused approach
    • Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy
    • Document technical designs, processes, and operational procedures with a focus on maintainability and knowledge sharing


    Required Qualifications


    Experience & Education

    • Bachelor's or Master's degree in Computer Science, Engineering, or related technical field
    • 7+ years (Senior) of hands-on data engineering experience
    • Proven experience with large-scale data processing systems (preferably in the fintech/payments domain)
    • Experience building and maintaining production data streams processing TB/PB-scale data with strong performance and reliability standards


    Technical Skills & Requirements

    Programming Languages: Expert-level proficiency in both Python and Java; experience with Scala preferred


    Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow

    Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services

    Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)

    Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness

    Data Governance: Experience with data lineage tracking, metadata management, and data cataloging

    Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL

    Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience

    Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools

    Data Modeling: Dimensional modeling, data vault, or similar methodologies

    Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus

    Infrastructure as Code: Terraform, CloudFormation (preferred)

    Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services


    Preferred Qualifications


    Domain Expertise

    • Previous experience in the fintech, payments, or banking industry with a solid understanding of regulatory compliance and financial data requirements
    • Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations
    • Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection


    Advanced Technical Skills (Preferred)


    • Experience building automated data quality frameworks covering all 4 C's dimensions
    • Knowledge of machine learning pipeline orchestration (MLflow, Kubeflow)
    • Familiarity with data mesh or federated data architecture patterns
    • Experience with change data capture (CDC) tools and techniques


    Leadership & Soft Skills

    • Strong problem-solving abilities with experience debugging complex distributed systems in production environments
    • Excellent communication skills with the ability to explain technical concepts to diverse stakeholders while highlighting business value
    • Experience mentoring team members and leading technical initiatives with a focus on building a quality-oriented culture
    • Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments


    What We Offer


    • Opportunity to work with cutting-edge technology at scale
    • Competitive salary and equity compensation
    • Comprehensive health and wellness benefits
    • Professional development opportunities and conference attendance
    • Flexible working arrangements
    • Chance to impact millions of users across India's digital payments ecosystem


    Application Process


    Interested candidates should submit:

    Updated resume highlighting relevant data engineering experience with emphasis on real-time systems and data quality

    Portfolio or GitHub profile showcasing data engineering projects, particularly those involving high-throughput streaming systems

    Cover letter explaining interest in fintech/payments domain and understanding of data criticality in financial services

    References from previous technical managers or senior colleagues who can attest to your data quality standards









    Senior Data Engineer / Data Engineer

    Gurugram, Haryana Invokhr

    Posted today


    Job Description

    Desired Experience: 3-8 years

    Salary: Best-in-industry

    Location: Gurgaon (5 days onsite)


    Overview:

    You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders of clients to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

    What is in it for you:

    Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

    Fast track career growth in a highly entrepreneurial work environment

    Best-in-industry remuneration package

    Essential Technical Skills:

    Technical expertise with emerging Big Data technologies, such as: Python, Spark, Hadoop, Clojure, Git, SQL and Databricks; and visualization tools: Tableau and PowerBI

    Experience with cloud, container and micro service infrastructures

    Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

    Hands-on experience with data modelling, query techniques and complexity analysis

    Desirable Skills:

    Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

    Experience of working with development teams and product owners to understand their requirement

    Certifications on any of the above areas will be preferred.

    Your duties will include:

    Develop data solutions within a Big Data Azure and/or other cloud environments

    Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

    Build and design data architectures using Azure Data Factory, Databricks, Data Lake, and Synapse

    Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

    Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur;

    Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

    Data Integration, Transformation, Modelling

    Maintaining all relevant documentation and knowledge bases

    Research and suggest new database products, services and protocols
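The data mapping activity listed in the duties (describing source data, target data, and the transformations between them) can be captured as a simple mapping table. A hypothetical sketch; every field name and transformation here is invented for illustration:

```python
# Source-to-target mapping: each target field names its source field
# and the transformation applied in between.
mapping = {
    "customer_id": {"source": "cust_no",       "transform": str},
    "full_name":   {"source": "name",          "transform": str.title},
    "revenue_gbp": {"source": "revenue_pence", "transform": lambda p: p / 100},
}

def apply_mapping(source_row, mapping):
    # Produce one target row by applying each field's transformation.
    return {target: spec["transform"](source_row[spec["source"]])
            for target, spec in mapping.items()}

source_row = {"cust_no": 42, "name": "ada lovelace", "revenue_pence": 12550}
print(apply_mapping(source_row, mapping))
# {'customer_id': '42', 'full_name': 'Ada Lovelace', 'revenue_gbp': 125.5}
```

In practice the same document drives both the spec handed to stakeholders and the transformation code (e.g. a Databricks notebook), so the two never drift apart.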

    Essential Personal Traits:

    You should be able to work independently and communicate effectively with remote teams.

    Timely communication/escalation of issues/dependencies to higher management.

    Curiosity to learn and apply emerging technologies to solve business problems


    ** Interested candidates please send their resume on - and **


    Senior Data Engineer / Data Engineer

    Kochi, Kerala Invokhr

    Posted today

    Job Viewed

    Tap Again To Close

    Job Description

    Looking for immediate joiners or candidates with a notice period of up to 15 days. This is a work-from-home opportunity.

    Position: Senior Data Engineer / Data Engineer

    Desired Experience: 3-8 years

    Salary: Best-in-industry

    You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders of clients to design and implement big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

    What is in it for you:

    Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

    Fast track career growth in a highly entrepreneurial work environment

    Best-in-industry remuneration package

    Essential Technical Skills:

    Technical expertise with emerging Big Data technologies, such as: Python, Spark, Hadoop, Clojure, Git, SQL and Databricks; and visualization tools: Tableau and PowerBI

    Experience with cloud, container and micro service infrastructures

    Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

    Hands-on experience with data modelling, query techniques and complexity analysis

    Desirable Skills:

    Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

    Experience of working with development teams and product owners to understand their requirements

    Certifications on any of the above areas will be preferred.

    Your duties will include:

    Develop data solutions within a Big Data Azure and/or other cloud environments

    Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

    Build and design data architectures using Azure Data Factory, Databricks, Data Lake, and Synapse

    Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

    Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur

    Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

    Data Integration, Transformation, Modelling

    Maintaining all relevant documentation and knowledge bases

    Research and suggest new database products, services and protocols

    Essential Personal Traits:

    You should be able to work independently and communicate effectively with remote teams.

    Timely communication/escalation of issues/dependencies to higher management.

    Curiosity to learn and apply emerging technologies to solve business problems


    Data Engineer

    Bangalore, Karnataka Thermo Fisher Scientific

    Posted 1 day ago


    Job Description

    **Work Schedule**
    Standard (Mon-Fri)
    **Environmental Conditions**
    Office
    **Job Description**
    **Job Summary:**
    We are seeking a skilled and detail-oriented **Data Engineer** to join our data team. The ideal candidate will be responsible for building and maintaining scalable data pipelines, extracting data from diverse sources including APIs, databases, and flat files, and ensuring high data quality and reliability. You will work closely with analysts, data scientists, and engineers to power data-driven decision-making across the organization.
    **Key Responsibilities:**
    + Design, develop, and maintain scalable and robust data pipelines for both batch and real-time processing.
    + Extract, transform, and load (ETL) data from a wide variety of structured and unstructured data sources including:
    + RESTful and SOAP APIs
    + Databases (SQL, NoSQL)
    + Cloud storage (e.g., S3, Google Cloud Storage)
    + File formats (e.g., JSON, CSV, XML, Parquet)
    + Web scraping tools where appropriate
    + Build reusable data connectors and integration solutions to automate data ingestion.
    + Collaborate with internal stakeholders to understand data requirements and ensure accessibility and usability.
    + Monitor and optimize pipeline performance and troubleshoot data flow issues.
    + Ensure data governance, security, and quality standards are applied across all pipelines.
    + Experience with data manipulation and analysis libraries such as Pandas, Polars, or Dask for handling large datasets efficiently.
    + Design and create data flow and architecture diagrams to visually represent data pipelines, system integrations, and data models, ensuring clarity and alignment among technical and non-technical stakeholders.
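The extract/transform/load flow described in the responsibilities above can be sketched end to end with the standard library alone. The CSV source, schema, and validation rule here are made up for illustration:

```python
import csv
import io
import sqlite3

# Extract: a flat-file source (in reality this might come from an API,
# cloud storage, or a database export).
raw_csv = "sample_id,reading\nS1,4.2\nS2,5.1\nS3,bad\n"
rows = list(csv.DictReader(io.StringIO(raw_csv)))

# Transform: cast types and drop records that fail validation.
clean = []
for r in rows:
    try:
        clean.append((r["sample_id"], float(r["reading"])))
    except ValueError:
        pass  # a real pipeline would route this to a dead-letter store

# Load: write the validated records to the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sample_id TEXT, reading REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)", clean)
loaded = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(loaded)  # 2
```

Tools like Airflow wrap each of these three stages in a task with retries and monitoring, but the extract/transform/load contract stays the same.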
    **Requirements:**
    **Technical Skills:**
    + Proficiency in SQL and at least one programming language (Python, Java, Scala).
    + Experience with data pipeline and workflow tools (e.g., Apache Airflow, AWS Data Pipeline).
    + Knowledge of relational and non-relational databases (e.g., Oracle, SQL Server, MongoDB).
    + Strong data modeling and data warehousing skills.
    **Education & Experience:**
    + Bachelor's degree in Computer Science, Engineering, Information Systems, or related field (Master's a plus).
    + 5+ years of experience in a data engineering or similar role.
    **Soft Skills:**
    + Strong analytical and problem-solving abilities.
    + Excellent communication and collaboration skills.
    + Detail-oriented and proactive mindset.
    Thermo Fisher Scientific is an EEO/Affirmative Action Employer and does not discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other legally protected status.
    This advertiser has chosen not to accept applicants from your region.

    Data Engineer

    Bangalore, Karnataka NTT America, Inc.

    Posted 2 days ago


    Job Description

    **Req ID:** 338011
    NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
    We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
    **Key Skills & Competencies**
    + Advanced SQL development (joins, CTEs, window functions, optimization)
    + Experience with ETL/ELT processes and tools
    + Data modeling (dimensional and normalized)
    + Familiarity with version control (e.g., Git) and CI/CD practices
    + Understanding of DWH architectures and data integration patterns
    + Ability to work with large datasets and performance-tune queries
    + Platform-agnostic mindset with readiness to adapt to Azure or AWS
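The "advanced SQL" features listed above (CTEs, window functions) can be sketched with Python's built-in `sqlite3` module; the table and values are made up for illustration, and window functions require SQLite 3.25 or later:

```python
import sqlite3

# In-memory database with a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('ana', 'east', 100), ('bo', 'east', 300), ('ana', 'west', 50);
""")

query = """
WITH per_rep AS (                                 -- CTE: aggregate first
  SELECT rep, SUM(amount) AS total
  FROM sales
  GROUP BY rep
)
SELECT rep, total,
       RANK() OVER (ORDER BY total DESC) AS rnk   -- window function
FROM per_rep
ORDER BY rnk
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('bo', 300, 1), ('ana', 150, 2)]
```

The same CTE-plus-window pattern carries over to warehouse engines such as Redshift or Snowflake with little change.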
    **About NTT DATA**
    NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact-us form.
    **_NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status._**

    Data Engineer

    CAI

    Posted 2 days ago


    Job Description

    Data Engineer
    **Req number:**
    R6008
    **Employment type:**
    Full time
    **Worksite flexibility:**
    Remote
    **Who we are**
    CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
    **Job Summary**
    As a Data Engineer, you will build data products using Databricks and related technologies.
    **Job Description**
    We are seeking a motivated **Data Engineer** who has experience building data products using Databricks and related technologies. This is a **Full-time** and **Remote** position.
    **What You'll Do**
    + Analyze and understand existing data warehouse implementations to support migration and consolidation efforts
    + Reverse-engineer legacy stored procedures (PL/SQL, SQL) and translate business logic into scalable Spark SQL code within Databricks notebooks
    + Design and develop data lake solutions on AWS using S3 and Delta Lake architecture, leveraging Databricks for processing and transformation
    + Build and maintain robust data pipelines using ETL tools with ingestion into S3 and processing in Databricks
    + Collaborate with data architects to implement ingestion and transformation frameworks aligned with enterprise standards
    + Evaluate and optimize data models (Star, Snowflake, Flattened) for performance and scalability in the new platform
    + Document ETL processes, data flows, and transformation logic to ensure transparency and maintainability
    + Perform foundational data administration tasks including job scheduling, error troubleshooting, performance tuning, and backup coordination
    + Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming
    + Triage, debug and fix technical issues related to Data Lakes
    + Maintain and manage Code repositories like Git
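A core pattern in the Delta Lake work described above is the idempotent upsert. The sketch below shows the MERGE semantics in plain Python (the names and records are hypothetical); in Databricks this would be a Spark SQL `MERGE INTO` against a Delta table rather than a dictionary:

```python
def merge_upsert(target: dict, updates: list[dict], key: str) -> dict:
    """Toy MERGE: matched keys are updated, unmatched keys are inserted."""
    merged = dict(target)                # leave the original target untouched
    for row in updates:
        merged[row[key]] = row           # matched -> update; not matched -> insert
    return merged

# Hypothetical target "table" keyed by customer_id, plus an incoming batch.
target = {"C1": {"customer_id": "C1", "tier": "gold"}}
updates = [
    {"customer_id": "C1", "tier": "platinum"},   # existing key: update
    {"customer_id": "C2", "tier": "silver"},     # new key: insert
]
result = merge_upsert(target, updates, key="customer_id")
print(sorted(result))  # ['C1', 'C2']
```

Because re-applying the same batch yields the same result, pipelines built on this pattern can be safely retried after a failure.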
    **What You'll Need**
    Required:
    + 5+ years of experience working with **Databricks**, including Spark SQL and Delta Lake implementations
    + 3+ years of experience designing and implementing data lake architectures on Databricks
    + Strong SQL and PL/SQL skills with the ability to interpret and refactor legacy stored procedures
    + Hands-on experience with data modeling and warehouse design principles
    + Proficiency in at least one programming language (Python, Scala, Java)
    + Bachelor's degree in Computer Science, Information Technology, Data Engineering, or related field
    + Experience working in Agile environments and contributing to iterative development cycles
    + Exposure to enterprise data governance and metadata management practices
    Preferred:
    + Databricks cloud certification is a big plus
    **Physical Demands**
    + This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc.
    + Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor
    **Reasonable accommodation statement**
    If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824 - 8111.

    Data Engineer

    Pune, Maharashtra Red Hat

    Posted 3 days ago


    Job Description

    The Data Engineer exercises judgment when following general instructions and works with minimal instruction to support the integration and automation of data solutions. This role focuses on data massaging, reconciliation, and analysis, resolving routine to semi-routine issues. Responsibilities include creating optimized SQL queries, managing data pipelines, and collaborating with cross-functional teams to ensure data accuracy and availability.
    **What will you do:**
    + Write optimized and scalable complex SQL queries.
    + Automate data processing tasks using Python, focusing on cleaning and merging datasets.
    + Manage data pipelines, including scheduling, monitoring, and debugging workflows.
    + Collaborate with data engineers and IT teams to maintain data accessibility for stakeholders.
    + Assist in developing automated tests to ensure the accuracy and integrity of data.
    + Participate in version control and CI/CD processes for deploying and testing pipeline changes across environments.
    + Work cross-functionally with analysts, engineers, and operations.
    + Provide data stewardship, including data governance, compliance, transformation, cleanliness, validation, and audit/maintenance.
    + Write complex, highly optimized SQL queries across large datasets; tune queries and provide tuning recommendations.
    + Perform data analytics using Python libraries such as NumPy and Pandas.
    + Develop Python code to massage and clean data and to automate data extracts and loads.
    + Convert raw data to processed data by merging datasets and identifying outliers, errors, trends, missing values, and distributions.
    + Create, debug, schedule, and monitor jobs using Airflow, and resolve performance-tuning issues.
    + Foster collaboration among data engineers, IT, and other business groups to ensure data is accessible to the FP&A team.
    + Schedule and support regular hot backup processes.
    + Apply strong analytical and problem-solving skills, including the ability to represent complex algorithms in software.
    + Develop automated unit tests, end-to-end tests, and integration tests to assist in quality assurance (QA) procedures.
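The cleaning work described above (missing values, outliers) can be sketched with pandas; the dataset and the 3-MAD outlier threshold are illustrative choices, not a prescribed rule:

```python
import numpy as np
import pandas as pd

# Hypothetical raw dataset with the kinds of defects listed above:
# a missing value and an obvious outlier.
raw = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "amount": [100.0, np.nan, 95.0, 10_000.0],
})

# Fill missing values with the column median.
cleaned = raw.assign(amount=raw["amount"].fillna(raw["amount"].median()))

# Flag values more than 3 median absolute deviations from the median
# (a simple, robust rule; real pipelines would tune this per dataset).
med = cleaned["amount"].median()
mad = (cleaned["amount"] - med).abs().median()
cleaned["is_outlier"] = (cleaned["amount"] - med).abs() > 3 * mad
print(cleaned)
```

In a production pipeline, steps like these would run inside a scheduled Airflow task with automated tests asserting the same invariants (no nulls remain, outliers are flagged).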
    **What will you bring:**
    + Bachelor's or Master's degree in Computer Science, IT, Engineering or equivalent
    + 5+ years of experience as a Data Engineer, BI Engineer, Systems Analyst in a company with large, complex data sources
    + Working knowledge of DBT, Snowflake, Fivetran, Git and SQL or Python programming skills for data querying, cleaning, and presentation
    + Experience building highly available, reliable, and secure API solutions, including REST API design and implementation
    + Working knowledge of relational databases (PostgreSQL, MSSQL, etc.), experience with AWS services including S3, Redshift, EMR and RDS
    + Ability to manage multiple projects at the same time in a fast-paced team environment, across time zones, and with different cultures, while maintaining ability to work as part of a team
    + Good troubleshooting skills, with the ability to reason through issues and problems logically; planning knowledge is an added advantage
    + Detail-oriented and enthusiastic, with a focus on delivering results
    **About Red Hat**
    Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
    **Inclusion at Red Hat**
    Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
    **Equal Opportunity Policy (EEO)**
    Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
    **Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.**
    **Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email us. General inquiries, such as those regarding the status of a job application, will not receive a reply.**
     
