58,035 Data Professionals jobs in India

Senior Data Engineer / Data Engineer

Gurugram, Haryana Invokhr

Posted today


Job Description

Desired Experience: 3-8 years

Salary: Best-in-industry

Location: Gurgaon (5 days onsite)


Overview:

You will act as a key member of the Data consulting team, working directly with the partners and senior stakeholders of the clients, designing and implementing big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

What is in it for you:

Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

Essential Technical Skills:

Technical expertise with emerging Big Data technologies, such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and visualization tools such as Tableau and Power BI

Experience with cloud, container and microservice infrastructures

Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Hands-on experience with data modelling, query techniques and complexity analysis

Desirable Skills:

Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications on any of the above areas will be preferred.

Your duties will include:

Develop data solutions within Azure and/or other Big Data cloud environments

Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design data architectures using Azure Data Factory, Databricks, Data Lake, Synapse

Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

Data Integration, Transformation, Modelling

Maintaining all relevant documentation and knowledge bases

Research and suggest new database products, services and protocols

Essential Personal Traits:

You should be able to work independently and communicate effectively with remote teams.

Timely communication/escalation of issues/dependencies to higher management.

Curiosity to learn and apply emerging technologies to solve business problems


** Interested candidates, please send their resume on - and **


Data Engineer- Lead Data Engineer

Bengaluru, Karnataka Paytm

Posted today


Job Description

Role Overview



We are seeking an experienced Lead Data Engineer to join our Data Engineering team at Paytm, India's leading digital payments and financial services platform. This is a critical role responsible for designing, building, and maintaining large-scale, real-time data streams that process billions of transactions and user interactions daily. Data accuracy and stream reliability are essential to our operations, as data quality issues can result in financial losses and impact customer trust.

As a Lead Data Engineer at Paytm, you will be responsible for building robust data systems that support India's largest digital payments ecosystem. You'll architect and implement reliable, real-time data streaming solutions where precision and data correctness are fundamental requirements. Your work will directly support millions of users across merchant payments, peer-to-peer transfers, bill payments, and financial services, where data accuracy is crucial for maintaining customer confidence and operational excellence.


This role requires expertise in designing fault-tolerant, scalable data architectures that maintain high uptime standards while processing peak transaction loads during festivals and high-traffic events. We place the highest priority on data quality and system reliability, as our customers depend on accurate, timely information for their financial decisions. You'll collaborate with cross-functional teams including data scientists, product managers, and risk engineers to deliver data solutions that enable real-time fraud detection, personalized recommendations, credit scoring, and regulatory compliance reporting.


Key technical challenges include maintaining data consistency across distributed systems with demanding performance requirements, implementing comprehensive data quality frameworks with real-time validation, optimizing query performance on large datasets, and ensuring complete data lineage and governance across multiple business domains. At Paytm, reliable data streams are fundamental to our operations and our commitment to protecting customers' financial security and maintaining India's digital payments infrastructure.


Key Responsibilities


Data Stream Architecture & Development

Design and implement reliable, scalable data streams handling high-volume transaction data with strong data integrity controls

Build real-time processing systems using modern data engineering frameworks (Java/Python stack) with excellent performance characteristics

Develop robust data ingestion systems from multiple sources with built-in redundancy and monitoring capabilities

Implement comprehensive data quality frameworks, ensuring the 4 C's: Completeness, Consistency, Conformity, and Correctness - ensuring data reliability that supports sound business decisions

Design automated data validation, profiling, and quality monitoring systems with proactive alerting capabilities

Infrastructure & Platform Management

Manage and optimize distributed data processing platforms with high availability requirements to ensure consistent service delivery

Design data lake and data warehouse architectures with appropriate partitioning and indexing strategies for optimal query performance

Implement CI/CD processes for data engineering workflows with comprehensive testing and reliable deployment procedures

Ensure high availability and disaster recovery for critical data systems to maintain business continuity


Performance & Optimization

Monitor and optimize streaming performance with focus on latency reduction and operational efficiency

Implement efficient data storage strategies including compression, partitioning, and lifecycle management with cost considerations

Troubleshoot and resolve complex data streaming issues in production environments with effective response protocols

Conduct proactive capacity planning and performance tuning to support business growth and data volume increases


Collaboration & Leadership

Work closely with data scientists, analysts, and product teams to understand important data requirements and service level expectations

Mentor junior data engineers with emphasis on data quality best practices and customer-focused approach

Participate in architectural reviews and help establish data engineering standards that prioritize reliability and accuracy

Document technical designs, processes, and operational procedures with focus on maintainability and knowledge sharing


Required Qualifications


Experience & Education

Bachelor's or Master's degree in Computer Science, Engineering, or related technical field

7+ years (Senior) of hands-on data engineering experience

Proven experience with large-scale data processing systems (preferably in fintech/payments domain)

Experience building and maintaining production data streams processing TB/PB scale data with strong performance and reliability standards


Technical Skills & Requirements

Programming Languages:

Expert-level proficiency in both Python and Java; experience with Scala preferred


Big Data Technologies: Apache Spark (PySpark, Spark SQL, Spark with Java), Apache Kafka, Apache Airflow

Cloud Platforms: AWS (EMR, Glue, Redshift, S3, Lambda) or equivalent Azure/GCP services

Databases: Strong SQL skills, experience with both relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra, Redis)

Data Quality Management: Deep understanding of the 4 C's framework - Completeness, Consistency, Conformity, and Correctness

Data Governance: Experience with data lineage tracking, metadata management, and data cataloging

Data Formats & Protocols: Parquet, Avro, JSON, REST APIs, GraphQL

Containerization & DevOps: Docker, Kubernetes, Git, GitLab/GitHub with CI/CD pipeline experience

Monitoring & Observability: Experience with Prometheus, Grafana, or similar monitoring tools

Data Modeling: Dimensional modeling, data vault, or similar methodologies

Streaming Technologies: Apache Flink, Kinesis, or Pulsar experience is a plus

Infrastructure as Code: Terraform, CloudFormation (preferred)

Java-specific: Spring Boot, Maven/Gradle, JUnit for building robust data services
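To illustrate the 4 C's framework listed above, a minimal record-level validator might look like the sketch below. This is an illustrative stand-in only: the transaction fields, currency list, and rules are hypothetical, not an actual payments schema.

```python
# Minimal sketch of a 4 C's (Completeness, Consistency, Conformity,
# Correctness) record validator. All field names and rules are hypothetical.

def validate_txn(record):
    """Return a list of (dimension, message) data-quality violations."""
    violations = []

    # Completeness: required fields must be present and non-null
    for field in ("txn_id", "amount", "currency", "timestamp"):
        if record.get(field) is None:
            violations.append(("completeness", f"missing {field}"))

    # Conformity: values must match the expected format/domain
    if record.get("currency") not in {"INR", "USD"}:
        violations.append(("conformity", "unknown currency code"))

    # Correctness: values must be plausible for the business domain
    amount = record.get("amount")
    if amount is not None and amount <= 0:
        violations.append(("correctness", "non-positive amount"))

    # Consistency: related fields must agree with each other
    if record.get("status") == "REFUNDED" and not record.get("original_txn_id"):
        violations.append(("consistency", "refund without original txn"))

    return violations


clean = {"txn_id": "t1", "amount": 250.0, "currency": "INR",
         "timestamp": "2024-01-01T10:00:00", "status": "SUCCESS"}
bad = {"txn_id": "t2", "amount": -5, "currency": "XYZ",
       "timestamp": None, "status": "REFUNDED"}
```

In a streaming setting, a check like this would typically run per record at ingestion, with violations routed to a dead-letter queue and surfaced through the alerting systems described above.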


Preferred Qualifications


Domain Expertise

Previous experience in fintech, payments, or banking industry with solid understanding of regulatory compliance and financial data requirements

Understanding of financial data standards, PCI DSS compliance, and data privacy regulations where compliance is essential for business operations

Experience with real-time fraud detection or risk management systems where data accuracy is crucial for customer protection


Advanced Technical Skills (Preferred)


Experience building automated data quality frameworks covering all 4 C's dimensions

Knowledge of machine learning pipeline orchestration (MLflow, Kubeflow)

Familiarity with data mesh or federated data architecture patterns

Experience with change data capture (CDC) tools and techniques


Leadership & Soft Skills

Strong problem-solving abilities with experience debugging complex distributed systems in production environments

Excellent communication skills with ability to explain technical concepts to diverse stakeholders while highlighting business value

Experience mentoring team members and leading technical initiatives with focus on building a quality-oriented culture

Proven track record of delivering projects successfully in dynamic, fast-paced financial technology environments


What We Offer


Opportunity to work with cutting-edge technology at scale

Competitive salary and equity compensation

Comprehensive health and wellness benefits

Professional development opportunities and conference attendance

Flexible working arrangements

Chance to impact millions of users across India's digital payments ecosystem


Application Process


Interested candidates should submit:

Updated resume highlighting relevant data engineering experience with emphasis on real-time systems and data quality

Portfolio or GitHub profile showcasing data engineering projects, particularly those involving high-throughput streaming systems

Cover letter explaining interest in fintech/payments domain and understanding of data criticality in financial services

References from previous technical managers or senior colleagues who can attest to your data quality standards








Senior Data Engineer / Data Engineer

Kochi, Kerala Invokhr

Posted today


Job Description

Looking for immediate joiners or candidates with a 15-day notice period. This is a work-from-home opportunity.

Position: Senior Data Engineer / Data Engineer

Desired Experience: 3-8 years

Salary: Best-in-industry

You will act as a key member of the Data consulting team, working directly with the partners and senior stakeholders of the clients, designing and implementing big data and analytics solutions. Communication and organisation skills are key for this position, along with a problem-solving attitude.

What is in it for you:

Opportunity to work with a world class team of business consultants and engineers solving some of the most complex business problems by applying data and analytics techniques

Fast track career growth in a highly entrepreneurial work environment

Best-in-industry remuneration package

Essential Technical Skills:

Technical expertise with emerging Big Data technologies, such as Python, Spark, Hadoop, Clojure, Git, SQL and Databricks, and visualization tools such as Tableau and Power BI

Experience with cloud, container and microservice infrastructures

Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Hands-on experience with data modelling, query techniques and complexity analysis

Desirable Skills:

Experience/Knowledge of working in an agile environment and experience with agile methodologies such as Scrum

Experience of working with development teams and product owners to understand their requirements

Certifications on any of the above areas will be preferred.

Your duties will include:

Develop data solutions within Azure and/or other Big Data cloud environments

Working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams

Build and design data architectures using Azure Data Factory, Databricks, Data Lake, Synapse

Liaising with CTO, Product Owners and other Operations teams to deliver engineering roadmaps showing key items such as upgrades, technical refreshes and new versions

Perform data mapping activities to describe source data, target data and the high-level or detailed transformations that need to occur

Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau

Data Integration, Transformation, Modelling

Maintaining all relevant documentation and knowledge bases

Research and suggest new database products, services and protocols

Essential Personal Traits:

You should be able to work independently and communicate effectively with remote teams.

Timely communication/escalation of issues/dependencies to higher management.

Curiosity to learn and apply emerging technologies to solve business problems



Data Engineer/ Senior Data Engineer

Pune, Maharashtra VDart Software Services Pvt. Ltd.

Posted today


Job Description

Job Title

Data Engineer

Local Job Title

Data Engineer

Reports To

BIA Manager

Position Summary:

The BIA Data Engineer designs, implements, and maintains complex data engineering solutions in the Business Intelligence and Analytics team.

Responsible for design, development, implementation, testing, documentation, and support of analytical and data solutions/projects requiring data aggregation, data pipelines, and ETL/ELT from multiple sources into an efficient reporting mechanism or database/data warehouse, using appropriate tools such as Informatica, Azure Data Factory, and SSIS. This includes interacting with the business to gather requirements, analysis, creation of functional and technical specs, testing, training, escalation, and follow-up.

Support of the applications includes resolving issues reported by users, which could be caused by application bugs, user errors, or programming errors. The resolution process will include, but is not limited to, investigating known bugs on the software vendor's support website, creating tickets or service requests with the vendor, developing scripts to fix data issues, making program changes, testing fixes, and applying the changes to production.

These tasks and activities will be completed under the guidance of the supervisor. Participation in team and/or project meetings, to schedule work and discuss status, will be required.

The position also requires staying abreast with changes in technology, programming languages, and software development tools.

Duties

Data Pipeline/ETL (40%): Designs and implements data stores and ETL data flows and data pipelines to connect and prepare operational systems data for analytics and business intelligence (BI) systems.

Support & Operations (10%): Manages production deployments and automation, monitoring, job control and production support. Works with business users to test programs in Development and Quality. Investigates issues using vendor support website(s).

Data Modeling/Designing Datasets (10%): Reviews and understands business requirements for development tasks assigned and applies standard data modelling and design techniques based upon a detailed understanding of requirements.

Data Architecture and Technical Infrastructure (10%): Plans and drives the development of data engineering solutions ensuring that solutions balance functional and non-functional requirements. Monitors application of data standards and architectures including security and compliance.

SDLC Methodology & Project Management (5%): Contributes to technical transitions between development, testing, and production phases of solutions' lifecycle, and the facilitation of the change control, problem management, and communication processes.

Data Governance and Data Quality (5%): Identifies and investigates data quality/integrity problems, determines impact, and provides solutions.

Metadata Management & Documentation (5%): Documents all processes and mappings related to data pipeline work and follows development best practices as adopted by the BIA team.

End-User Support, Education and Enablement (5%): Contributes to training and Data Literacy initiatives within the team and End user community.

Innovation, Continuous Improvement & Optimization (5%): Continuously improves and optimizes existing Data Engineering assets/processes.

Partnership and Community Building (5%): Collaborates with other IT teams, the business community, data scientists and other architects to meet business requirements. Interacts with DBAs on data designs optimal for data engineering solution performance.
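The Data Pipeline/ETL duty above can be illustrated with a minimal extract-transform-load sketch in plain Python. In practice this work would use tools such as Informatica, Azure Data Factory, or SSIS, as the posting notes; the rows, cleaning rule, and target store here are invented purely for the example.

```python
# Minimal ETL sketch: extract rows, transform (clean + aggregate),
# load into a target store. All data and rules are illustrative only.

def extract():
    # In practice this would read from an operational system or files.
    return [
        {"region": "west", "sales": "100"},
        {"region": "west", "sales": "50"},
        {"region": "east", "sales": "bad"},   # dirty row
        {"region": "east", "sales": "70"},
    ]

def transform(rows):
    totals = {}
    for row in rows:
        try:
            amount = float(row["sales"])      # conform types
        except ValueError:
            continue                          # drop rows that fail validation
        totals[row["region"]] = totals.get(row["region"], 0.0) + amount
    return totals

def load(totals, warehouse):
    warehouse.update(totals)                  # upsert into the target table

warehouse = {}
load(transform(extract()), warehouse)
```

The same extract/transform/load separation is what the dedicated tools provide at scale, with scheduling, monitoring, and restartability layered on top.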

VDart: Digital Consulting & Staffing Solutions

VDart is a leading digital consulting and staffing company founded in 2007 and headquartered in Alpharetta, Georgia. VDart is one of the top staffing companies in the USA and also provides technology solutions, supporting businesses in their digital transformation journey.

Core Services:

  • Digital consulting and transformation solutions
  • Comprehensive staffing services (contract, temporary, permanent placements)
  • Talent management and workforce solutions
  • Technology implementation services

Key Industries: VDart primarily serves industries such as automotive and mobility, banking and finance, healthcare and life sciences, and energy and utilities.

Notable Partnership: VDart Digital has been a consistently strong supplier for Toyota since 2017, demonstrating its capability in large-scale digital transformation projects.

With over 17 years of experience, VDart combines staffing expertise with technology solutions to help organizations modernize their operations and build skilled workforce capabilities across multiple sectors.


Data Engineer-Data Warehouse

Kochi, Kerala IBM

Posted 3 days ago


Job Description

**Introduction**
* Intuitive individual with the ability to manage change and proven time management skills
* Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
* Keeps technical knowledge up to date by attending educational workshops and reviewing publications
**Your role and responsibilities**
* Minimum 3 years of experience developing application programs to implement ETL workflows by creating ETL jobs and data models in data marts using Snowflake, DBT, Unix, and SQL technologies.
* Redesign Control-M batch processing so that ETL job builds run efficiently in production.
* Study existing systems to evaluate effectiveness and develop new systems to improve efficiency and workflow.
* Responsibilities:
* Perform requirements identification; conduct business program analysis, testing, and system enhancements while providing production support.
* The developer should have a good understanding of working in an Agile environment and of JIRA and SharePoint tools. Good written and verbal communication skills are a must, as the candidate is expected to work directly with client counterparts.
**Required technical and professional expertise**
* Responsible to develop triggers, functions, stored procedures to support this effort
* Assist with impact analysis of changing upstream processes to Data Warehouse and Reporting systems. Assist with design, testing, support, and debugging of new and existing ETL and reporting processes.
* Perform data profiling and analysis using a variety of tools. Troubleshoot and support production processes. Create and maintain documentation.
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

Data Engineer-Data Platforms

Mumbai, Maharashtra IBM

Posted 3 days ago


Job Description

**Introduction**
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the clients' needs.
* Your primary responsibilities include:
* Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
* Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data driven organization.
* Coordinate data access and security to enable data scientists and analysts to easily access to data whenever they need too
**Required technical and professional expertise**
* Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, and Python
* HBase and Hive; good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git
* Experience developing Python and PySpark programs for data analysis, including using Python to build a custom framework for generating rules (similar to a rules engine)
* Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects for read/write operations
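The "custom framework for generating rules" called for above can be illustrated with a minimal sketch in plain Python. All names here are hypothetical assumptions, not part of the posting; in practice, such a framework would typically compile each rule into a PySpark filter expression rather than iterate over plain dictionaries:

```python
# Minimal sketch of a rules-engine-style framework (illustrative only).
# Rule, apply_rules, and the sample rules are invented names for this example.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    name: str
    # Predicate returns True when a record matches (e.g. violates) the rule.
    predicate: Callable[[dict], bool]

def apply_rules(records: list[dict], rules: list[Rule]) -> list[dict]:
    """Annotate each record with the names of the rules it matches."""
    results = []
    for rec in records:
        matched = [r.name for r in rules if r.predicate(rec)]
        results.append({**rec, "matched_rules": matched})
    return results

# Example rules resembling simple data-quality checks.
rules = [
    Rule("null_id", lambda r: r.get("id") is None),
    Rule("negative_amount", lambda r: r.get("amount", 0) < 0),
]

records = [{"id": 1, "amount": 100}, {"id": None, "amount": -5}]
annotated = apply_rules(records, rules)
```

A PySpark version of the same idea would express each predicate as a `Column` expression and apply it with `DataFrame.filter` or `withColumn`, so the rules run distributed instead of record by record.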
**Preferred technical and professional experience**
* Understanding of DevOps
* Experience in building scalable end-to-end data ingestion and processing solutions
* Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala

Data Engineer-Data Platforms

Navi Mumbai, Maharashtra IBM

Posted 3 days ago

Job Description

**Introduction**
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
* As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that address the client's needs.
* Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.
**Required technical and professional expertise**
* Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, and Python
* HBase and Hive; good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git
* Experience developing Python and PySpark programs for data analysis, including using Python to build a custom framework for generating rules (similar to a rules engine)
* Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects for read/write operations
**Preferred technical and professional experience**
* Understanding of DevOps
* Experience in building scalable end-to-end data ingestion and processing solutions
* Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala


Data Engineer-Data Platforms

Mumbai, Maharashtra IBM

Posted 4 days ago

Job Description

**Introduction**
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that address the client's needs.
* Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security so that data scientists and analysts can easily access data whenever they need to.
**Required technical and professional expertise**
* Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, and Python
* HBase and Hive; good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git
* Experience developing Python and PySpark programs for data analysis, including using Python to build a custom framework for generating rules (similar to a rules engine)
* Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects for read/write operations
**Preferred technical and professional experience**
* Understanding of DevOps
* Experience in building scalable end-to-end data ingestion and processing solutions
* Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
 

Industry

  1. Accounting
  2. Administrative
  3. Agriculture Forestry
  4. AI & Emerging Technologies
  5. Apprenticeships & Trainee
  6. Architecture
  7. Arts & Entertainment
  8. Automotive
  9. Aviation
  10. Banking & Finance
  11. Beauty & Wellness
  12. Catering
  13. Charity & Voluntary
  14. Chemical Engineering
  15. Childcare
  16. Civil Engineering
  17. Cleaning & Sanitation
  18. Community & Social Care
  19. Construction
  20. Creative & Digital
  21. Crypto & Blockchain
  22. Customer Service & Helpdesk
  23. Dental
  24. Driving & Transport
  25. E Commerce & Social Media
  26. Education & Teaching
  27. Electrical Engineering
  28. Energy
  29. FMCG
  30. Government & Non Profit
  31. Graduate
  32. Healthcare
  33. Hospitality & Tourism
  34. Human Resources
  35. Industrial Engineering
  36. Information Security
  37. Installation & Maintenance
  38. Insurance
  39. IT & Software
  40. Legal
  41. Leisure & Sports
  42. Logistics & Warehousing
  43. Management
  44. Management Consultancy
  45. Manufacturing & Production
  46. Marketing
  47. Mechanical Engineering
  48. Media & PR
  49. Medical
  50. Military & Public Safety
  51. Mining
  52. Nursing
  53. Oil & Gas
  54. Pharmaceutical
  55. Project Management
  56. Purchasing
  57. Real Estate
  58. Recruitment Consultancy
  59. Retail
  60. Sales
  61. Scientific Research & Development
  62. Telecoms
  63. Therapy
  64. Veterinary
View All Data Professionals Jobs