2,973 Consultant ETL Developer jobs in India

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, may have filled the role, or may have removed the listing.

However, we have similar jobs available for you below.

Principal Consultant, DB ETL Developer

Bengaluru, Karnataka Genpact

Posted today


Job Description

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant, DB ETL Developer

In this role, you will be responsible for coding, testing, and delivering high-quality deliverables, and you should be willing to learn new technologies.

Responsibilities

· Design, code, and maintain databases, ensuring their stability, reliability, and performance.

· Research and suggest new database products, services, and protocols.

· Ensure all database programs meet company and performance requirements.

· Collaborate with other database teams and with the owners of different applications.

· Modify databases according to requests and perform tests.

· Maintain and own databases in all environments.

Qualifications we seek in you!

Minimum Qualifications

· BE/BTech/MCA

· Excellent written and verbal communication skills

Preferred Qualifications / Skills

· A bachelor's degree in Computer Science or a related field.

· Hands-on development experience with Sybase, DB2, and ETL technologies.

· Extensive experience in data integration, including designing and developing reusable interfaces.

· Advanced experience with Sybase, shell scripting, Unix, database design and modelling, and ETL technologies such as Informatica.

· Hands-on experience with Snowflake or Informatica, including:

o Expertise in Snowflake data modelling and ELT using Snowflake SQL, implementing complex stored procedures and applying data warehouse and ETL best practices.

o Designing, implementing, and testing cloud computing solutions using Snowflake technology.

o Expertise in advanced Snowflake features - resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, zero-copy cloning, and Time Travel - and knowing when to use them (see the sketch after this list).

o Creating, monitoring, and optimizing ETL/ELT processes (Talend, Informatica) and migrating solutions from on-premises to public cloud platforms.

· Expert-level understanding of data warehousing, core database concepts, and relational database design.

· Skilled at writing and editing large, complex SQL statements.

· Experience writing stored procedures, optimization, and performance tuning.

· Strong technology acumen and a deep strategic mindset.

· Proven track record of delivering results.

· Proven analytical skills and experience making decisions based on hard and soft data.

· A desire and openness to learning and continuous improvement, both for yourself and your team members.

· Exposure to SDLC tools such as JIRA, Confluence, SVN, TeamCity, Jenkins, Nolio, and Crucible.

· Experience with DevOps, CI/CD, and Agile methodology.

· Experience with Business Intelligence tools is good to have.

· Familiarity with Postgres and Python is a plus.
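To make the advanced Snowflake items above concrete, here is a minimal sketch (not production code) exercising zero-copy cloning, Time Travel, a resource monitor, and warehouse sizing via snowflake-connector-python; the account, warehouse, and table names are hypothetical, and creating resource monitors normally requires the ACCOUNTADMIN role.

```python
import snowflake.connector

# Hypothetical connection parameters; replace with real account details.
conn = snowflake.connector.connect(
    account="my_account", user="etl_dev", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Zero-copy clone: instant copy that shares storage with the source table.
cur.execute("CREATE TABLE orders_dev CLONE orders")

# Time Travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())

# Resource monitor: suspend the warehouse when 90% of the credit quota is used.
cur.execute("""
    CREATE RESOURCE MONITOR etl_monitor
    WITH CREDIT_QUOTA = 100
    TRIGGERS ON 90 PERCENT DO SUSPEND
""")
cur.execute("ALTER WAREHOUSE ETL_WH SET RESOURCE_MONITOR = etl_monitor")

# Warehouse sizing: scale up for a heavy backfill, then scale back down later.
cur.execute("ALTER WAREHOUSE ETL_WH SET WAREHOUSE_SIZE = 'LARGE'")
```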


Data Migration

Hyderabad, Andhra Pradesh Cognitus Consulting

Posted today


Job Description

• Strong management skills, with a proven ability to drive the business to complete mock loads, data cleansing, and transformation-rule creation and tracking, and to execute to the plan.
• 4 to 8 years of experience in data conversion/migration and master data management in SAP ECC is required.
• Strong communication skills, with an understanding of how to navigate external resources and of the limitations and expectations of data migrations.
• The ability to understand SAP-implemented business solutions, disparate and ancillary systems, and upstream/downstream effects.
• Expertise in Material Management, including configuration, master and transaction data, and procurement scenarios.
• Interact with multiple teams (functional, development, client business, and others) to identify gaps.
• Experience with the entire data migration process, especially the global template model, and the ability to load data independently.
• Experience migrating data using staging tables and direct transfer in the SAP S/4HANA Legacy Transfer Migration Cockpit (LTMC).
• Experience creating user-defined migration objects.
• Migrate various data objects into the S/4HANA system across the functional modules.
• Responsible for pre-load and post-load data validation (a minimal validation sketch follows this list).
• Troubleshoot data migration issues.
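As a rough illustration of what pre-load/post-load validation can look like outside of SAP tooling, the sketch below reconciles record counts, key coverage, and one attribute between a source extract and a post-load extract; the file names and the MATNR/MTART columns are assumed for the example.

```python
import pandas as pd

# Hypothetical extracts: legacy-system dump and post-load dump from S/4HANA.
source = pd.read_csv("material_master_source.csv")
loaded = pd.read_csv("material_master_loaded.csv")

key = "MATNR"  # assumed business key (material number)

# Record-count reconciliation.
print(f"source rows: {len(source)}, loaded rows: {len(loaded)}")

# Keys present in the source but missing after the load.
missing = set(source[key]) - set(loaded[key])
print(f"missing keys: {len(missing)}")

# Field-level spot check on a shared attribute, joined on the key.
merged = source.merge(loaded, on=key, suffixes=("_src", "_tgt"))
mismatches = merged[merged["MTART_src"] != merged["MTART_tgt"]]  # material type
print(f"material-type mismatches: {len(mismatches)}")
```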
Education Qualification

• Any bachelor's or master's degree in Computer/IT Engineering or a related field

Competencies

• Must have migration skills
• Must have experience with S/4HANA
• Master data management experience

Certificates

• SAP Certified Technology Associate - OS/DB Migration for SAP NetWeaver 7.52

Skills

• Must have good data migration skills
• Good experience with the S/4HANA database
• Must have experience with other SAP modules


Data Migration

    Pune, Maharashtra Vertiv

    Posted today


    Job Description

Vertiv, a global organization with nearly 27,000 employees, designs, builds, and services critical infrastructure that enables vital applications for data centers, communication networks, and commercial and industrial facilities. We support today's growing mobile and cloud computing markets with a portfolio of power, thermal, and infrastructure management solutions.

Job Summary
The Data Migration – Item MDM role will manage the extract, transform, and load (ETL) of item-related data to and from Oracle PD/PIM and Oracle EBS.

Responsibilities:

• Work as part of the Data Migration team on ETL activities in:
  1. Oracle Fusion Cloud PD and PIM: updating item attributes and BOMs; loading new item, document, attachment, and BOM information.
  2. Oracle EBS: migrating all master and transactional data; updating item attribution and BOM information.
• Previous experience with data migration of item-related data in Oracle PD/PIM or Oracle EBS is a must.
• Adhere to a data migration strategy and use the designated data migration tools.
• Identify risks and issues in a timely manner and escalate them for resolution as needed.
• Manage data quality across the phases of the data migration and ensure that data is fit for purpose.
• Knowledge of Fusion data migration tools, including FBDI/HDL/ADFDI and Fusion Web Services.
• Work collaboratively to ensure data is cleansed in a timely manner.
• Substantial experience working with databases and with ETL tools capable of data cleansing.
• Perform data migration audits, reconciliation, and exception reporting.
• Work with subject matter experts and the project team to identify, define, collate, document, and communicate the data migration requirements.
• Work across multiple functional work streams to understand data usage and its implications for data migration.
• Support initiatives for data integrity and governance. Perform source data identification and analysis to manage source-to-target data mapping (a small mapping sketch follows this list).
• Manage master and transactional data, including creation, updates, and deletion.
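To make source-to-target mapping concrete, here is a small, assumption-heavy sketch: a declarative field map from a hypothetical legacy item extract to PIM-style target attribute names, applied record by record before handing the result to a load tool such as FBDI.

```python
import csv

# Hypothetical mapping: legacy column -> (target attribute, value transform).
FIELD_MAP = {
    "ITEM_NO":   ("ItemNumber",   str.strip),
    "ITEM_DESC": ("Description",  lambda v: v.strip().capitalize()),
    "UOM_CD":    ("PrimaryUOM",   str.upper),
    "ORG":       ("Organization", str.strip),
}

def transform(row: dict) -> dict:
    """Map one legacy record onto the target layout."""
    return {target: fn(row[src]) for src, (target, fn) in FIELD_MAP.items()}

with open("legacy_items.csv", newline="") as f:  # assumed legacy extract
    records = [transform(r) for r in csv.DictReader(f)]

print(records[:2])  # spot-check a couple of mapped records
```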

Requirements:

• Bachelor's degree in Information Technology, Process Management, or a related field, or equivalent experience.
• At least 4 years of combined experience in item/product data migration, specifically extract, transform, and load.
• 2+ years of experience in Oracle Fusion and Oracle EBS data migration roles.
• Business Knowledge: Demonstrates strong knowledge of current and possible future policies, practices, trends, technology, and information related to the business and the organization.
• Communication: Demonstrates excellent listening and communication skills (written and verbal).
• Initiative: Works independently and is highly motivated to initiate and accept new challenges.
• Judgment/Decision Making: Makes solid decisions based on a mixture of analysis, wisdom, experience, and judgment.
• Managing & Adapting to Change: Readily adapts to changes in the priority of initiatives and in overall strategic direction within a multi-plant, geographically widespread organization.
• Professionalism: Exhibits appropriate attributes in all aspects of performance and demeanor.
• Teamwork: Organizes and directs effective teams at the cross-functional level that consistently achieve stated goals.
• Results Oriented: Bottom-line oriented and can be counted on to consistently meet and exceed goals.

    Data Migration

Karnataka NETSACH GLOBAL

    Posted today


    Job Description


    Greetings from Netsach - A Cyber Security Company.


We are looking for an experienced SAP MDG Applications consultant; candidates must have worked on S/4HANA in at least one AMS or implementation project.


    Job Title: SAP Data & MDG Application

Experience: 5-10 years

No. of Openings: 3

Job Location: Bangalore/Hyderabad/Mumbai/Kolkata

Work Type: Full-time, Hybrid


    Interested candidates please share your resume at and post in netsachglobal.com.



Experience working on S/4HANA in at least one AMS or implementation project.

Along with the above, the candidate should have strong knowledge of:

• In-depth exposure to the SAP MDG applications - F, S, C, M (Financial, Supplier, Customer, and Material master data maintenance)

• Prior functional experience with data migration and the SAP FI, SAP SD, or SAP MM modules

• Participating in requirement-gathering sessions and documenting the FSD

• The standard MDG process, with the ability to guide users around MDG

• SAP MDG on S/4HANA configuration (including the data model, BRFplus, and Floorplan Manager)

• Configuring rule-based workflows

• Integrating business process requirements with the technical implementation of SAP Master Data Governance

• Developing user statistics reports in MDG

• Generating statistics reports for material master data cleansing activities


    Thank You

    Emily Jha

    +91

    Netsach - A Cyber Security Company




    Data Migration Engineer

    Mumbai, Maharashtra Oracle

    Posted 1 day ago


    Job Description

**Data Migration Engineer**
    About Oracle FSGIU - Finergy:
    The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.
    **Job Summary:**
    We are seeking a skilled Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.
**Job Responsibilities**
**Software Development:**
+ Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL.
+ Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
+ Implement efficient and maintainable code using best practices and coding standards.
**AWS & Databricks Implementation:**
+ Work with the Databricks platform for big data processing and analytics.
+ Develop and maintain ETL processes using Databricks notebooks.
+ Implement and optimize data pipelines for data transformation and integration.
+ Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines (see the sketch after this subsection).
+ Leverage PySpark for large-scale data processing and transformation tasks.
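As a rough sketch of such a pipeline (not this team's actual code), the snippet below reads a legacy extract from S3, applies a light PySpark cleanup, and writes a Delta table; the S3 path and table name are hypothetical, and it assumes a Databricks-style environment where Delta and S3 access are already configured.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("legacy-to-lakehouse").getOrCreate()

# Hypothetical S3 landing zone populated by the legacy extract job.
src = spark.read.parquet("s3://legacy-extracts/accounts/")

cleaned = (
    src.filter(F.col("account_id").isNotNull())  # drop unusable rows
       .dropDuplicates(["account_id"])           # one row per business key
       .withColumn("load_ts", F.current_timestamp())
)

# Write to a Delta table for downstream validation and reporting.
cleaned.write.format("delta").mode("overwrite").saveAsTable("migration.accounts_bronze")
```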
**Continuous Learning:**
+ Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks.
+ Share knowledge with the team and contribute to a culture of continuous improvement.
**SQL Database Management:**
+ Utilize expertise in SQL to design, optimize, and maintain relational databases.
+ Write complex SQL queries for data retrieval, manipulation, and analysis (an illustrative query follows this subsection).
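As a small illustration of the kind of analytical SQL involved, the query below keeps only the latest row per key with ROW_NUMBER(); the table and columns are invented, and sqlite3 merely stands in here for the target RDBMS.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (account_id TEXT, balance REAL, updated_at TEXT);
    INSERT INTO accounts VALUES
        ('A1', 100.0, '2024-01-01'),
        ('A1', 150.0, '2024-02-01'),
        ('A2',  75.0, '2024-01-15');
""")

# Deduplicate: keep the most recent row per account_id via a window function.
latest = conn.execute("""
    SELECT account_id, balance, updated_at
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY account_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM accounts
    )
    WHERE rn = 1
""").fetchall()
print(latest)  # e.g. [('A1', 150.0, '2024-02-01'), ('A2', 75.0, '2024-01-15')]
```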
**Qualifications & Skills:**
+ Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
+ 3 to 5+ years of experience with Databricks and big data frameworks.
+ Proficiency in AWS services and data migration.
+ Experience with Unity Catalog.
+ Familiarity with batch and real-time processing.
+ Data engineering experience with strong skills in Python, PySpark, and SQL.
+ Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
**Soft Skills:**
+ Strong problem-solving and analytical skills.
+ Excellent communication and collaboration abilities.
+ Ability to work in a fast-paced, agile environment.
    Career Level - IC1
    **About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
    We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
    Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
    We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling +1 in the United States.
    Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

    Data Migration Engineer

    Bengaluru, Karnataka Oracle

    Posted 1 day ago


    Job Description

**Data Migration Engineer**
    About Oracle FSGIU - Finergy:
    The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.
    **Job Summary:**
    We are seeking a skilled Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.
**Job Responsibilities**
**Software Development:**
+ Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL.
+ Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
+ Implement efficient and maintainable code using best practices and coding standards.
**AWS & Databricks Implementation:**
+ Work with the Databricks platform for big data processing and analytics.
+ Develop and maintain ETL processes using Databricks notebooks.
+ Implement and optimize data pipelines for data transformation and integration.
+ Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines.
+ Leverage PySpark for large-scale data processing and transformation tasks.
**Continuous Learning:**
+ Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks.
+ Share knowledge with the team and contribute to a culture of continuous improvement.
**SQL Database Management:**
+ Utilize expertise in SQL to design, optimize, and maintain relational databases.
+ Write complex SQL queries for data retrieval, manipulation, and analysis.
**Qualifications & Skills:**
+ Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
+ 3 to 5+ years of experience with Databricks and big data frameworks.
+ Proficiency in AWS services and data migration.
+ Experience with Unity Catalog.
+ Familiarity with batch and real-time processing.
+ Data engineering experience with strong skills in Python, PySpark, and SQL.
+ Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
**Soft Skills:**
+ Strong problem-solving and analytical skills.
+ Excellent communication and collaboration abilities.
+ Ability to work in a fast-paced, agile environment.
    Career Level - IC1
    **About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
    We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
    Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
    We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling +1 in the United States.
    Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

    Data Migration Engineer

    Pune, Maharashtra Oracle

    Posted 1 day ago


    Job Description

**Data Migration Engineer**
    About Oracle FSGIU - Finergy:
    The Finergy division within Oracle FSGIU is dedicated to the Banking, Financial Services, and Insurance (BFSI) sector. We offer deep industry knowledge and expertise to address the complex financial needs of our clients. With proven methodologies that accelerate deployment and personalization tools that create loyal customers, Finergy has established itself as a leading provider of end-to-end banking solutions. Our single platform for a wide range of banking services enhances operational efficiency, and our expert consulting services ensure technology aligns with our clients' business goals.
    **Job Summary:**
    We are seeking a skilled Data Migration Engineer with expertise in AWS, Databricks, Python, PySpark, and SQL to lead and execute complex data migration projects. The ideal candidate will design, develop, and implement data migration solutions to move large volumes of data from legacy systems to modern cloud-based platforms, ensuring data integrity, accuracy, and minimal downtime.
**Job Responsibilities**
**Software Development:**
+ Design, develop, test, and deploy high-performance, scalable data solutions using Python, PySpark, and SQL.
+ Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications.
+ Implement efficient and maintainable code using best practices and coding standards.
**AWS & Databricks Implementation:**
+ Work with the Databricks platform for big data processing and analytics.
+ Develop and maintain ETL processes using Databricks notebooks.
+ Implement and optimize data pipelines for data transformation and integration.
+ Utilize AWS services (e.g., S3, Glue, Redshift, Lambda) and Databricks to build and optimize data migration pipelines.
+ Leverage PySpark for large-scale data processing and transformation tasks.
**Continuous Learning:**
+ Stay updated on the latest industry trends, tools, and technologies related to Python, SQL, and Databricks.
+ Share knowledge with the team and contribute to a culture of continuous improvement.
**SQL Database Management:**
+ Utilize expertise in SQL to design, optimize, and maintain relational databases.
+ Write complex SQL queries for data retrieval, manipulation, and analysis.
**Qualifications & Skills:**
+ Education: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field. Advanced degrees are a plus.
+ 3 to 5+ years of experience with Databricks and big data frameworks.
+ Proficiency in AWS services and data migration.
+ Experience with Unity Catalog.
+ Familiarity with batch and real-time processing.
+ Data engineering experience with strong skills in Python, PySpark, and SQL.
+ Certifications: AWS Certified Solutions Architect, Databricks Certified Professional, or similar are a plus.
**Soft Skills:**
+ Strong problem-solving and analytical skills.
+ Excellent communication and collaboration abilities.
+ Ability to work in a fast-paced, agile environment.
    Career Level - IC1
    **About Us**
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and we continue to thrive after 40+ years of change by operating with integrity.
    We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
    Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
    We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing or by calling +1 in the United States.
    Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.


     
