3,839 ETL Processes jobs in India

Data Engineering

Chennai, Tamil Nadu EXL

Posted today


Job Description

Responsibilities:

  • Work with stakeholders to understand data requirements, and design, develop, and maintain complex ETL processes.
  • Create data integration and data diagram documentation.
  • Lead data validation, UAT, and regression testing for new data asset creation.
  • Create and maintain data models, including schema design and optimization.
  • Create and manage data pipelines that automate the flow of data, ensuring data quality and consistency (see the sketch after this list).
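
As one illustration of the pipeline work these responsibilities describe, here is a minimal PySpark sketch of an extract-transform-load step with a simple consistency check. The table names and the 10% threshold are hypothetical, not taken from the posting.

```python
# Minimal PySpark ETL sketch; table names and threshold are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").enableHiveSupport().getOrCreate()

# Extract: read a raw source table.
raw = spark.table("raw_db.customer_events")

# Transform: standardize types and drop rows that fail basic checks.
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("customer_id").isNotNull())
)

# Consistency check: fail fast if too many rows were dropped.
if clean.count() < 0.9 * raw.count():
    raise ValueError("More than 10% of rows failed validation; aborting load")

# Load: write the curated table.
clean.write.mode("overwrite").saveAsTable("curated_db.customer_events")
```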

Qualifications and Skills:

  • Strong knowledge of Python and PySpark, with the ability to write PySpark scripts for developing data workflows.
  • Strong knowledge of SQL, Hadoop, Hive, Azure, Databricks, and Greenplum, with the ability to write SQL that queries metadata and tables from data management systems such as Oracle, Hive, Databricks, and Greenplum (see the metadata sketch after this list).
  • Familiarity with big data technologies such as Hadoop, Spark, and distributed computing frameworks.
  • Ability to use Hue to run Hive SQL queries and to schedule Apache Oozie jobs that automate data workflows.
  • Experience communicating with stakeholders and collaborating effectively with business teams on data testing.
  • Strong problem-solving and troubleshooting skills.
  • Ability to establish comprehensive data quality test cases and procedures, and to implement automated data validation processes.
  • Degree in Data Science, Statistics, Computer Science, or a related field, or an equivalent combination of education and experience.
  • 3–7 years of experience as a data engineer.
  • Proficiency in programming languages commonly used in data engineering, such as Python, PySpark, and SQL.
  • Experience with the Azure cloud platform, such as developing ETL processes with Azure Data Factory and big data processing and analytics with Azure Databricks.
  • Strong communication, problem-solving, and analytical skills, with good time management and multitasking, and attention to detail and accuracy.
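
The metadata sketch referenced above: a hedged example of querying table metadata with Hive-style SQL through Spark, assuming a Hive metastore. Database and table names are hypothetical.

```python
# Querying metadata and tables with Spark SQL against a Hive metastore.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_sketch").enableHiveSupport().getOrCreate()

# List the tables registered in a (hypothetical) database.
spark.sql("SHOW TABLES IN curated_db").show()

# Inspect a table's columns, types, and storage details.
spark.sql("DESCRIBE FORMATTED curated_db.customer_events").show(truncate=False)

# An ordinary analytical query against the same table.
spark.sql("""
    SELECT customer_id, COUNT(*) AS events
    FROM curated_db.customer_events
    GROUP BY customer_id
    ORDER BY events DESC
    LIMIT 10
""").show()
```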

Data Engineering, Associate

Bangalore, Karnataka BlackRock

Posted 1 day ago


Job Description

**About this role**
At BlackRock, technology has always been at the core of what we do - and today, our technologists continue to shape the future of the industry with their innovative work. We are not only curious but also collaborative and eager to embrace experimentation as a means to solve complex challenges. Here you'll find an environment that promotes working across teams, businesses, regions and specialties - and a firm committed to supporting your growth as a technologist through curated learning opportunities, tech-specific career paths, and access to experts and leaders around the world.
We are seeking a highly skilled and motivated senior-level Data Engineer to join the Private Market Data Engineering team within Aladdin Data at BlackRock, driving our Private Market Data Engineering vision of making private markets more accessible and transparent for clients. In this role, you will work cross-functionally with Product, Data Research, Engineering, and Program Management.
Engineers looking to work in the areas of orchestration, data modeling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates. The candidate will have extensive experience developing data pipelines using Python, Java, the Apache Airflow orchestration platform, DBT (Data Build Tool), Great Expectations for data validation, Apache Spark, MongoDB, Elasticsearch, Snowflake, and PostgreSQL. In this role, you will be responsible for designing, developing, and maintaining robust and scalable data pipelines, and you will collaborate with stakeholders to ensure the pipelines are efficient, reliable, and meet the needs of the business. A minimal orchestration sketch of such a pipeline follows.
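
As a hedged illustration of the orchestration work described above, here is a minimal Airflow DAG sketch wiring an extract step to a transform (e.g. a dbt run) and a validation step. The DAG id, schedule, and task bodies are hypothetical placeholders; the sketch assumes a recent Airflow 2.x API.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and task bodies are
# hypothetical placeholders, not BlackRock's actual pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # pull source data (placeholder)


def transform():
    ...  # e.g. trigger a dbt run (placeholder)


def validate():
    ...  # e.g. run Great Expectations checks (placeholder)


with DAG(
    dag_id="pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    extract_task >> transform_task >> validate_task
```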
**Key Responsibilities**
+ Design, develop, and maintain data pipelines using Aladdin Data Enterprise Data Platform framework
+ Develop ETL/ELT data pipelines using Python, SQL and deploy them as containerized apps on a Kubernetes cluster
+ Develop API for data distribution on top of the standard data model of the Enterprise Data Platform
+ Design and develop optimized back-end services in Java / Python for APIs to handle faster data retrieval and optimized processing
+ Develop reusable back-end services for data pipeline processing in Python / Java
+ Develop data transformation using DBT (Data Build Tool) with SQL or Python
+ Ensure data quality and integrity through automated testing and validation using tools like Great Expectations (see the sketch after this list)
+ Implement all observability requirements in the data pipeline
+ Optimize data workflows for performance and scalability
+ Monitor and troubleshoot data pipeline issues, ensuring timely resolution
+ Document data engineering processes and best practices whenever required
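
The validation sketch referenced in the responsibilities: a minimal example of asserting data quality with Great Expectations, assuming the classic (pre-1.0) API and hypothetical column names.

```python
# Minimal data-quality check; assumes the classic (pre-1.0) Great Expectations API.
import pandas as pd
import great_expectations as ge

# Wrap a (hypothetical) pandas DataFrame so expectation methods are available.
df = ge.from_pandas(pd.DataFrame({
    "fund_id": ["F1", "F2", "F3"],
    "nav": [101.5, 98.2, 100.0],
}))

# Each expectation returns a result whose .success flag can be asserted on.
nulls_ok = df.expect_column_values_to_not_be_null("fund_id")
range_ok = df.expect_column_values_to_be_between("nav", min_value=0, max_value=10_000)

assert nulls_ok.success and range_ok.success, "data quality check failed"
```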
**Required Skills and Qualifications**
+ Must have 5 to 8 years of experience in data engineering, with a focus on building data pipelines and Data Services APIs
+ Strong server-side programming skills in Python and/or Java.
+ Experience working with backend microservices and APIs using Java and/or Python
+ Experience with Apache Airflow or any other orchestration framework for data orchestration
+ Proficiency in DBT for data transformation and modeling
+ Experience with data quality validation tools like Great Expectations or any other similar tools
+ Strong SQL skills and experience with relational databases such as SQL Server and PostgreSQL
+ Experience with cloud-based data warehouse platforms such as Snowflake
+ Experience working with NoSQL databases such as Elasticsearch and MongoDB
+ Experience working with container orchestration platforms such as Kubernetes in AWS and/or Azure cloud environments
+ Experience with cloud platforms such as AWS and/or Azure
+ Ability to work collaboratively in a team environment
+ Detail-oriented, with a passion for learning new technologies and strong analytical and problem-solving skills
+ Experience with financial services applications is a plus
+ Effective communication skills, both written and verbal
+ Bachelor's or Master's degree in Computer Science, Engineering, or a related field
**Our benefits**
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.
**Our hybrid work model**
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person - aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
**About BlackRock**
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment - the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock on Twitter and LinkedIn.
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation, and other attributes protected at law.

Data Engineering Consultant

Hyderabad, Andhra Pradesh UnitedHealth Group

Posted 1 day ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ The Data Engineering Consultant role works in support of the Information Security organization's Data Enablement team: defining, designing, constructing, processing, and supporting data pipelines, data products, and data assets for a wide variety of client use cases within the Security organization
+ In this role, you will work with a variety of programming languages on a tech stack that leverages Snowflake, various cloud CSP data storage offerings, Airflow orchestration and scheduling, and various internal CI/CD and DevOps packages. You will support our ongoing data strategy and solutions with a focus on medallion data architecture (see the sketch after this list)
+ You will work with other engineers, analysts, and clients to understand business problems, design proposed solutions, and support the products that you help to build
+ A successful candidate will have experience working in a data products environment, deep skill with Snowflake and Airflow, and skill in building and supporting complex data pipelines and ETL/ELT with Python, Spark, PySpark, Scala, Java, and SQL, along with demonstrated experience supporting industry-standard data governance and data quality processes. Demonstrated ability to conduct, construct, and support data models is also essential, as is the ability to performance-tune and optimize data pipelines
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
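
The medallion sketch referenced above: a hedged PySpark outline of bronze (raw), silver (cleaned), and gold (aggregated) layers. Paths, table names, and columns are hypothetical placeholders.

```python
# Minimal medallion-architecture sketch in PySpark; all names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion_sketch").getOrCreate()

# Bronze: land the raw data as-is, with ingestion metadata.
bronze = (
    spark.read.json("/landing/security_events/")
         .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.mode("append").saveAsTable("bronze.security_events")

# Silver: deduplicated, typed, validated records.
silver = (
    spark.table("bronze.security_events")
         .dropDuplicates(["event_id"])
         .withColumn("event_ts", F.to_timestamp("event_ts"))
         .filter(F.col("event_id").isNotNull())
)
silver.write.mode("overwrite").saveAsTable("silver.security_events")

# Gold: an aggregated, consumption-ready data product.
gold = spark.table("silver.security_events").groupBy("event_type").agg(
    F.count("*").alias("event_count")
)
gold.write.mode("overwrite").saveAsTable("gold.security_event_counts")
```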
**Required Qualifications:**
+ Undergraduate degree in Computer Science, Data Science, Data Analytics, Mathematics, Information Systems, or a related field with an emphasis on coding, analytics, or applied mathematics
+ 4+ years of hands-on data engineering experience with Snowflake and Airflow
+ 5+ years of hands-on experience coding complex data pipelines with Python, PySpark, Scala, SQL, and related languages
+ 3+ years of experience integrating pipelines and tech stack components with database products such as Postgres, MySQL, MS SQL Server, Oracle, MongoDB, and Cassandra
+ Demonstrated experience working with and operating on one of the following cloud storage technologies - AWS, GCP, Azure
+ Demonstrated experience operating in an environment with modern data governance processes and protocols
+ Demonstrated experience building data quality monitoring solutions
+ Demonstrated experience working with modern CI/CD and DevOps principles, including automated deployment
+ Demonstrated problem-solving skills
+ Solid communication skills, both verbal and written
**Preferred Qualifications:**
+ Advanced degree in computer science, math, analytics, data science or other similarly technical field
+ Experience with Security data and information security
+ Experience with Health care data
+ Experience implementing data privacy controls for a variety of data, from HIPAA to PII to PCI
+ Experience with streaming technologies such as Kafka
+ Experience building and ingesting data from APIs
+ Experience with Azure Data Factory
+ Exposure to reporting technologies such as Tableau, Power BI, and MicroStrategy
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Director Data Engineering

Chennai, Tamil Nadu UnitedHealth Group

Posted 1 day ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
We are seeking a visionary and technically adept Senior Data Engineering Leader to architect, scale, and optimize our data infrastructure. This role will drive the design and implementation of robust, cost-efficient, and observable data pipelines that power analytics, AI/ML, and operational systems across the enterprise. The ideal candidate will be a strategic thinker who can influence senior leadership and lead high-performing engineering teams.
**Primary Responsibilities:**
+ Data Pipeline Architecture: Design and implement scalable, high-performance data pipelines that support batch and real-time processing across diverse data domains
+ Total Cost of Ownership (TCO): Architect solutions with a focus on long-term sustainability, balancing performance, scalability, and cost efficiency
+ Operational Observability: Establish proactive monitoring, alerting, and logging frameworks to ensure system health, data quality, and SLA adherence
+ CI/CD & Automation: Champion automated testing, deployment, and release processes using modern DevOps practices. Ensure robust version control and rollback strategies
+ Blue-Green Deployments: Implement blue-green or canary deployment strategies to minimize downtime and risk during releases (see the sketch after this list)
+ Strategic Communication: Translate complex architectural decisions into business value. Confidently present and defend architectural choices to senior technology and business leaders
+ Leadership & Mentorship: Lead and mentor a team of data engineers, fostering a culture of innovation, accountability, and continuous improvement
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
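
The blue-green sketch referenced above: one common warehouse flavor of the pattern, where the release loads the inactive table copy, validates it, and then atomically repoints a view. Table and view names, the validation rule, and the DB-API style connection are hypothetical assumptions, not the team's actual tooling.

```python
# Blue-green release sketch for a warehouse table; names are hypothetical and
# `conn` is assumed to be any DB-API style connection (e.g. to a warehouse).
def blue_green_release(conn, view="analytics.orders",
                       blue="analytics.orders_blue",
                       green="analytics.orders_green",
                       active="blue"):
    target = green if active == "blue" else blue
    cur = conn.cursor()

    # 1. Rebuild the inactive copy from the pipeline's staging output.
    cur.execute(f"CREATE OR REPLACE TABLE {target} AS SELECT * FROM staging.orders")

    # 2. Validate the new copy before exposing it to consumers.
    cur.execute(f"SELECT COUNT(*) FROM {target}")
    (row_count,) = cur.fetchone()
    if row_count == 0:
        raise RuntimeError("new copy is empty; keeping current version live")

    # 3. Atomically switch consumers; rollback is the same statement pointed
    #    back at the previously active table.
    cur.execute(f"CREATE OR REPLACE VIEW {view} AS SELECT * FROM {target}")
```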
**Required Qualifications:**
+ Undergraduate degree or equivalent experience
+ Proven experience leading enterprise-scale data engineering initiatives
+ Hands-on experience with CI/CD pipelines, infrastructure-as-code (e.g., Terraform), and containerization (e.g., Docker, Kubernetes)
+ Experience with data governance, privacy, and compliance frameworks
+ Experience with AI/ML data pipelines and MLOps practices
+ Experience in healthcare, finance, or any other regulated industries
+ Deep expertise in data architecture, distributed systems, and cloud-native technologies (e.g., AWS, GCP, Azure)
+ Solid command of data modeling, ETL/ELT, orchestration tools (e.g., Airflow, dbt), and streaming platforms (e.g., Kafka)
+ Demonstrated success in implementing observability frameworks (e.g., Prometheus, Grafana, Datadog)
+ Proven excellent communication and stakeholder management skills
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._

Data Engineering Lead

Gurgaon, Haryana UnitedHealth Group

Posted 1 day ago


Job Description

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start **Caring. Connecting. Growing together.**
**Primary Responsibilities:**
+ Design and develop applications and services running on Azure, with a solid emphasis on Azure Databricks, ensuring optimal performance, scalability, and security
+ Build and maintain data pipelines using Azure Databricks and other Azure data integration tools
+ Write, read, and debug Spark, Scala, and Python code to process and analyze large datasets
+ Write extensive queries in SQL and Snowflake (see the sketch after this list)
+ Implement security and access control measures and regularly audit Azure platform and infrastructure to ensure compliance
+ Create, understand, and validate the design and estimated effort for a given module/task, and be able to justify it
+ Implement and adhere to best engineering practices like design, unit testing, functional testing automation, continuous integration, and delivery
+ Maintain code quality by writing clean, maintainable, and testable code
+ Monitor performance and optimize resources to ensure cost-effectiveness and high availability
+ Define and document best practices and strategies regarding application deployment and infrastructure maintenance
+ Provide technical support and consultation for infrastructure questions
+ Help develop, manage, and monitor continuous integration and delivery systems
+ Take accountability and ownership of features and teamwork
+ Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
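
The sketch referenced above: a hedged example of querying Snowflake from an Azure Databricks notebook through the Snowflake Spark connector. The connection options and query are hypothetical placeholders, and the sketch assumes the connector is available in the runtime (the `spark` session is provided by the notebook).

```python
# Reading a Snowflake query result into a Spark DataFrame on Databricks;
# every option value below is a hypothetical placeholder.
options = {
    "sfUrl": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

df = (
    spark.read.format("snowflake")   # connector short name on Databricks
         .options(**options)
         .option("query", "SELECT member_id, SUM(claim_total) AS total "
                          "FROM claims GROUP BY member_id")
         .load()
)
df.show()
```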
**Required Qualifications:**
+ B. Tech or MCA (16+ years of formal education)
+ Overall 7+ years of experience
+ 5+ years of experience in writing advanced level SQL
+ 3+ years of experience in Azure (ADF), Databricks and DevOps
+ 3+ years of experience in architecting, designing, developing, and implementing cloud solutions on Azure
+ 2+ years of experience in writing, reading, and debugging Spark, Scala, and Python code
+ Experience in interacting with international customers to gather requirements and convert them into solutions using relevant skills
+ Proficiency in programming languages and scripting tools
+ Understanding of cloud data storage and database technologies such as SQL and NoSQL
+ Solid troubleshooting skills, with the ability to troubleshoot issues across different technologies and environments
+ Familiarity with DevOps practices and tools, such as continuous integration and continuous deployment (CI/CD) and Terraform
+ Proven ability to collaborate with multidisciplinary teams of business analysts, developers, data scientists, and subject-matter experts
+ Proven proactive approach to spotting problems, areas for improvement, and performance bottlenecks
+ Proven excellent communication, writing, and presentation skills
**Preferred Qualifications:**
+ Experience and skills with Snowflake
+ Knowledge of AI/ML or LLM (GenAI)
+ Knowledge of US Healthcare domain and experience with healthcare data
_At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission._
 
