2,326 Cloud Data jobs in India

Senior Data Engineer

Mumbai, Maharashtra Aunalytics

Job Description

Position Overview 

As a Senior Data Engineer (multiple openings), you will work independently to design strategies for enterprise database systems and set standards for operations, programming, and security. Data Engineers are critical at all stages of the data science process and work cross-functionally with both external and internal teams: from business analysts to data scientists, web app developers to platform engineers, and IT teams to high-level executives.

Essential Duties & Responsibilities: 

  • Will design and construct large relational databases.
  • Will integrate new systems with the existing warehouse structure and refine system performance and functionality.
  • Will serve as a data expert who understands how a client’s data fits into our Industry Intelligent data models.
  • Utilizing this knowledge and the industry’s newest technologies, will create high performance databases that become the very foundation of the work we do.
  • Will build and own “one source of truth” data sets to facilitate consistency and efficiency in extracting and analyzing data from disparate data sources.
  • Will ensure data integrity by developing and executing necessary processes and controls around the flow of data.
  • Will apply mathematical skills and knowledge in combination with computer science and programming skills and knowledge to innovate and improve efficiency of managing data to allow for greater speed and accuracy of producing analyses, metrics, and insights.
  • Will collaborate with internal and external teams to understand business needs/issues, troubleshoot problems, conduct root cause analysis, and develop cost effective resolutions for data anomalies.
  • Will provide input into data governance initiatives to enhance current systems, ensure development of efficient application systems, influence the development of data policy, and support overall corporate and business goals.
  • In the performance of these complex and highly technical responsibilities, will use technology to analyze data from applicable systems to review data processes, identify issues, and determine actions to resolve or escalate problems that require data, system, or process improvement.
  • Will verify the accuracy of table changes and data transformation processes as well as test changes prior to deployment as appropriate.
  • Will be called on to recommend and implement enhancements that standardize and streamline processes, assure data quality and reliability, and reduce processing time to meet client expectations.
  • Will stay up-to-date on data engineering and data science trends and developments and will follow company policy and procedures which protect sensitive data and maintain compliance with established security standards and best practices.
  • Will perform other related professional responsibilities as may be needed.
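The data-integrity duty above (developing and executing processes and controls around the flow of data) often reduces to automated reconciliation checks between staging and target tables. A minimal illustrative sketch, with sqlite3 standing in for the warehouse and hypothetical table and column names (stg_orders, orders, order_id):

```python
import sqlite3

# Illustrative only: sqlite3 stands in for the warehouse; the tables and
# key column are hypothetical.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE stg_orders (order_id INTEGER, amount REAL);
    CREATE TABLE orders     (order_id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.0), (3, NULL);
    INSERT INTO orders     VALUES (1, 10.0), (2, 20.0), (3, NULL);
""")

def integrity_checks(cur, source, target, key):
    """Run simple reconciliation controls and return a dict of results."""
    checks = {}
    # Control 1: source and target row counts agree after the load.
    src = cur.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    checks["row_counts_match"] = (src == tgt)
    # Control 2: no duplicate business keys in the target.
    dupes = cur.execute(
        f"SELECT COUNT(*) FROM (SELECT {key} FROM {target} "
        f"GROUP BY {key} HAVING COUNT(*) > 1)").fetchone()[0]
    checks["no_duplicate_keys"] = (dupes == 0)
    # Control 3: no NULL business keys in the target.
    nulls = cur.execute(
        f"SELECT COUNT(*) FROM {target} WHERE {key} IS NULL").fetchone()[0]
    checks["no_null_keys"] = (nulls == 0)
    return checks

print(integrity_checks(cur, "stg_orders", "orders", "order_id"))
```

In practice each failed check would raise an alert or halt the pipeline rather than just print.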

Required Skills: 

  • Requires Bachelor’s degree/foreign equivalent in Computer Science, Computer Engineering, Mathematics, or a related field.
  • Requires 5 years of work experience, to include:
    • 5 years of experience with relational database structures, data lake management, and SQL;
    • 5 years of experience in data architecture designing and programming ETL procedures;
    • 5 years of experience working with commercial relational database systems (such as electronic medical records or other clinical systems, client relationship management software, sales and online marketing data systems, or accounting systems);
    • 5 years of experience with one of the following: PHP, Java, or Python;
    • 5 years of experience with data management methodologies and data quality assurance practices; and
  • Ability to professionally communicate ideas (verbal and written).
  • Experience may be gained concurrently.
  • Alternatively, may hold a Master’s degree/foreign equivalent in the above areas with 3 years of the above experience.

What's in it for You? 

  • Opportunity to work with a rapidly expanding tech company in the booming field of data analytics alongside some of the brightest minds in the industry 
  • Opportunity to work with cutting-edge technology in a casual, fun environment  
  • Opportunity to be a part of a local company committed to making a difference in our community
  • Competitive salary and benefits package, including health, vision, dental, and life insurance and a 401(k) plan.
This advertiser has chosen not to accept applicants from your region.

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, filled the role, or has removed the listing.

However, we have similar jobs available for you below.

Cloud Data Engineer

CAI

Posted 6 days ago

Job Description

Cloud Data Engineer
**Req number:**
R5934
**Employment type:**
Full time
**Worksite flexibility:**
Remote
**Who we are**
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right, whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
**Job Summary**
We are seeking a motivated Cloud Data Engineer who has experience building data products using Databricks and related technologies. This is a full-time, remote position.
**Job Description**
**What You'll Do**
+ Analyze and understand existing data warehouse implementations to support migration and consolidation efforts.
+ Reverse-engineer legacy stored procedures (PL/SQL, SQL) and translate business logic into scalable Spark SQL code within Databricks notebooks.
+ Design and develop data lake solutions on AWS using S3 and Delta Lake architecture, leveraging Databricks for processing and transformation.
+ Build and maintain robust data pipelines using ETL tools with ingestion into S3 and processing in Databricks.
+ Collaborate with data architects to implement ingestion and transformation frameworks aligned with enterprise standards.
+ Evaluate and optimize data models (Star, Snowflake, Flattened) for performance and scalability in the new platform.
+ Document ETL processes, data flows, and transformation logic to ensure transparency and maintainability.
+ Perform foundational data administration tasks including job scheduling, error troubleshooting, performance tuning, and backup coordination.
+ Work closely with cross-functional teams to ensure smooth transition and integration of data sources into the unified platform.
+ Participate in Agile ceremonies and contribute to sprint planning, retrospectives, and backlog grooming.
+ Triage, debug, and fix technical issues related to data lakes.
+ Maintain and manage code repositories such as Git.
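The reverse-engineering duty above (translating row-by-row stored-procedure logic into scalable set-based code) follows a common refactoring pattern. A minimal sketch, not CAI's actual code: sqlite3 stands in for the source database, the tables are hypothetical, and the final set-based statement is the shape that ports naturally to Spark SQL in a Databricks notebook:

```python
import sqlite3

# Hypothetical tables standing in for a legacy warehouse.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, total REAL DEFAULT 0);
    CREATE TABLE payments  (customer_id INTEGER, amount REAL);
    INSERT INTO customers (id) VALUES (1), (2);
    INSERT INTO payments VALUES (1, 5.0), (1, 7.5), (2, 3.0);
""")

# Legacy style (what a PL/SQL cursor loop does): fetch each payment row,
# then issue one UPDATE per row. Slow, and it does not parallelize.

# Set-based equivalent: one statement the engine can optimize and, in
# Spark SQL, distribute across the cluster.
cur.execute("""
    UPDATE customers
    SET total = (SELECT COALESCE(SUM(amount), 0)
                 FROM payments WHERE customer_id = customers.id)
""")
totals = dict(cur.execute("SELECT id, total FROM customers ORDER BY id"))
print(totals)  # {1: 12.5, 2: 3.0}
```

The same correlated-aggregate shape is usually rewritten as a GROUP BY join in Spark SQL, since Spark favors joins over correlated subqueries.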
**What You'll Need**
+ 5+ years of experience working with **Databricks**, including Spark SQL and Delta Lake implementations.
+ 3+ years of experience in designing and implementing data lake architectures on Databricks.
+ Strong SQL and PL/SQL skills with the ability to interpret and refactor legacy stored procedures.
+ Hands-on experience with data modeling and warehouse design principles.
+ Proficiency in at least one programming language (Python, Scala, Java).
+ Bachelor's degree in Computer Science, Information Technology, Data Engineering, or related field.
+ Experience working in Agile environments and contributing to iterative development cycles.
+ Databricks cloud certification is a big plus.
+ Exposure to enterprise data governance and metadata management practices.
**Physical Demands**
+ This role involves mostly sedentary work, with occasional movement around the office to attend meetings, etc.
+ Ability to perform repetitive tasks on a computer, using a mouse, keyboard, and monitor.
**Reasonable accommodation statement**
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824-8111.

Cloud Data Engineer

Chennai, Tamil Nadu Giggso

Posted 7 days ago

Job Description

Key Responsibilities:

• Design, develop, and maintain cloud-based solutions on Azure or AWS.

• Implement and manage real-time data streaming and messaging systems using Kafka.

• Develop scalable applications and services using Java and Python.

• Deploy, manage, and monitor containerized applications using Kubernetes.

• Build and optimize big data processing pipelines using Databricks.

• Manage and maintain databases, including SQL Server and Snowflake, and write complex SQL scripts.

• Work with Unix/Linux commands to manage and monitor system operations.

• Collaborate with cross-functional teams to ensure seamless integration of cloud-based solutions.
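The Kafka responsibility above typically reduces to a per-message handler. A minimal sketch of such a handler with a hypothetical event schema (event_id, ts, payload) and no broker dependency; a real consumer (e.g. confluent-kafka or kafka-python) would call handle() for each polled record:

```python
import json
from datetime import datetime, timezone

def handle(raw: bytes) -> dict:
    """Parse and validate one message, as a Kafka consumer callback would."""
    event = json.loads(raw)
    # Reject malformed events early so they can be routed to a dead-letter topic.
    for field in ("event_id", "ts", "payload"):
        if field not in event:
            raise ValueError(f"missing field: {field}")
    # Normalize the timestamp to an aware UTC datetime for downstream use.
    event["ts"] = datetime.fromisoformat(event["ts"]).astimezone(timezone.utc)
    return event

msg = b'{"event_id": "e1", "ts": "2024-01-01T00:00:00+00:00", "payload": {"v": 1}}'
print(handle(msg)["event_id"])  # e1
```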


Key Skills:

• Expertise in Azure or AWS cloud platforms.

• Proficiency in Kafka, Java, Python, and Kubernetes.

• Hands-on experience with Databricks for big data processing.

• Strong database management skills with SQL Server, Snowflake, and advanced SQL scripting.

• Solid understanding of Unix/Linux commands.


General Requirements for Both Off-Shore Roles:

• Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience).

• 5+ years of experience in cloud and data engineering roles.

• Strong problem-solving and analytical skills.

• Excellent communication and collaboration abilities.

• Proven ability to work in a fast-paced, agile environment.



Cloud Data Analytics

Chennai, Tamil Nadu Anicalls (Pty) Ltd

Posted today

Job Description

• Hands-on experience creating automated data pipelines using modern technology stacks for batch ETL, data streaming, or change data capture, and for data processing to load advanced analytics data repositories
• Experience in designing data lake storage structures, data acquisition, transformation, and distribution processing
• Proficient in designing and implementing data integration processes in a large distributed environment using cloud services (e.g., Azure Data Factory, Data Catalog, Databricks, Stream Analytics)
• Advanced experience in SQL programming
• Proficient in programming languages (e.g., Python, Java) and REST APIs (e.g., Azure API Management, MuleSoft) to process data


Cloud Data Specialist

Mumbai, Maharashtra Atlas Corp

Posted today

Job Description

Seaspan teams are goal-driven and share a high-performance culture, focusing on building services offerings to become a leading asset manager. Seaspan provides many of the world's major shipping lines with alternatives to vessel ownership by offering long-term leases on large, modern containerships and pure car and truck carriers (PCTCs), combined with industry-leading ship management services. Seaspan's fleet has evolved over time to meet the varying needs of our customer base. We own vessels in a wide range of sizes, from 2,500 TEU to 24,000 TEU vessels. As a wholly owned subsidiary of Atlas Corp, Seaspan delivers on the company's core strategy as a leading asset management and core infrastructure company.

Position description:

We are seeking a highly skilled and versatile Cloud Data Specialist to join our Data Operations team. Reporting to the Team Lead, Data Operations, the Cloud Data Specialist plays a key role in the development, administration, and support of our Azure-based data platform, with a particular focus on Databricks, data pipeline orchestration using tools like Azure Data Factory (ADF), and environment management using Unity Catalog. A strong foundation in data engineering, cloud data administration, and data governance is essential. Development experience using SQL and Python is required. Knowledge or experience with APIM is nice to have.

Primary responsibilities:

Data Engineering and Platform Management:

  • Design, develop, and optimize scalable data pipelines using Azure Databricks and ADF.
  • Administer Databricks environments, including user access, clusters, and Unity Catalog for data lineage, governance, and security.
  • Support the deployment, scheduling, and monitoring of data workflows and jobs in Databricks and ADF.
  • Implement best practices for CI/CD, version control, and operational monitoring for pipeline deployments.
  • Implement and manage Delta Lake to ensure reliable, performant, and ACID-compliant data operations.

Data Modeling and Integration:

  • Collaborate with business and data engineering teams to design data models that support analytics and reporting use cases.
  • Support integration of data from multiple sources into the enterprise data lake and data warehouse.
  • Configure API calls to utilize our Azure APIM platform.
  • Maintain and enhance data quality, structure, and performance within the Lakehouse and warehouse architecture.

Collaboration and Stakeholder Engagement:

  • Work cross-functionally with business units, data scientists, BI analysts, and other stakeholders to understand data requirements.
  • Translate technical solutions into business-friendly language and deliver clear documentation and training when required.

Required Technical Expertise:

Apache Spark (on Databricks):

  • Proficient in PySpark and Spark SQL
  • Spark optimization techniques (caching, partitioning, broadcast joins)
  • Writing and scheduling notebooks/jobs in Databricks
  • Understanding of Delta Lake architecture and features
  • Working with Databricks Workflows (pipelines and job orchestration)

SQL/Python Programming:

  • Handling JSON, XML, and other semi-structured formats
  • Experience with API integration using requests, http, etc.
  • Error handling and logging

API Ingestion:

  • Designing and implementing ingestion pipelines for RESTful APIs
  • Transforming and loading JSON responses into Spark tables

Cloud & Data Platform Skills:

  • Databricks on Azure
  • Cluster configuration and management
  • Unity Catalog features (optional but good to have)

Azure Data Factory:

  • Creating and managing pipelines for orchestration
  • Linked services and datasets for ADLS, Databricks, SQL Server
  • Parameterized and dynamic ADF pipelines
  • Triggering Databricks notebooks from ADF

Data Engineering Foundations:

  • Data modeling and warehousing concepts
  • ETL/ELT design patterns
  • Data validation and quality checks
  • Working with structured and semi-structured data (JSON, Parquet, Avro)

DevOps & CI/CD:

  • Git/GitHub for version control
  • CI/CD using Azure DevOps or GitHub Actions for Databricks jobs
  • Infrastructure-as-code (Terraform for Databricks or ADF)

Additional Requirements:

  • Bachelor's degree in Computer Science, Information Systems, or a related field.
  • 4+ years of experience in a cloud data engineering, data platform, or analytics engineering role.
  • Familiarity with data governance, security principles, and data quality best practices.
  • Excellent analytical thinking and problem-solving skills.
  • Strong communication skills and ability to work collaboratively with technical and non-technical stakeholders.
  • Microsoft certifications in Azure Data Engineer, Power Platform, or a related field are desired.
  • Experience with Azure APIM is nice to have.
  • Knowledge of enterprise data architecture and data warehouse principles (e.g., dimensional modeling) is an asset.

Job Demands and/or Physical Requirements:

  • As Seaspan is a global company, occasional work outside of regular office hours may be required.

Compensation and Benefits package:

Seaspan’s total compensation is based on our pay-for-performance philosophy that rewards team members who deliver on and demonstrate our high-performance culture. The exact base salary offered will be commensurate with the incumbent’s experience, job-related skills and knowledge, and internal pay equity.

Seaspan Corporation is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, race, color, religion, gender, sexual orientation, gender identity, national origin, disability, or protected Veteran status. We thank all applicants in advance. If your application is shortlisted to be included in the interview process, one of our team will be in contact with you.

Please note that while this position is open in both Vancouver and Mumbai, it represents a single headcount. The role will be filled in one of the two locations based on candidate availability and suitability, determined by the hiring team.
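The API-ingestion skill listed above (transforming JSON responses into Spark tables) boils down to flattening nested payloads into tabular rows. A minimal sketch with a hypothetical response shape and column names; in Databricks, the resulting rows would feed spark.createDataFrame(...) rather than stay in a plain list:

```python
import json

# Hypothetical API response; a real pipeline would fetch this over HTTP.
response = json.loads("""
{"data": [
  {"id": 1, "name": "MV Alpha", "specs": {"teu": 2500, "built": 2010}},
  {"id": 2, "name": "MV Beta",  "specs": {"teu": 24000, "built": 2021}}
]}
""")

def flatten(record: dict) -> dict:
    """Lift nested 'specs' keys to the top level with a prefix."""
    flat = {k: v for k, v in record.items() if k != "specs"}
    for k, v in record.get("specs", {}).items():
        flat[f"specs_{k}"] = v
    return flat

rows = [flatten(r) for r in response["data"]]
print(rows[0])  # {'id': 1, 'name': 'MV Alpha', 'specs_teu': 2500, 'specs_built': 2010}
```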


Cloud Data Engineer

Pune, Maharashtra Epergne Solutions

Posted today

Job Description

Job Title: Cloud Data Engineer

Experience: 4 to 5 Years

Location: Noida / Gurgaon / Hyderabad / Bangalore / Pune

Notice Period: Immediate (15 days)


Responsibilities:
• Design, develop, and maintain automated data pipelines and ETL processes.
• Create and optimize data tables and views in Snowflake.
• Work with datasets across Azure, AWS, and various structured/unstructured formats.
• Ensure data security, privacy, and compliance with industry and organizational standards (e.g., BHP standards).
• Support and mentor junior team members by providing guidance on data modelling, data management, and data engineering best practices.

Required Skills & Experience:
• Strong hands-on experience in data pipeline development, ETL scripting using Python, and handling data formats like JSON, CSV, etc.
• Proficiency in AWS services such as:
  • AWS Glue
  • AWS Batch
  • AWS Step Functions
• Experience with Azure services, including:
  • Azure Data Factory
  • Azure Logic Apps
  • Azure Functions
  • Azure Blob Storage
• Solid understanding of:
  • Data management principles
  • Data structures & storage solutions
  • Data modeling techniques
• Strong programming skills in:
  • Python (with OOP concepts)
  • PowerShell
  • Bash scripting
• Advanced SQL skills, including writing and optimizing complex queries.
• Working experience with Terraform and CI/CD pipelines for infrastructure and deployment automation.
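A minimal sketch of the ETL-scripting requirement above (Python handling CSV and JSON): read CSV, apply a typed transform, emit JSON records. The file contents are inlined via io.StringIO and the column names are hypothetical:

```python
import csv
import io
import json

# Hypothetical source file contents, inlined for a self-contained example.
raw_csv = "id,amount,currency\n1,10.50,USD\n2,3.20,EUR\n"

def etl(csv_text: str) -> str:
    """Extract CSV rows, cast string fields to typed values, emit JSON."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        records.append({
            "id": int(row["id"]),           # cast key to integer
            "amount": float(row["amount"]), # cast amount to float
            "currency": row["currency"],
        })
    return json.dumps(records)

print(etl(raw_csv))
```

In a real pipeline the same transform would run inside an AWS Glue job or Azure Function, reading from and writing to object storage instead of in-memory strings.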


