Data Scientists

Gurugram, Haryana TaskUs

Posted today

Job Description

Description

Key Responsibilities

AI/ML Development & Research

• Design, develop, and deploy advanced machine learning and deep learning models for complex business problems

• Implement and optimize Large Language Models (LLMs) and Generative AI solutions

• Build agentic AI systems with autonomous decision-making capabilities

• Conduct research on emerging AI technologies and their practical applications

• Perform model evaluation, validation, and continuous improvement

Cloud Infrastructure & Full-Stack Development

• Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP

• Develop full-stack applications integrating AI models with modern web technologies

• Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)

• Implement CI/CD pipelines for ML model deployment and monitoring

• Design and optimize cloud infrastructure for high-performance computing workloads

Data Engineering & Database Management

• Design and implement data pipelines for large-scale data processing

• Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)

• Optimize database performance for ML workloads and real-time applications

• Implement data governance and quality assurance frameworks

• Handle streaming data processing and real-time analytics

Leadership & Collaboration

• Mentor junior data scientists and guide technical decision-making

• Collaborate with cross-functional teams including product, engineering, and business stakeholders

• Present findings and recommendations to technical and non-technical audiences

• Lead proof-of-concept projects and innovation initiatives

Required Qualifications

Education & Experience

• Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or related field

• 5+ years of hands-on experience in data science and machine learning

• 3+ years of experience with deep learning frameworks and neural networks

• 2+ years of experience with cloud platforms and full-stack development

Technical Skills - Core AI/ML

• Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms

• Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers

• Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering

• Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation

• Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents

Technical Skills - Development & Infrastructure

• Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript

• Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI

• Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)

• Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes

• MLOps: MLflow, Kubeflow, Model versioning, A/B testing frameworks

• Big Data: Spark, Hadoop, Kafka, streaming data processing

Preferred Qualifications

• Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma)

• Knowledge of LangChain, LlamaIndex, or similar LLM frameworks

• Experience with model compression and edge deployment

• Familiarity with distributed computing and parallel processing

• Experience with computer vision and NLP applications

• Knowledge of federated learning and privacy-preserving ML

• Experience with quantum machine learning

• Expertise in MLOps and production ML system design

Key Competencies

Technical Excellence

• Strong mathematical foundation in statistics, linear algebra, and optimization

• Ability to implement algorithms from research papers

• Experience with model interpretability and explainable AI

• Knowledge of ethical AI and bias detection/mitigation

Problem-Solving & Innovation

• Strong analytical and critical thinking skills

• Ability to translate business requirements into technical solutions

• Creative approach to solving complex, ambiguous problems

• Experience with rapid prototyping and experimentation

Communication & Leadership

• Excellent written and verbal communication skills

• Ability to explain complex technical concepts to diverse audiences

• Strong project management and organizational skills

• Experience mentoring and leading technical teams

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.
DEI: At TaskUs, we believe that innovation and higher performance are driven by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL.

Lead Consultant-Data Scientists with AI and Generative Model experience!

Gurugram, Haryana Genpact

Posted today

Job Description

Ready to shape the future of work?

At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. 

If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. 

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Lead Consultant-Data Scientists with AI and Generative Model experience!

We are currently looking for a talented and experienced Data Scientist with a strong background in AI, specifically in building generative AI models using large language models, to join our team. This individual will play a crucial role in developing and implementing data-driven solutions, AI-powered applications, and generative models that will help us stay ahead of the competition and achieve our ambitious goals.

Responsibilities

• Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making.

• Design, develop, and implement advanced statistical models, machine learning algorithms, AI applications, and generative models using large language models such as GPT-3 and BERT, as well as frameworks like RAG and Knowledge Graphs.

• Communicate findings and insights to both technical and non-technical stakeholders through clear and concise presentations, reports, and visualizations.

• Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.

• Stay up-to-date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones.

• Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth.

Qualifications we seek in you:

Minimum Qualifications

• Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.

• Experience in data science, machine learning, AI applications, and generative AI modelling.

• Strong expertise in Python, R, or other programming languages commonly used in data science and AI, with experience in implementing large language models and generative AI frameworks.

• Proficient in statistical modelling, machine learning techniques, AI algorithms, and generative model development using large language models such as GPT-3 and BERT, and frameworks like RAG and Knowledge Graphs.

• Experience working with large datasets and using various data storage and processing technologies such as SQL, NoSQL, Hadoop, and Spark.

• Strong analytical, problem-solving, and critical thinking skills, with the ability to draw insights from complex data and develop actionable recommendations.

• Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and explain complex concepts to non-technical stakeholders.

Preferred Qualifications/Skills

• Experience in deploying AI models, generative models, and applications in a production environment using cloud platforms such as AWS, Azure, or GCP.

• Knowledge of industry-specific data sources, challenges, and opportunities relevant to the insurance industry.

• Demonstrated experience in leading data science projects from inception to completion, including project management and team collaboration skills.

Why join Genpact?

  • Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation 

  • Make an impact – Drive change for global enterprises and solve business challenges that matter 

  • Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities 

  • Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day 

  • Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress 

  • Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. 

    Let’s build tomorrow together. 


    Big Data Developer

Gurugram, Haryana Ravant Media

    Posted 8 days ago

    Job Description

    We are seeking a highly skilled Big Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining robust data pipelines that handle high-volume financial data, including stocks, cryptocurrencies, and third-party data sources. You will play a critical role in ensuring data integrity, scalability, and real-time availability across our platforms.


Key Responsibilities:

    • Design, develop, and manage end-to-end data pipelines for stocks, crypto, and other financial datasets.
    • Integrate third-party APIs and data feeds into internal systems.
    • Build and optimize data ingestion, storage, and transformation workflows (batch and real-time).
    • Ensure data quality, consistency, and reliability across all pipelines.
    • Collaborate with data scientists, analysts, and backend engineers to provide clean, structured, and scalable datasets.
    • Monitor, troubleshoot, and optimize pipeline performance.
    • Implement ETL/ELT best practices, data governance, and security protocols.
    • Contribute to the scalability and automation of our data infrastructure.


Requirements:

    • Proven experience as a Big Data Engineer / Data Engineer (preferably in financial or crypto domains).
    • Strong expertise in Python, SQL, and distributed data systems.
    • Hands-on experience with data pipeline tools (e.g., Apache Spark, Kafka, Airflow, Flink, Prefect).
    • Experience with cloud platforms (AWS, GCP, or Azure) and data warehousing (Snowflake, BigQuery, Redshift, etc.).
    • Knowledge of API integrations and handling real-time streaming data.
    • Familiarity with databases (relational and NoSQL) and data modeling.
    • Solid understanding of stocks, cryptocurrencies, and financial data structures (preferred).
    • Strong problem-solving skills with the ability to handle large-scale data challenges.

    Big Data Specialist

Gurugram, Haryana Brillio

    Posted 11 days ago

    Job Description

    Role Overview

    We are seeking a highly skilled Big Data Engineer to join our team. The ideal candidate will have strong experience in building, maintaining, and optimizing large-scale data pipelines and distributed data processing systems. This role involves working closely with cross-functional teams to ensure the reliability, scalability, and performance of data solutions.


    Key Responsibilities

    • Design, develop, and maintain scalable data pipelines and ETL processes.
    • Work with large datasets using Hadoop ecosystem tools (Hive, Spark).
    • Build and optimize real-time and batch data processing solutions using Kafka and Spark Streaming.
    • Write efficient, high-performance SQL queries to extract, transform, and load data.
    • Develop reusable data frameworks and utilities in Python.
    • Collaborate with data scientists, analysts, and product teams to deliver reliable data solutions.
    • Monitor, troubleshoot, and optimize big data workflows for performance and cost efficiency.


    Must-Have Skills

    • Strong hands-on experience with Hive and SQL for querying and data transformation.
    • Proficiency in Python for data manipulation and automation.
    • Expertise in Apache Spark (batch and streaming).
    • Experience working with Kafka for streaming data pipelines.


    Good-to-Have Skills

• Experience with workflow orchestration tools (e.g., Airflow)
    • Knowledge of cloud-based big data platforms (AWS EMR, GCP Dataproc, Azure HDInsight).
    • Familiarity with CI/CD pipelines and version control (Git).

Sr. Data Scientists - AI/ML - Gen AI - Work Location: Across India | Exp: 4-12 years

Gurugram, Haryana Capgemini Engineering

    Posted 4 days ago

    Job Description

Data Scientists - AI/ML - Gen AI - Across India | Exp: 4-10 years


Data scientists with a total of around 4-10 years of experience and at least 4-10 years of relevant experience in data science, analytics, and AI/ML. Key skills: Python; data science; AI/ML; Gen AI.


Primary Skills:


- Excellent understanding and hands-on experience with data science and machine learning techniques & algorithms for supervised & unsupervised problems, NLP, computer vision, and Gen AI. Good applied statistics skills, such as distributions, statistical inference & testing, etc.


- Excellent understanding and hands-on experience in building deep learning models for text & image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).


- Proficient in coding with common data science languages & tools such as R and Python.


- Experience with common data science toolkits, such as NumPy, Pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc.


- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.


- Exposure to or knowledge of cloud platforms (Azure/AWS).


- Experience deploying models in production.


    Big Data GCP Engineer

Gurugram, Haryana Boolean Staffing Recruitment Solutions Pvt Ltd

    Posted today

    Job Description

    Position Summary – Big Data GCP Developer

GCP + Big Data (any programming language)

    Location: Gurgaon

    Experience: 3.5-6.5 yrs

Work Mode: Hybrid

Interview Mode: Virtual

    Joining: Immediate joiners preferred

    Mandatory Skills:

• Strong experience in Big Data, PySpark, Hive, and GCP

    • Expertise in Spark optimization and performance tuning

    Role Responsibilities:

    • Build and optimize large-scale data processing pipelines using PySpark

    • Tune Spark jobs for better efficiency and cost optimization

    • Collaborate with cross-functional teams to develop and deploy scalable solutions

    • Ensure best practices in coding, version control, and Agile

• Skillset Required: PySpark, GCP, Big Data, SQL

    Big Data Solution Architect

Gurugram, Haryana EPAM

    Posted today

    Job Description

    Description

    EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

    We are looking for Solution Architects for data-driven projects to join our Data Practice team in India. Together we design and drive lots of solutions that generate value from data, taking advantage of scalable platforms, cutting-edge technologies, and machine learning algorithms. We provide a solid architecture framework, educational programs, and a strong SA community to support our new Architects in a deep dive into the data domain.


    Responsibilities

  • Design data analytics solutions by utilising the big data technology stack
  • Create and present solution architecture documents with deep technical details
  • Work closely with business in identifying solution requirements and key case-studies/scenarios for future solutions
  • Conduct solution architecture review/audit, calculate and present ROI
  • Lead implementation of the solutions from establishing project requirements and goals to solution "go-live"
• Participate in the full cycle of pre-sale activities: direct communications with customers, RFP processing, development of proposals for the implementation and design of the solution, presentation of the proposed solution architecture to the customer, and participation in technical meetings with customer representatives
  • Create and follow personal education plan in the technology stack and solution architecture
  • Maintain a strong understanding of industry trends and best practices
  • Get involved in engaging new clients to further drive EPAM business in the big data space
Requirements

• Minimum of 12+ years of experience
  • Strong hands-on experience as a Big Data Architect with a solid design/development background with Java, Scala, or Python
  • Experience delivering data analytics projects and architecture guidelines
  • Experience in big data solutions on premises and on the cloud (Amazon Web Services, Microsoft Azure, Google Cloud)
• Production project experience in at least one of the following big data technologies:
  • Batch processing: Hadoop and MapReduce/Spark/Hive
  • NoSQL databases: Cassandra/HBase/Accumulo/Kudu
  • Knowledge of Agile development methodology, Scrum in particular
  • Experience in direct customer communications and pre-selling business-consulting engagements to clients within large enterprise environments
  • Experience working within a consulting business and pre-sales experience would be highly attractive
We offer

  • Opportunity to work on technical challenges that may impact across geographies
  • Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
  • Opportunity to share your ideas on international platforms
  • Sponsored Tech Talks & Hackathons
  • Unlimited access to LinkedIn learning solutions
  • Possibility to relocate to any EPAM office for short and long-term projects
  • Focused individual development
• Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
  • Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)

    Java Big Data Developer

Gurugram, Haryana Virtusa

    Posted today

    Job Description

Java Big Data Developer - CREQ

Description

Responsibilities include, but are not limited to: Develops and tests software, including ongoing refactoring of code, and drives continuous improvement in code structure and quality. Primary focus is spent writing code and API specs, conducting code reviews, and testing in ongoing sprints, or doing proofs of concept and automation tools. Applies visualization and other techniques to fast-track concepts. Functions as a core member of an Agile team, driving user story analysis and elaboration, design and development of software applications, testing, and building automation tools. Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority. Identifies opportunities to adopt innovative technologies and build reusable components. Ensures timely and effective communication with the reporting manager.

Strong programming knowledge in Java. Solid understanding of data structures, algorithms, and design patterns is required.

Strong SQL knowledge is required
Hands-on experience with or knowledge of Big Data technologies (at least MapReduce, Hive, and HBase)
Understanding and experience with UNIX/Shell/Python scripting
Database query optimization and indexing
Web services design and implementation using REST/SOAP

Primary: Java API and SQL
Secondary: Big Data

Primary Location: Gurgaon, Haryana, India | Job Type: Experienced | Primary Skills: JavaScript Development | Years of Experience: 6 | Travel: No

    Big Data Engineer - Scala

Gurugram, Haryana Idyllic Services

    Posted 1 day ago

    Job Description

    Job Title: Big Data Engineer – Scala

    Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

    Experience: 7–10 Years (Minimum 3+ years in Scala)

    Notice Period: Immediate to 30 Days

    Mode of Work: Hybrid


    Role Overview

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.


    Key Responsibilities

    - Design, develop, and optimize large-scale data pipelines and distributed data processing systems.

    - Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.

- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow.

- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.

- Collaborate with cross-functional teams to design scalable cloud-based data architectures.

- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.

- Build monitoring and alerting systems leveraging Splunk or equivalent tools.

    - Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.

    - Contribute to product development with a focus on scalability, maintainability, and performance.


    Mandatory Skills

    - Scala – Minimum 3+ years of hands-on experience.

- Strong expertise in Spark (PySpark) and Python.

- Hands-on experience with Apache Kafka.

- Knowledge of NiFi/Airflow for orchestration.

- Strong experience in Distributed Data Systems (5+ years).

- Proficiency in SQL and query optimization.

- Good understanding of Cloud Architecture.


    Preferred Skills

    - Exposure to messaging technologies like Apache Kafka or equivalent.

    - Experience in designing intuitive, responsive UIs for data analytics visualization.

- Familiarity with Splunk or other monitoring/alerting solutions.

- Hands-on experience with CI/CD tools (Git, Jenkins).

- Strong grasp of software engineering concepts, data modeling, and optimization techniques.


    Lead Big Data Software Engineer

Gurugram, Haryana EPAM

    Posted today

    Job Description

    Description

    EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

    If you are a seasoned Lead Big Data Software Engineer looking for a challenging and rewarding opportunity, we invite you to apply and become an integral part of our dynamic team. As a Lead Engineer, you will play a crucial role in designing and implementing Big Data solutions for our diverse range of projects. The ideal candidate should have extensive experience in Big Data and related technologies, with a focus on Apache Spark, Java, Scala, and AWS.


    Responsibilities

  • Design and implement end-to-end Big Data solutions for complex business requirements
  • Collaborate with cross-functional teams to understand project requirements and deliver high-quality software solutions
  • Utilize your expertise in Apache Spark, Java, Scala, and AWS to develop scalable and efficient data processing systems
  • Ensure the performance, security, and scalability of Big Data applications
  • Stay current with industry trends and advancements in Big Data technologies, contributing to the continuous improvement of our development processes
Requirements

  • 8-12 years of hands-on experience in Big Data and Data-related technologies
  • Expert-level knowledge and practical experience with Apache Spark
  • Proficient programming skills in Java and/or Scala
  • Strong proficiency with Hadoop and Hive
  • Experience working with native Cloud data services, specifically AWS
We offer

  • Opportunity to work on technical challenges that may impact across geographies
  • Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
  • Opportunity to share your ideas on international platforms
  • Sponsored Tech Talks & Hackathons
  • Unlimited access to LinkedIn learning solutions
  • Possibility to relocate to any EPAM office for short and long-term projects
  • Focused individual development
• Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
  • Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)