270 Hadoop jobs in Delhi

Data Processing Agency

Delhi, Delhi Satyam Drugs

Posted today

Job Description

We are looking for a reliable agency that can provide a team of 20 Virtual Assistants to help with a large-scale website data review and processing project. The ideal agency should have a team ready to start immediately, with experience in data entry, web research, and bulk processing tasks.

Project Details:

- Task: Reviewing and processing data from a website
- Team Size Needed: 20 Virtual Assistants
- Workload: High-volume tasks requiring speed and accuracy
- Estimated Hours: Flexible, but each VA should be available for at least 20-30 hours per week
- Tools: Google Sheets, website logins (credentials provided), and web-based tools (see the sketch after this list)
- Training: Brief training will be provided before starting
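
As a rough illustration of the spreadsheet-driven quality control this project calls for, here is a minimal Python sketch using the gspread library. The sheet name, column layout, and credential file are hypothetical, not details from this posting.

```python
# Minimal QC sketch for a Google Sheets review queue.
# Hypothetical sheet/column names; needs a gspread service-account key.
import gspread

gc = gspread.service_account(filename="service_account.json")
ws = gc.open("VA Review Queue").sheet1  # hypothetical sheet name

rows = ws.get_all_records()  # list of dicts keyed by the header row
for i, row in enumerate(rows, start=2):  # row 1 is the header
    url = str(row.get("Website URL", "")).strip()
    status = "OK" if url.startswith("http") else "NEEDS REVIEW"
    ws.update_cell(i, 3, status)  # write the QC flag to column C
```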

Requirements for the Agency:
- Ability to quickly deploy a team of 20 VAs
- Experience handling large-scale data processing or similar tasks
- Strong quality control processes to ensure accuracy
- Project manager or team lead to oversee work and ensure deadlines are met
- Proven track record with similar high-volume projects

Pay: ₹9,291.20 - ₹20,000.00 per month

Schedule:

- Monday to Friday
- Morning shift
- Evening shift
- Night shift

Supplemental Pay:

- Performance bonus

Application Question(s):

- Are you an agency?
- Can you provide 20 data entry executives?

Work Location: In person

Data Scientist - Natural Language Processing

East Of Kailash, Delhi Esri

Posted today

Job Description

Overview

Esri is the world leader in geographic information systems (GIS) and developer of ArcGIS, the leading mapping and analytics software, used in 75 percent of Fortune 500 companies. At the Esri R&D Center-New Delhi, we are applying cutting-edge AI and deep learning techniques to revolutionize geospatial analysis and derive insight from imagery and location data. We are passionate about applying data science and artificial intelligence to solve some of the world's biggest challenges.

Our team develops tools, APIs, and AI models for geospatial analysts and data scientists, enabling them to leverage the latest research in spatial data science, AI and geospatial deep learning.

As a Data Scientist, you will develop deep learning models using libraries such as PyTorch and create APIs and tools for training and deploying them on satellite imagery. If you are passionate about deep learning applied to remote sensing and GIS, developing AI and deep learning models, and love maps or geospatial datasets/imagery, this is the place to be!
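
As a rough illustration of the training side of this role, here is a minimal PyTorch/torchvision fine-tuning sketch for imagery classification. The chip folder, class layout, and model choice are hypothetical, not Esri's actual pipeline.

```python
# Minimal fine-tuning sketch for image-chip classification.
# Hypothetical dataset path; ImageFolder expects one subfolder per class.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
ds = datasets.ImageFolder("chips/", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ds.classes))  # new head

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:  # one epoch shown for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```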

Responsibilities

  • Develop tools, APIs and pretrained models for geospatial AI
  • Integrate ArcGIS with popular deep learning libraries such as PyTorch
  • Fine-tune large language models (LLMs) for geospatial AI tasks and develop AI agents and assistants
  • Develop APIs and model architectures for natural language processing and deep learning on unstructured text
  • Author and maintain geospatial data science samples using ArcGIS and machine learning/deep learning libraries
  • Curate and pre/post-process data for deep learning models and transform it into geospatial information
  • Perform comparative studies of various deep learning model architectures

Requirements

  • 2 to 6 years of experience with Python in data science and deep learning
  • Self-learner with coursework in and extensive knowledge of machine learning and deep learning
  • Experience with Python machine learning and deep learning libraries such as PyTorch, Scikit-learn, NumPy, Pandas
  • Expertise in one or more of these areas:
    • Transformer-based models
    • Large language models and experience building applications using them
    • NLP tasks such as recommender systems, summarization, and more
  • Experience in data visualization in Jupyter Notebooks using matplotlib and other libraries
  • Experience with hyperparameter-tuning and training models to a high level of accuracy
  • Bachelor's in computer science, engineering, or related disciplines from IITs and other top-tier engineering colleges
  • Existing work authorization for India

Recommended Qualifications

  • Familiarity with ArcGIS suite of products and concepts of GIS
  • Familiarity and experience using langchain/AutoGPT/BabyAGI

    About Esri

At Esri, diversity is more than just a word on a map. When employees of different experiences, perspectives, backgrounds, and cultures come together, we are more innovative and ultimately a better place to work. We believe in having a diverse workforce that is unified under our mission of creating positive global change. We understand that diversity, equity, and inclusion is not a destination but an ongoing process. We are committed to the continuation of learning, growing, and changing our workplace so every employee can contribute to their life's best work. Our commitment to these principles extends to the global communities we serve by creating positive change with GIS technology. For more information on Esri's Racial Equity and Social Justice initiatives, please visit our website.

If you don't meet all of the preferred qualifications for this position, we encourage you to still apply!

    Esri is an equal opportunity employer (EOE) and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need reasonable accommodation for any part of the employment process, please email and let us know the nature of your request and your contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this e-mail address.

    Esri takes our responsibility to protect your privacy seriously. We are committed to respecting your privacy by providing transparency in how we acquire and use your information, giving you control of your information and preferences, and holding ourselves to the highest national and international standards, including CCPA and GDPR compliance.

    Requisition ID: -


    Big Data Engineer - Scala

    Narela, Delhi Idyllic Services

    Posted 1 day ago

    Job Description

    Job Title: Big Data Engineer – Scala

    Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

    Experience: 7–10 Years (Minimum 3+ years in Scala)

    Notice Period: Immediate to 30 Days

    Mode of Work: Hybrid


    Role Overview

We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.


    Key Responsibilities

- Design, develop, and optimize large-scale data pipelines and distributed data processing systems.
- Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.
- Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow (see the sketch after this list).
- Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.
- Collaborate with cross-functional teams to design scalable cloud-based data architectures.
- Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.
- Build monitoring and alerting systems leveraging Splunk or equivalent tools.
- Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.
- Contribute to product development with a focus on scalability, maintainability, and performance.
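
As a rough illustration of the Kafka-to-lake streaming work described above, here is a minimal PySpark Structured Streaming sketch. The topic, broker, and paths are hypothetical, and the job needs the spark-sql-kafka connector on the classpath.

```python
# Minimal Structured Streaming sketch: Kafka in, Parquet out.
# Hypothetical topic/broker/paths; requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")            # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS raw"))

query = (events
    .filter(col("raw").isNotNull())           # drop empty records
    .writeStream
    .format("parquet")
    .option("path", "/data/events")           # hypothetical sink path
    .option("checkpointLocation", "/data/_chk")
    .start())
query.awaitTermination()
```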


    Mandatory Skills

- Scala – minimum 3+ years of hands-on experience.
- Strong expertise in Spark (PySpark) and Python.
- Hands-on experience with Apache Kafka.
- Knowledge of NiFi/Airflow for orchestration.
- Strong experience in distributed data systems (5+ years).
- Proficiency in SQL and query optimization.
- Good understanding of cloud architecture.


    Preferred Skills

- Exposure to messaging technologies like Apache Kafka or equivalent.
- Experience designing intuitive, responsive UIs for data analytics visualization.
- Familiarity with Splunk or other monitoring/alerting solutions.
- Hands-on experience with CI/CD tools (Git, Jenkins).
- Strong grasp of software engineering concepts, data modeling, and optimization techniques.

    Data Engineer

    Delhi, Delhi Deloitte

    Posted 4 days ago

    Job Description

    Your potential, unleashed.


India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond.


At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.


    The team

As a member of the Operations Transformations team, you will embark on an exciting and fulfilling journey with a group of intelligent, innovative, and globally aware individuals.

We work in conjunction with various institutions, solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.


    Your work profile


    Job Title: Database Engineer

    Experience: 3+ Years

    Skills

• Design, develop, and maintain efficient and scalable ETL/ELT data pipelines using Python or PySpark (see the sketch after this list).
• Collaborate with data engineers, analysts, and stakeholders to understand data requirements and translate them into technical solutions.
• Perform data cleansing, transformation, and validation to ensure data quality and integrity.
• Optimize and troubleshoot performance issues in data processing jobs.
• Implement data integration solutions for various sources, including databases, APIs, and file systems.
• Participate in code reviews, testing, and deployment processes.
• Maintain proper documentation for data workflows, systems, and best practices.
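
As a rough illustration of the ETL work in the first bullet, here is a minimal PySpark sketch covering load, cleanse, validate, and write. The paths and column names are hypothetical.

```python
# Minimal PySpark ETL sketch: read raw CSV, cleanse, validate, write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim

spark = SparkSession.builder.appName("etl-example").getOrCreate()

orders = spark.read.option("header", True).csv("/raw/orders.csv")

clean = (orders
    .withColumn("customer_id", trim(col("customer_id")))   # cleanse
    .dropDuplicates(["order_id"])                          # dedupe
    .filter(col("amount").cast("double") > 0))             # validate

clean.write.mode("overwrite").parquet("/curated/orders")
```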


    Qualifications:

• Bachelor's degree in Computer Science, Engineering, or a related field.
• 3 to 5 years of hands-on experience as a data developer.
• Proficient in Python and/or PySpark for data processing.
• Experience working with big data platforms such as Hadoop, Spark, or Databricks.
• Strong understanding of relational databases and SQL.
• Familiarity with data warehousing concepts and tools (e.g., Snowflake, Redshift, BigQuery) is a plus.
• Knowledge of cloud platforms (AWS, Azure, or GCP) is an advantage.


    How you’ll grow


    Connect for impact


Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.


    Empower to lead


    You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.


    Inclusion for all


    At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.





    Drive your career


At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up-/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.



    Everyone’s welcome… entrust your happiness to us

    Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.


    Interview tips


    We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.


    Data engineer

    Delhi, Delhi Incept Labs

    Posted today

    Job Description

Permanent

Position: Software Engineer (Data)
Location: Remote, India

At Incept Labs, we believe the future of education and research lies in humans and AI working together side by side. AI brings the ability to process knowledge at scale, while people contribute imagination, values, and lived experience. When combined, they create a partnership where each strengthens the other, opening new ways to discover, adapt, and grow.

We are a small team of scientists, engineers, and builders who are passionate about building domain-specific, next-generation AI solutions to enhance education and research.

About This Role

We're looking for a Software Engineer with deep expertise in large-scale data processing for LLM development. Data engineering is critical to successful model training and evaluation. You'll work directly with researchers to accelerate experiments, develop new datasets, improve infrastructure efficiency, and enable key insights across our data assets.

You'll join a high-impact, compact team responsible for both the architecture and scaling of Incept's data and model development infrastructure, and work with highly complex, multi-modal data.

Responsibilities

- Design, build, and operate scalable, fault-tolerant data infrastructure to support distributed computing and data orchestration for LLM research
- Develop and maintain high-throughput systems for data ingestion, processing, and transformation to support LLM model development (see the sketch at the end of this posting)
- Develop synthetic datasets using state-of-the-art solutions
- Collaborate with research teams to deliver critical data assets for model development and evaluation
- Implement and maintain monitoring and alerting to support platform reliability and performance
- Build systems for traceability, reproducibility, and robust quality control to ensure adherence to industry compliance standards

Required Qualifications

- 5+ years of experience in data infrastructure, ideally supporting high-scale applications or research platforms
- Fluent in distributed computing frameworks
- Deeply familiar with cloud infrastructure, data storage architectures, and batch and streaming pipelines
- Experience with specialized hardware (GPUs, TPUs) and GPU clusters
- Strong knowledge of databases, storage systems, and how architecture choices impact performance at scale
- Familiar with microservices architectures, containerization and orchestration, and both synchronous and asynchronous processing
- Extensive experience with performance optimization and memory management in high-volume data systems
- Proactive about documentation, automation, testing, and empowering your teammates with good tooling

This role is fully remote, India-based. Compensation and benefits will vary based on background, skills, and experience levels.
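
As a rough illustration of one ingestion-and-cleaning step behind this role, here is a minimal Python sketch that normalizes and exact-dedupes JSONL text records for LLM training. Paths and field names are hypothetical; a production version would be distributed and would also handle near-duplicates.

```python
# Minimal text-ingestion sketch: normalize whitespace, exact-dedupe by
# content hash, write one cleaned JSONL shard. Hypothetical paths/fields.
import hashlib
import json
from pathlib import Path

seen = set()  # content hashes observed so far
with open("clean-shard-000.jsonl", "w", encoding="utf-8") as out:
    for path in Path("raw").glob("*.jsonl"):      # hypothetical input dir
        for line in path.open(encoding="utf-8"):
            rec = json.loads(line)
            text = " ".join(rec.get("text", "").split())  # normalize
            digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
            if text and digest not in seen:               # exact dedupe
                seen.add(digest)
                out.write(json.dumps({"text": text}) + "\n")
```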

    Data engineer

    Delhi, Delhi Astreya

    Posted today

    Job Description

Permanent

Data Engineer

Astreya offers comprehensive IT support and managed services. These services include Data Center and Network Management, Digital Workplace Services (like Service Desk, Audio Visual, and IT Asset Management), as well as Next-Gen Digital Engineering services encompassing Software Engineering, Data Engineering, and cybersecurity solutions. Astreya's expertise lies in creating seamless interactions between people and technology to help organizations achieve operational excellence and growth.

Job Description

We are seeking an experienced Data Engineer to join our analytics division. You will be aligned with our Data Analytics and BI vertical. You will conceptualize and own the build-out of problem-solving data marts for consumption by data science and BI teams, evaluating design and operational tradeoffs within systems.

- Design, develop, and maintain robust data pipelines and ETL processes using data platforms for the organization's centralized data warehouse (see the sketch at the end of this posting).
- Create or contribute to frameworks that improve the efficacy of logging data, while working with the Engineering team to triage and resolve issues.
- Validate data integrity throughout the collection process, performing data profiling to identify and understand data anomalies.
- Influence product and cross-functional (engineering, data science, operations, strategy) teams to identify data opportunities to drive impact.

Requirements

Experience & Education

- Bachelor's degree in Computer Science, Mathematics, a related field, or equivalent practical experience.
- 5 years of experience coding with SQL or one or more programming languages (e.g., Python, Java, R) for data manipulation, analysis, and automation.
- 5 years of experience designing data pipelines (ETL) and dimensional data modeling for synchronous and asynchronous system integration and implementation.
- Experience managing and troubleshooting technical issues, and working with Engineering and Sales Services teams.

Preferred Qualifications

- Master's degree in Engineering, Computer Science, Business, or a related field.
- Experience with cloud-based services relevant to data engineering: data storage, data processing, data warehousing, real-time streaming, and serverless computing.
- Experience with experimentation infrastructure and measurement approaches in a technology platform.
- Experience with data processing software (e.g., Hadoop, Spark) and algorithms (e.g., MapReduce, Flume).
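
As a rough illustration of the dimensional-modeling requirement, here is a minimal Python sketch that loads a small star schema (one dimension, one fact) using SQLite so it stays self-contained. The schema and rows are hypothetical.

```python
# Minimal star-schema load: upsert a dimension, insert facts keyed by it.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY,
                               customer_id TEXT UNIQUE, region TEXT);
    CREATE TABLE fact_sales (customer_key INTEGER, sale_date TEXT,
                             amount REAL);
""")

rows = [("C1", "APAC", "2024-01-05", 120.0), ("C2", "EMEA", "2024-01-06", 80.0)]
for cust_id, region, day, amount in rows:
    con.execute("INSERT OR IGNORE INTO dim_customer (customer_id, region) "
                "VALUES (?, ?)", (cust_id, region))
    key = con.execute("SELECT customer_key FROM dim_customer "
                      "WHERE customer_id = ?", (cust_id,)).fetchone()[0]
    con.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", (key, day, amount))

# Aggregate the fact table by a dimension attribute.
print(con.execute("""SELECT d.region, SUM(f.amount) FROM fact_sales f
                     JOIN dim_customer d USING (customer_key)
                     GROUP BY d.region""").fetchall())
```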

    Data engineer

    Delhi, Delhi Bahwan CyberTek

    Posted today

    Job Description

Permanent

Job Title: Data Engineer – Google Cloud Platform (GCP)

Job Summary

We are seeking a skilled and motivated Data Engineer with hands-on experience in building scalable data pipelines and cloud-native data solutions on Google Cloud Platform. The ideal candidate will be proficient in GCP services like Pub/Sub, Dataflow, Cloud Storage, and BigQuery, with a foundational understanding of AI/ML workflows using Vertex AI.

Key Responsibilities

- Design, develop, and optimize robust data ingestion pipelines using GCP services such as Pub/Sub, Dataflow, and Cloud Storage (see the sketch at the end of this posting).
- Architect and manage scalable BigQuery data warehouses to support analytics, reporting, and business intelligence needs.
- Collaborate with data scientists and ML engineers to support AI/ML workflows using Vertex AI (AO Vertex), including model training and deployment.
- Ensure data quality, reliability, and performance across all pipeline components.
- Work closely with cross-functional teams to understand data requirements and deliver efficient solutions.
- Maintain documentation and contribute to best practices in cloud data engineering.

Required Skills & Qualifications

- 3–6 years of experience in data engineering, with strong exposure to GCP.
- Proficiency in GCP services: Pub/Sub, Dataflow, Cloud Storage, and BigQuery.
- Solid understanding of data modeling, ETL/ELT processes, and performance optimization.
- Experience with Python, SQL, and cloud-native development practices.
- Familiarity with CI/CD pipelines and version control (e.g., Git).
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

Secondary Skills (Interview-Ready Knowledge)

- Basic understanding of AI/ML workflows and tools within Vertex AI.
- Ability to discuss model lifecycle, deployment strategies, and integration with data pipelines.
- Awareness of MLOps principles and cloud-based ML orchestration.
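
As a rough illustration of the Pub/Sub-to-BigQuery ingestion described above, here is a minimal Apache Beam sketch in Python. The project, topic, table, and schema are hypothetical; in practice the pipeline would run on Dataflow.

```python
# Minimal Beam sketch: read Pub/Sub, parse JSON, write to BigQuery.
# Hypothetical project/topic/table names.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

opts = PipelineOptions(streaming=True)
with beam.Pipeline(options=opts) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/events")
     | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
     | "Write" >> beam.io.WriteToBigQuery(
           "my-proj:analytics.events",              # hypothetical table
           schema="user_id:STRING,amount:FLOAT64",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```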

    Data engineer

    Delhi, Delhi Firstsource

    Posted today

    Job Viewed

    Tap Again To Close

    Job Description

Permanent

Key Skills and Responsibilities

We are seeking a Senior Data Engineer with a strong background in cloud-native data engineering, primarily on Microsoft Azure, and familiarity with AWS and GCP. The ideal candidate will have deep expertise in building scalable data pipelines and in implementing enterprise-grade data governance, security, and AI-powered engineering automation.

This role will play a pivotal part in designing, developing, and optimizing data ingestion, transformation, and governance frameworks, enabling real-time and batch data analytics across our data platform.

Kindly click on the below link to apply: Uh P5s PEJ

Key Responsibilities

- Design and implement scalable, robust, and secure data pipelines using Azure Data Factory, Databricks, Synapse Analytics, Azure Data Lake Gen2, Event Hub, and Azure Functions (see the sketch at the end of this posting).
- Develop and maintain ETL/ELT processes for structured and unstructured data.
- Implement Change Data Capture (CDC), streaming pipelines, and batch ingestion.
- Work with AWS Glue, S3, and Redshift or GCP BigQuery and Dataflow as needed.
- Optimize performance and cost across data workloads and cloud environments.
- Establish and maintain data quality frameworks, observability tools, and monitoring dashboards.
- Define and enforce data rules, validations, anomaly detection, and reconciliation.
- Implement data lineage and metadata tracking using Azure Purview.
- Drive adoption of data cataloguing, profiling, and classification tools.
- Collaborate with data stewards and compliance teams to ensure governance alignment.
- Implement role-based access control (RBAC), data masking, encryption, and tokenization.
- Take responsibility for technical design, coding, unit testing, test case documentation, and walkthroughs for all assigned Azure-related projects to support company business and operational needs.
- Ensure developed software follows the defined programming standards and the code/design review processes.
- Critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, abstract up from low-level information to a general understanding, and distinguish user requests from the underlying true needs.
- Collaborate with developers and subject matter experts to establish the technical vision and analyse trade-offs between usability and performance needs.
- Mentor junior data engineers and contribute to technical best practices.

Qualification & Experience

Technical Skills:

- 4+ years in data engineering, with at least 2+ years on the Azure cloud platform
- Hands-on with Azure Data Factory, Azure Data Lake Gen2, Databricks, Synapse, and Purview
- Proficiency in SQL, PySpark, Python, and data orchestration tools
- Strong understanding of data architecture patterns: lakehouse, medallion, delta architecture
- Familiarity with Snowflake, BigQuery, Redshift, or AWS Glue
- Experience with data versioning and GitOps for data
- Working knowledge of data observability, lineage, cataloguing, and data quality
- Exposure to privacy-enhancing techniques, access control, and security auditing
- Exposure to machine learning use cases in data engineering pipelines: data quality, anomaly detection, and schema change detection
- Exposure to GenAI or agentic AI to automate data cataloguing, metadata enrichment, etc.
- Experience with NLP or LLMs in metadata extraction or data classification is a plus

Soft Skills:

- Strong problem-solving, communication, and stakeholder collaboration skills
- Ability to lead data initiatives and mentor team members
- Proactive in learning and adopting emerging technologies in data and AI

Qualification:

- Candidates must be BE/BTech/MCA with 4 to 8 years' experience in Data Engineering.

Personal Attributes/Traits

- Consultative
- Socially confident
- Achievement oriented
- Decisive and action oriented
- Creative
- Eager to learn
- Resilient

Competencies

- Business Foresight
- Influencing Others
- Fostering Partnerships With Customers
- Managing Transformation
- Driving Excellence
- Leading Teams
- Working Across Boundaries
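
As a rough illustration of the Azure/Databricks pipeline work in the first responsibility, here is a minimal medallion-style (bronze to silver) PySpark sketch with a basic data-quality rule. Paths and columns are hypothetical; on Databricks the Spark session and Delta support are preconfigured.

```python
# Minimal bronze-to-silver sketch: dedupe, validate, stamp, write Delta.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, current_timestamp

spark = SparkSession.builder.appName("medallion").getOrCreate()

bronze = spark.read.json("/lake/bronze/orders")        # raw, as-ingested

silver = (bronze
    .dropDuplicates(["order_id"])
    .filter(col("order_id").isNotNull())               # data-quality rule
    .withColumn("processed_at", current_timestamp()))  # lineage stamp

silver.write.format("delta").mode("overwrite").save("/lake/silver/orders")
```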