1,931 Python Pyspark jobs in India

Python Pyspark

Chennai, Tamil Nadu Virtusa


Job Description

Overall eight or more years of experience, with a minimum of five years in the required technical skills

Required technical skills: Python, PySpark, SQL

AWS Cloud experience and Healthcare domain knowledge are desired

**About Virtusa**

Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a global team of 36,000 people that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us.

Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.

Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Job No Longer Available

This position is no longer listed on WhatJobs. The employer may be reviewing applications, filled the role, or has removed the listing.

However, we have similar jobs available for you below.

Python pyspark

Hyderabad, Andhra Pradesh Virtusa

Posted today


Job Description

Python pyspark - CREQ190721

Description
Design and develop Hadoop applications.
Hands-on experience developing jobs in PySpark with Python/Scala (preferred) or Java/Scala.
Experience with Core Java, MapReduce programs, Hive programming, and Hive query performance concepts.
Experience with source code management using Git repositories.

Secondary skills
Exposure to the AWS ecosystem with hands-on knowledge of EC2, S3, and related services.
Basic SQL programming.
Knowledge of agile methodology for delivering software solutions.
Build scripting with Maven/Gradle; exposure to Jenkins.

Primary Location: Hyderabad, Andhra Pradesh, India | Job Type: Experienced | Primary Skills: Python, PySpark | Years of Experience: 4 | Travel: No
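The PySpark jobs this listing describes mostly revolve around the groupBy/agg pattern. For readers without a Spark cluster at hand, that pattern can be mirrored with the standard library alone; the records and column names below are invented for illustration, and in real PySpark the same shape would be written as `df.groupBy(...).agg(...)`.

```python
from collections import defaultdict

# Toy records standing in for a DataFrame; the columns are invented.
rows = [
    {"city": "Hyderabad", "amount": 120.0},
    {"city": "Chennai",   "amount": 80.0},
    {"city": "Hyderabad", "amount": 40.0},
]

def group_sum(records, key, value):
    """Pure-Python mirror of df.groupBy(key).agg(sum(value))."""
    totals = defaultdict(float)
    for r in records:
        totals[r[key]] += r[value]
    return dict(totals)

result = group_sum(rows, "city", "amount")
print(result)  # {'Hyderabad': 160.0, 'Chennai': 80.0}
```

The same grouping, expressed in Spark, would be distributed across executors rather than looped in one process; the shape of the logic is what interviewers for roles like this tend to probe.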

Python+Pyspark

Bengaluru, Karnataka Wissen

Posted today


Job Description

Wissen Technology is now hiring: Python+Pyspark

We are seeking a skilled Python Developer with a strong background in PySpark to develop and optimize data processing applications. The ideal candidate will be responsible for building robust and scalable data processing solutions using Python and PySpark.

Experience: 4-8 years

Location: Bangalore

Responsibilities:

  • Develop, maintain, and optimize scalable data processing applications using Python and PySpark.

  • Design and implement data solutions that meet performance and reliability requirements.

  • Collaborate with data engineers, data scientists, and other stakeholders to gather requirements and deliver high-quality solutions.

  • Write clean, efficient, and maintainable code following best practices and coding standards.

  •  Perform data analysis and ensure data quality and integrity.

  • Monitor and troubleshoot performance issues in the data processing pipelines.

  •  Implement and maintain CI/CD pipelines for automated testing and deployment.

  • Stay up-to-date with the latest industry trends and technologies in Python and PySpark

Required Skills and Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.

  • Proven experience as a Python Developer with expertise in PySpark.

  • Strong knowledge of Python and its libraries (e.g., Pandas, NumPy).

  •  Experience with Apache Spark, including Spark SQL, DataFrames, and Spark Streaming.

  • Proficiency in SQL and experience with relational databases.

  • Familiarity with big data tools and frameworks.

  •  Experience with version control systems such as Git.

  • Strong problem-solving skills and attention to detail.

  • Excellent communication and teamwork skills.

About Wissen Technology:

    The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.

    We offer an array of services including Core Business Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud Adoption, Mobility, Digital Adoption, Agile & DevOps, Quality Assurance & Test Automation.

    Over the years, Wissen Group has successfully delivered $1 billion worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’.

    The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them with the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients.

    We have been certified as a Great Place to Work company for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider. Great Place to Work Certification is recognized the world over by employees and employers alike and is considered the 'Gold Standard'. Wissen Technology has created a Great Place to Work by excelling in all dimensions: High-Trust, High-Performance Culture, Credibility, Respect, Fairness, Pride, and Camaraderie.


    Python Pyspark Lead

    Bengaluru, Karnataka Virtusa

    Posted today


    Job Description

    Python Pyspark Lead - CREQ187226

    Description
    Design and develop Hadoop applications.
    Hands-on experience developing jobs in PySpark with Python/Scala (preferred) or Java/Scala.
    Experience with Core Java, MapReduce programs, Hive programming, and Hive query performance concepts.
    Experience with source code management using Git repositories.

    Secondary skills
    Exposure to the AWS ecosystem with hands-on knowledge of EC2, S3, and related services.
    Basic SQL programming.
    Knowledge of agile methodology for delivering software solutions.
    Build scripting with Maven/Gradle; exposure to Jenkins.

    Primary Location: Bangalore, Karnataka, India | Job Type: Experienced | Primary Skills: PySpark, Hive | Years of Experience: 7 | Travel: No

    Python Pyspark Data Engineer

    Bengaluru, Karnataka DXC Technology

    Posted today


    Job Description

    Job Description:

    Python Pyspark Data Engineer

    Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida/ Gurgaon / Pune / Indore / Mumbai

    We are seeking a skilled Lead Data Engineer with strong programming and SQL skills to join our team. The ideal candidate will have hands-on experience with Python, PySpark, and AWS Data Analytics services, and a basic understanding of general AWS services.

    Key Responsibilities:

  • Design, develop, and optimize data pipelines using Python, PySpark, and AWS Data Analytics services such as RDS, DMS, Glue, Lambda, Redshift, and Athena.
  • Implement data migration and transformation processes using AWS DMS and Glue.
  • Work with SQL (Oracle & Postgres) to query, manipulate, and analyse large datasets.
  • Develop and maintain ETL/ELT workflows for data ingestion and transformation.
  • Utilize AWS services like S3, IAM, CloudWatch, and VPC to ensure secure and efficient data operations.
  • Write clean and efficient Python scripts for automation and data processing.
  • Collaborate with DevOps teams using Azure DevOps for CI/CD pipelines and infrastructure management.
  • Monitor and troubleshoot data workflows to ensure high availability and performance.
    At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.
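The SQL work this role describes, querying and manipulating datasets in Oracle and Postgres, follows the same pattern in any relational engine. A minimal sketch using Python's built-in sqlite3 as a stand-in for those databases; the orders table and its values are invented:

```python
import sqlite3

# In-memory database standing in for Oracle/Postgres; the schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "south", 10.0), (2, "north", 25.0), (3, "south", 5.0)],
)

# The query-and-aggregate step: total sales per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('north', 25.0), ('south', 15.0)]
```

Against Oracle or Postgres the connection line changes (a driver such as psycopg2 instead of sqlite3), but the GROUP BY/ORDER BY logic carries over unchanged.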


    Data Analyst - Python & PySpark

    111045 Idyllic Services Pvt Ltd

    Posted 266 days ago


    Job Description

    Permanent

    Job Title: Data Analyst - Python & PySpark

    Requirement:

    1. Coding ability in PySpark, Python, and SQL is a must for data analytics projects.
    2. Good understanding of data ingestion into on-prem and cloud environments; hands-on exposure is preferred.
    3. Ability to analyze and understand data analysis requirements.
    4. Strong attention to detail and communication skills for both technical and non-technical audiences.
    5. Demonstrated track record of taking ownership, being organized, and leading projects, with the ability to set and achieve the highest standards for the team.
    6. Proactive self-starter with the ability to identify problems and follow them through to resolution.
    7. Good communication and stakeholder management skills.
    8. Agile project delivery, preferably managed on JIRA.


    Senior Software Engineer (Python & Pyspark)

    Bengaluru, Karnataka The Nielsen Company

    Posted today


    Job Description

    About the role: Join the team as a Senior Python Software Engineer to design and develop the software used internationally for Television Audience Measurement in TV and Radio markets. As part of our modernization initiative we are implementing new features into our product. The environment is mainly based on AWS cloud solutions. Python is the language of choice for the development of features and tools for everyday use. Pandas and Spark are the technologies used for data transformation.

    Tech stack: Airflow, AWS, Bash, Python, Docker, Kubernetes, Linux, Pandas, Spark, CI/CD, Confluence, GitLab, Jira.

    Responsibilities:
    - Consult with stakeholders to determine the scope of software development projects.
    - Design, develop, test, and document new software features as per business requirements.
    - Contribute to the migration of current solutions to the AWS cloud.
    - Investigate application incidents for missing or incorrect functionality.
    - Perform code reviews.
    - Supervise the software development team.
    - Demonstrate patience and use effective explanations when mentoring junior software developers.

    Required competencies:
    - Strong knowledge of Python.
    - Experience with Pandas and Spark.
    - Basic Linux administration skills.
    - At least an intermediate level of English, both written and verbal.
    - 8 years of working experience in a field related to this position.
    - Good communication skills.

    Nice-to-have competencies:
    - Experience in AWS cloud application development.
    - Scrum/Agile development experience.
    - Experience writing Airflow DAGs.
    - Experience writing GitLab CI/CD pipelines.

    Data Scientist (Python, Pyspark, SQL)

    Bengaluru, Karnataka Nielsen

    Posted today


    Job Description

    At Nielsen, we believe that career growth is a partnership. You ultimately own, fuel and set the journey. By joining our team of nearly 14,000 associates, you will become part of a community that will help you to succeed. We champion you because when you succeed, we do too. Embark on a new initiative, explore a fresh approach, and take license to think big, so we can all continuously improve. We enable your best to power our future.

    **About the Role**

    You enjoy bridging the fields of data science, software development, and data engineering. You’re equally excited about building models at scale and writing production-ready software that can run in the cloud. You understand machine learning and know how to implement best software development practices across a team. You’re intellectually curious and prepared to learn from your peers.

    **Responsibilities**:

    - Build measurement and planning solutions for publishers, advertisers, and agencies.
    - Support reproducible data science projects end-to-end.
    - Deploy and maintain data pipelines and models in a production environment.
    - Work with cross-functional teams to productionize, validate, and optimize methodologies.
    - Communicate methodology and research findings to varying audiences.
    - Support research on methodology changes to cross-platform audience measurement. The primary research areas include trend analyses, imputing missing data, representation/sampling, bias reduction, indirect estimation, data integration, and automation.
    - Address quality escapes and fix issues in production code.
    - Document new methodologies and code.
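One of the research areas above, imputing missing data, has a simple baseline that needs no modelling library: replace each missing value with the mean of the observed ones. The series below is invented for illustration; production methodologies would be far more sophisticated than this sketch.

```python
from statistics import mean

def impute_mean(series):
    """Replace None entries with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    fill = mean(observed)
    return [fill if x is None else x for x in series]

# Toy audience-rating series with two missing observations.
ratings = [4.0, None, 2.0, None, 3.0]
print(impute_mean(ratings))  # [4.0, 3.0, 2.0, 3.0, 3.0]
```

Mean imputation preserves the sample mean but shrinks the variance, which is exactly the kind of bias the "bias reduction" research area is concerned with.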

    **Technical Skills**:

    - 0-3 years work experience in Python, Spark, SQL.
    - Degree in data science, statistics, engineering, applied mathematics, operations research, information sciences, or another biological/physical science.
    - Strength in code documentation.
    - Proficiency in Git and code versioning tools (GitLab).
    - Proficiency in the Atlassian suite, such as JIRA and Confluence.
    - Familiarity with cloud computing (AWS, Google Cloud preferred).
    - Knowledge of statistics and machine learning.
    - Ability to manipulate, analyze, and interpret large datasets.
    - Knowledge of dashboarding and visualization tools like Spotfire/Tableau.

    **Business Skills**:

    - Excellent oral and written communication.
    - Self-motivation and an ability to handle multiple competing priorities in a fast-paced environment.
    - Strong interpersonal skills and the ability to develop effective relationships with other team members, including remotely.

    Senior Software Engineer (Python, PySpark, Airflow, AWS)

    Pune, Maharashtra Autodesk

    Posted today


    Job Description

    **Job Requisition ID #**
    24WD82721
    **Position Overview**
    Autodesk is looking for a Senior Software Engineer to join the Data Ingestion team within the Analytics Data organization.
    The Enterprise Data Integration (EDI) team maintains a collection of systems and integrations focused on servicing the enterprise data needs of all data scientists, data analysts, and data engineers throughout the organization.
    As a Senior Software Engineer, you will be responsible for developing best practices and making architectural choices to rapidly improve critical data processing & analytics pipelines. You will collaborate with highly motivated and wonderful software engineers. You will lead and support innovative solutions to sophisticated and modern engineering problems. As part of the team, you will learn, teach, grow, and help bring data closer to our users. You will make critical choices, tackle hard problems and improve the platform's reliability, resiliency, and scalability.
    We are looking for someone who is enthusiastic about working in a team, can own and deliver long-term projects to completion. You are detail and quality oriented, and excited about the prospects of having a big impact with data at Autodesk.
    **Responsibilities**
    + Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale
    + You will need a product-focused mindset. It is essential for you to understand business requirements and architect systems that will scale and extend to accommodate those needs
    + Diagnose and solve complex problems in distributed systems, develop and document technical solutions and sequence work to make fast, iterative deliveries and improvements
    + Build and maintain high-performance, fault-tolerant and scalable distributed systems that can handle our massive scale
    + Ideate and drive innovative projects that will improve user experience
    + Provide solid leadership within your very own problem space, through data-driven approach, robust software designs, and effective delegation
    + Participate in, or spearhead design reviews with peers and stakeholders to adopt what's best suited amongst available technologies.
    + Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency).
    + Automate cloud infrastructure, services, and observability
    + Develop CI/CD pipelines and testing automation
    + Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools
    + Groom junior engineers through mentoring and delegation
    + Drive a culture of trust, respect and inclusion within your team.
    **Minimum Qualifications**
    + 5+ years of relevant industry experience in large back-end distributed systems and cloud computing.
    + Strong overall programming skills, able to write modular, maintainable code, preferably Python & SQL
    + Experience with Spark & Airflow (mandatory)
    + Experience building code-driven infrastructure on public cloud platforms, preferably AWS
    + Understanding of SQL, dimensional modeling, and at least one relational database
    + Experience with automation frameworks/tools like Git, Jenkins, Ansible, and Terraform
    + Familiarity with containers and infrastructure-as-code fundamentals
    + Solid Proficiency with Amazon Web Services
    + Problem solver with excellent written and interpersonal skills; ability to make sound, complex recommendations in a fast-paced, technical environment
    + Humble, collaborative, team player, willing to step up and support your colleagues
    + Effective communication, problem solving and interpersonal skills
    + Commit to grow deeper in the knowledge and understanding of how to improve our existing applications
    + Enthusiasm for cutting edge technologies, complex problems, and building things
    + Familiar with non-functional testing such as load, performance and resiliency testing
    + Good command of English (Speaking, Writing and Reading)
    + Working in an agile environment using test driven methodologies.
    + Bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship or work experience
    **Desired Qualifications**
    + Experience with data processing and SQL databases
    + Experience with Hadoop / Spark Source Code
    + Experience with Map Reduce
    + Experience with Hive and/or Snowflake
    + Strong knowledge and experience in Hadoop 2.0 and its ecosystem.
    + Experience with Airflow
    + Experience with data processing and SQL databases and DBT
    + Experience in microservices based architecture
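Airflow, required above, schedules tasks as a DAG executed in dependency order. That core idea can be illustrated with the standard library's graphlib alone; the pipeline task names below are invented, and a real Airflow DAG would declare operators and wire dependencies with the `>>` syntax instead.

```python
from graphlib import TopologicalSorter

# Invented pipeline: each task maps to the set of tasks it depends on.
deps = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load", "validate"},
}

# static_order() yields tasks so every dependency runs before its dependents,
# which is what the Airflow scheduler guarantees for a DAG run.
order = list(TopologicalSorter(deps).static_order())
print(order)  # 'extract' first, 'report' last
```

Because each task here unblocks exactly one successor, the order is fully determined; in a wider DAG, independent tasks could run in parallel, which is where Airflow's executor model comes in.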
    **Learn More**
    **About Autodesk**
    Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made.
    We take great pride in our culture here at Autodesk - it's at the core of everything we do. Our culture guides the way we work and treat each other, informs how we connect with customers and partners, and defines how we show up in the world.
    When you're an Autodesker, you can do meaningful work that helps build a better world designed and made for all. Ready to shape the world and your future? Join us!
    **Salary transparency**
    Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, our compensation package may include annual cash bonuses, commissions for sales roles, stock grants, and a comprehensive benefits package.
    **Diversity & Belonging**
    We take pride in cultivating a culture of belonging where everyone can thrive. Learn more here:
    **Are you an existing contractor or consultant with Autodesk?**
    Please search for open jobs and apply internally (not on this external site).

    Data Architecture Intmd Anlyst -Python, Pyspark, SQL

    Chennai, Tamil Nadu Citigroup

    Posted 5 days ago


    Job Description

    The Data Architecture Intmd Anlyst is a developing professional role. Deals with most problems independently and has some latitude to solve complex problems. Integrates in-depth specialty area knowledge with a solid understanding of industry standards and practices. Good understanding of how the team and area integrate with others in accomplishing the objectives of the subfunction/ job family. Applies analytical thinking and knowledge of data analysis tools and methodologies. Requires attention to detail when making judgments and recommendations based on the analysis of factual information. Typically deals with variable issues with potentially broader business impact. Applies professional judgment when interpreting data and results. Breaks down information in a systematic and communicable manner. Developed communication and diplomacy skills are required in order to exchange potentially complex/sensitive information. Moderate but direct impact through close contact with the businesses' core activities. Quality and timeliness of service provided will affect the effectiveness of own team and other closely related teams.
    **Responsibilities:**
    + Prepares materials for Monthly Operating Reviews (MORs), Portfolio Reviews, Horizontal meetings, Town Halls and Staff Meetings.
    + Performs analysis of data quality issues and deliver metrics reporting.
    + Supports Managers with status reports and presentation content. Guide data analysis and reporting processes that include collection from multiple sources, validation of data and assembly and presentation of required data.
    + Develops new data collection and evaluation methodologies, including format design, data compilation, relevancy and metrics.
    + Helps to define, and manages on an ongoing basis, the target data architecture for risk information.
    + Liaises with other Citi risk organizations to identify and maintain appropriate alignment, specifically with Citi Data Standards.
    + Works in conjunction with information owners and technology partners to define and implement the roadmap.
    + Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
    **Qualifications:**
    + **4-8 years Banking or Financial Services experience**
    + **Technical Lead who handles the Design and Development**
    + **Expertise in Application Development using technology like ETL tools, Hadoop environment (Python, PySpark, SQL, Unix)**
    + Experience in analyzing and defining risk management data structures and architecture
    + Demonstrated influencing, facilitation and partnering skills
    + Track record of interfacing with and presenting results to senior management
    + Analytical, flexible, team-oriented and have good interpersonal/communication skills
    **Education:**
    + Bachelor's/University degree or equivalent experience
    This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
    ---
    **Job Family Group:**
    Technology
    ---
    **Job Family:**
    Data Architecture
    ---
    **Time Type:**
    Full time
    ---
    **Most Relevant Skills**
    Please see the requirements listed above.
    ---
    **Other Relevant Skills**
    For complementary skills, please see above and/or contact the recruiter.
    ---
    _Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law._
    _If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi._
    _View Citi's EEO Policy Statement and the Know Your Rights poster._
    Citi is an equal opportunity and affirmative action employer.
    Minority/Female/Veteran/Individuals with Disabilities/Sexual Orientation/Gender Identity.

    Sr. Software Engineer - AWS+Python+Pyspark Job

    Pune, Maharashtra YASH Technologies

    Posted today


    Job Description

    YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation.

    At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future.

    We are looking to hire AWS professionals in the following areas:

    AWS Data Engineer job description:

  • Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python
  • Secondary skillsets: any ETL tool, GitHub, DevOps (CI/CD)
  • Experience: 3-4 years
  • Degree in computer science, engineering, or similar fields
  • Mandatory skill set: Python, PySpark, SQL, AWS, with experience designing, developing, testing, and supporting data pipelines and applications
  • 3+ years of working experience in data integration and pipeline development
  • 3+ years of experience with AWS Cloud data integration using a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus
  • 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server)
  • Strong real-life experience in Python development, especially PySpark in an AWS Cloud environment
  • Strong experience with SQL and NoSQL databases such as MySQL, Postgres, DynamoDB, Elasticsearch
  • Workflow management tools like Airflow
  • AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice)
  • Good to have: Snowflake, Palantir Foundry
    At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale.

    Our Hyperlearning workplace is grounded upon four principles:

  • Flexible work arrangements, Free spirit, and emotional positivity
  • Agile self-determination, trust, transparency, and open collaboration
  • All support needed for the realization of business goals
  • Stable employment with a great atmosphere and ethical corporate culture
