Big Data Engineer

Hyderabad, Andhra Pradesh Anicalls (Pty) Ltd

Posted today

Job Description

Big Data Engineer 

Specification:

• 2 - 7 years of recent experience in data engineering.
• Bachelor's degree or higher in Computer Science or a related field.
• A solid track record of data management showing flawless execution and attention to detail.
• Strong knowledge of and experience with statistics.
• Programming experience, ideally in Spark, Kafka, or related technologies, and a willingness to learn new programming languages to meet goals and objectives.
• Experience in C, Perl, JavaScript, or other programming languages is a plus.
• Knowledge of data cleaning, wrangling, visualization, and reporting, with an understanding of the best, most efficient use of the associated tools and applications to complete these tasks (a brief PySpark sketch follows this listing).
• In-depth knowledge of data mining, machine learning, natural language processing, or information retrieval.
• Experience processing large amounts of structured and unstructured data, including integrating data from multiple sources.
• Experience with machine learning toolkits such as H2O, Spark ML, or Mahout.
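
For illustration only, a minimal, hypothetical PySpark sketch of the kind of data cleaning, wrangling, and reporting mentioned above. The file paths and column names (event_id, event_ts, country) are made up for the example, not taken from the posting.

```python
# Hypothetical sketch: basic data cleaning, wrangling, and reporting with PySpark.
# Paths and column names are placeholders, not details from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleaning-demo").getOrCreate()

# Load the raw CSV data (placeholder path).
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/events.csv")

# Typical wrangling steps: drop duplicates, fill nulls, derive a date column.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .na.fill({"country": "unknown"})
       .withColumn("event_date", F.to_date("event_ts"))
)

# A simple reporting aggregate written out for downstream visualization.
report = cleaned.groupBy("event_date", "country").count()
report.write.mode("overwrite").parquet("s3a://example-bucket/reports/daily_counts")
```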

Big data engineer

Hyderabad, Andhra Pradesh Virtusa

Posted today

Job Description

Big Data Engineer - CREQ Description
  • Position: Big Data Engineer
  • Primary skills: big data concepts, AWS, PySpark
  • Location: HYD
  • Create a trigger-based automation framework for data migration
    Identify the roles/access needed for data migration from the federated bucket to the managed bucket, and build APIs for the same
    Integrate the CDMS framework with the Lake and Data Bridge APIs
    Data migration from S3 Managed to Hadoop on-prem (a brief PySpark sketch follows this listing)
    Jobs for daily and bulk loads
    Test support for AVRO to test lake features
    Test support for compression types such as LZO and .ENC to test lake features
    Ab Initio integration: build a feature to create an operational trigger for the ABI pipeline
    Movement to the new datacenter: SQL Server migration
    Carlstadt to Ashburn (DR switchover)

    Develop and maintain data platforms using Python.
    Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity.
    Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
    Implement and manage agents for monitoring, logging, and automation within AWS environments.
    Handling migration from PySpark to AWS.
    (Secondary) Hands-on development experience with various Ab Initio components such as Rollup, Scan, Join, Partition by Key, Partition by Round Robin, Gather, Merge, Interleave, Lookup, etc.
    Experience with SQL database programming, SQL performance tuning, and relational model analysis.
    Good knowledge of developing UNIX scripts and Oracle SQL and PL/SQL.
    Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging.
    Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
    Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
  • Primary Location: Hyderabad, Andhra Pradesh, India
  • Job Type: Experienced
  • Primary Skills: Big Data, Python, Spark
  • Years of Experience: 12
  • Qualification
  • Education: Any degree or equivalent
  • Experience: 6+ years
  • Travel: No
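
For illustration only, a minimal, hypothetical PySpark sketch of the kind of S3-to-on-prem-Hadoop copy described above. Bucket, path, and table names are placeholders; it assumes Hive support and a configured s3a connector, and is a sketch rather than the project's actual framework.

```python
# Hypothetical sketch: copy a dataset from an S3 managed bucket to on-prem HDFS/Hive.
# All bucket, path, database, and table names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("s3-to-hadoop-migration")
    .enableHiveSupport()      # assumes the cluster is configured for Hive
    .getOrCreate()
)

# Read from the managed bucket (assumes the s3a connector and credentials are configured).
source = spark.read.parquet("s3a://example-managed-bucket/dataset/")

# Land the data on the on-prem cluster and register it as a Hive table.
source.write.mode("overwrite").parquet("hdfs:///data/landing/dataset/")
source.write.mode("overwrite").saveAsTable("example_db.dataset")
```

In practice, the daily and bulk loads mentioned above would be parameterised and scheduled rather than run as a one-off script like this.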

    Big Data Engineer

    Hyderabad, Andhra Pradesh Saaki Argus & Averil Consulting

    Posted today

    Job Description

    Hiring for Big Data:

    LOCATION: Chennai, Bengaluru, Hyderabad

    EXPERIENCE: 7-10 years

    Notice Period: Immediate joiner or 30 days

    Key Skills

    • Hands-on experience with technologies such as Python, SQL, Snowflake, HDFS, Hive, Scala, Spark, AWS, HBase, and Cassandra.
    • Good knowledge of data warehousing concepts.
    • Proficient in Hadoop distributions such as Cloudera and Hortonworks.
    • Good working experience with technologies such as Python, Scala, SQL, and PL/SQL.
    • Developers design and build the foundational architecture to manage massive-scale data storage, processing, and analysis using distributed, cloud-based systems and platforms.
    • Coding big data pipelines (a brief PySpark pipeline sketch follows this listing).
    • Managing big data infrastructure and pipelines.
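
    For illustration only, a minimal, hypothetical pipeline sketch touching the Python/Spark/Hive/Snowflake stack listed above: it reads a Hive table, aggregates, and writes to Snowflake. All connection options, credentials, and table names are placeholders, and it assumes the Snowflake Spark connector is available on the cluster.

```python
# Hypothetical pipeline sketch: Hive -> aggregate -> Snowflake.
# All names and credentials are placeholders; the Snowflake Spark connector
# (net.snowflake.spark.snowflake) is assumed to be available on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hive-to-snowflake").enableHiveSupport().getOrCreate()

orders = spark.table("example_db.orders")

daily = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("total_amount"),
               F.count(F.lit(1)).alias("order_count"))
)

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "EXAMPLE_USER",
    "sfPassword": "***",                 # use a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "COMPUTE_WH",
}

(daily.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_ORDER_METRICS")
      .mode("overwrite")
      .save())
```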

    Data Engineer (Big Data)

    Hyderabad, Andhra Pradesh DYNE IT Services

    Posted today

    Job Description

    Job Description:
     
    The role: As a senior engineer in the Data team, you will build and run production-grade data and machine learning pipelines and products at scale in an agile setup. You will work closely with our data engineers, architects, and product managers to create the technology that generates and transforms data into applications, insights, and experiences for our users.
    Responsibilities:

    Design, productionize, and own end-to-end solutions that solve our customers' problems.
    Define, plan, and execute on strategic projects.
    Communicate and align with peers and cross-functional stakeholders.
    Drive for technical excellence and strike the right balance between quality and speed of delivery.
    Consistently steer the target architecture by identifying areas of critical need based on future growth.
    Ensure code quality and maintainability by tackling tech debt, conducting code reviews, initiating refactoring and improving build and test systems.

    What we are looking for:

    You have expertise in Python and Java/Scala programming languages.
    You have experience designing and productionizing large-scale distributed systems built around big data.
    You have experience with batch and streaming technologies, e.g., Apache Flink, Apache Spark, Apache Beam, Google DataFlow (a brief Spark Structured Streaming sketch follows this list).
    You have expertise with distributed data stores (Cassandra, Google BigTable, Redis, ClickHouse, Elasticsearch) and messaging systems (Kafka, Google PubSub) at scale.
    You have experience with Linux, Docker, and public cloud (GCP, AWS, Azure).
    You have a strong focus on execution, delivery and customer impact and craft code that is understandable, simple and clean.
    You are an excellent communicator who can explain complex problems in clear and concise language to both business and technical audiences.
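
    As a loose illustration of the batch and streaming technologies above, a minimal, hypothetical Spark Structured Streaming job that reads from Kafka and counts events per minute. The broker address, topic, and checkpoint path are placeholders, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Hypothetical sketch: a minimal Spark Structured Streaming job reading from Kafka.
# Broker address, topic, and checkpoint path are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka values arrive as bytes; cast to string and count events per one-minute window.
counts = (
    events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/checkpoints/kafka-stream-demo")
          .start()
)
query.awaitTermination()
```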

     
     
    Job Requirements:
     

    Kafka (2 - 3 yrs) - required
    Data Engineering (4 - 6 yrs) - required
    SQL (4 - 6 yrs) - required
    Apache Spark (2 - 3 yrs) - required
    Big Data (2 - 3 yrs) - required
    Scala (2 - 3 yrs) - optional
    AWS/GCP (2 - 3 yrs) - required

     
    Time zone requirements:
     
    FLEXIBLE WORKING HOURS

    AWS Big Data Engineer

    Hyderabad, Andhra Pradesh Tiger Analytics

    Posted today

    Job Description

    We are looking for a Senior Data Engineer to be based out of our Chennai, Bangalore, or Hyderabad offices. This role involves a combination of hands-on contribution, customer engagement, and technical team management. As a Senior Data Engineer, you will:
    ● Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
    ● Set up and run Hadoop development frameworks.
    ● Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
    ● Explore and learn new technologies for creative business problem-solving.

    Job Requirement
    ● Ability to develop and manage scalable Hadoop cluster environments
    ● Ability to design solutions for Big Data applications
    ● Experience in Big Data technologies like HDFS, Hadoop, Hive, Yarn, Pig, HBase, Sqoop, Flume, etc.
    ● Working experience with Big Data services in any cloud-based environment
    ● Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks
    ● Knowledge of how to create and debug Hadoop and Spark jobs (a brief debugging sketch follows this listing)
    ● Experience in NoSQL technologies like HBase, Cassandra, or MongoDB, and in the Cloudera or Hortonworks Hadoop distributions
    ● Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL
    ● Familiarity with data visualization tools like Tableau
    ● Good communication and interpersonal skills
    ● Minimum 6+ years of professional experience, with 3+ years of Big Data project experience
    ● B.Tech/B.E. from a reputed institute preferred
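
    As a loose illustration of creating and debugging a Spark job, a minimal, hypothetical snippet that raises the driver log level and prints the query plans before writing the result. The table name and output path are placeholders, not details from the posting.

```python
# Hypothetical sketch: inspecting a Spark job while debugging.
# The table name and output path are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("debug-demo").enableHiveSupport().getOrCreate()
spark.sparkContext.setLogLevel("INFO")   # surface more detail in driver logs

df = (
    spark.table("example_db.transactions")
         .filter(F.col("amount") > 0)
         .groupBy("merchant_id")
         .agg(F.sum("amount").alias("total_amount"))
)

# Print the parsed/analyzed/optimized/physical plans before running the action,
# which is often the first step when a job is slow or skewed.
df.explain(True)

df.write.mode("overwrite").parquet("hdfs:///tmp/debug-demo/totals")
```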

    GCP Big Data Engineer

    Hyderabad, Andhra Pradesh Tiger Analytics

    Posted today

    Job Description

    About the role:
    We are looking for a Senior Data Engineer to be based out of our Chennai office. This role involves a combination of hands-on contribution, customer engagement, and technical team management. As a Senior Data Engineer, you will:
    ● Design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.
    ● Set up and run Hadoop development frameworks.
    ● Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.
    ● Explore and learn new technologies for creative business problem-solving.

    Job Requirement
    Required Experience, Skills & Competencies:
    ● Ability to develop and manage scalable Hadoop cluster environments
    ● Ability to design solutions for Big Data applications
    ● Experience in Big Data technologies like HDFS, Hadoop, Hive, Yarn, Pig, HBase, Sqoop, Flume, etc.
    ● Working experience with Big Data services in any cloud-based environment
    ● Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks
    ● Knowledge of how to create and debug Hadoop and Spark jobs
    ● Experience in NoSQL technologies like HBase, Cassandra, or MongoDB, and in the Cloudera or Hortonworks Hadoop distributions
    ● Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL
    ● Familiarity with data visualization tools like Tableau
    ● Good communication and interpersonal skills
    ● Minimum 6-8 years of professional experience, with 3+ years of Big Data project experience
    ● B.Tech/B.E. from a reputed institute preferred

    Big Data Engineer - Scala

    Hyderabad, Andhra Pradesh Idyllic Services

    Posted today

    Job Description

    Job Title: Big Data Engineer – Scala

    Location: Bangalore, Chennai, Gurgaon, Pune, Mumbai.

    Experience: 7–10 Years (Minimum 3+ years in Scala)

    Notice Period: Immediate to 30 Days

    Mode of Work: Hybrid


    Click the link below to learn more about the role and take the AI Interview to begin your application journey:


    Role Overview

    We are looking for a highly skilled Big Data Engineer (Scala) with strong expertise in Scala, Spark, Python, NiFi, and Apache Kafka to join our data engineering team. The ideal candidate will have a proven track record in building, scaling, and optimizing big data pipelines, and hands-on experience in distributed data systems and cloud-based solutions.


    Key Responsibilities

    - Design, develop, and optimize large-scale data pipelines and distributed data processing systems.

    - Work extensively with Scala, Spark (PySpark), and Python for data processing and transformation.

    - Develop and integrate streaming solutions using Apache Kafka and orchestration tools like NiFi/Airflow (a brief Airflow DAG sketch follows this list).

    - Write efficient queries and perform data analysis using Jupyter Notebooks and SQL.

    - Collaborate with cross-functional teams to design scalable cloud-based data architectures.

    - Ensure delivery of high-quality code through code reviews, performance tuning, and best practices.

    - Build monitoring and alerting systems leveraging Splunk or equivalent tools.

    - Participate in CI/CD workflows using Git, Jenkins, and other DevOps tools.

    - Contribute to product development with a focus on scalability, maintainability, and performance.
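
    For illustration only, a minimal, hypothetical Airflow DAG that schedules a nightly Spark batch job, in the spirit of the NiFi/Airflow orchestration mentioned above. The DAG id, schedule, and spark-submit command are placeholders, and Airflow 2.x is assumed.

```python
# Hypothetical sketch: a minimal Airflow 2.x DAG triggering a nightly Spark batch job.
# DAG id, schedule, and the spark-submit command are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit --master yarn /opt/jobs/events_pipeline.py",
    )
```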


    Mandatory Skills

    - Scala – minimum 3+ years of hands-on experience.

    - Strong expertise in Spark (PySpark) and Python.

    - Hands-on experience with Apache Kafka.

    - Knowledge of NiFi/Airflow for orchestration.

    - Strong experience in Distributed Data Systems (5+ years).

    - Proficiency in SQL and query optimization.

    - Good understanding of Cloud Architecture.


    Preferred Skills

    - Exposure to messaging technologies like Apache Kafka or equivalent.

    - Experience in designing intuitive, responsive UIs for data analytics visualization.

    - Familiarity with Splunk or other monitoring/alerting solutions.

    - Hands-on experience with CI/CD tools (Git, Jenkins).

    - Strong grasp of software engineering concepts, data modeling, and optimization techniques.


    Big Data Engineer, Data Modeling

    Hyderabad, Andhra Pradesh data.ai

    Posted today

    Job Description

    What can you tell your friends when they ask you what you do?

    We’re looking for an experienced Big Data Engineer who can create innovative new products in the analytics and data space. You will participate in the development of the world's #1 mobile app analytics service. Together with the team, you will build out new product features and applications using agile methodologies and open-source technologies. You will work directly with Data Scientists, Data Engineers, Product Managers, and Software Architects, and will be on the front lines of coding new and exciting analytics and data mining products. You should be passionate about what you do and excited to join an entrepreneurial start-up.

    To ensure we execute on our values, we are looking for someone who has a passion for the following.

    As a Big Data Engineer, you will be in charge of model implementation and maintenance, and will build a clean, robust, and maintainable data processing program that can support these projects on huge amounts of data. This includes:

  • Design and implement complex data product components based on requirements, with possible technical solutions.
  • Write data programs using Python (e.g., PySpark) with a commitment to maintaining high-quality work while being confident in dealing with data mining challenges.
  • Discover feasible new technologies in the Big Data ecosystem (for example, the Hadoop ecosystem) and share them with the team from your professional perspective.
  • Get up to speed in the data science and machine learning domain, implementing analysis components in a distributed computing environment (e.g., a MapReduce-style implementation; a brief PySpark sketch appears later in this listing) with instruction from Data Scientists.
  • Be comfortable conducting detailed discussions with Data Scientists regarding specific questions related to specific data models.
  • You should be a strong problem solver with proven experience in big data.
  • You should recognize yourself in the following…

  • Hands-on experience and deep knowledge of the Hadoop ecosystem.
  • Must: PySpark, MapReduce, HDFS.
  • Plus: Storm, Kafka.
  • Must have 2+ years of Linux environment development experience.
  • Proficient in programming in Python and Scala; experience with Pandas, scikit-learn, or other data science and data analysis toolsets is a big plus.
  • Experience in data pipeline design and automation.
  • A background in data mining, analytics and data science component implementation, and the machine learning domain, along with familiarity with common algorithms and libraries, is a plus.
  • Passion for cloud computing (AWS in particular) and distributed systems.
  • You must be a great problem solver with the ability to dive deeply into complex problems and emerge with clear and pragmatic solutions.
  • Good communication and the ability to cooperate globally.
  • Major in Math or Computer Science.
  • You are driven by a passion for innovation that pushes us closer to our vision in everything we do. Centering around our purpose and our hunger for new innovations is the foundation that allows us to grow and unlock the potential in AI.
  • You are an ideal team player: you are hungry, and no, we are not talking about food here. You are humble, yet love to succeed, especially as a team! You are smart, and not just book smart; you have a great read on people.
  • This position is located in Hyderabad, India.
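
    For illustration only, a minimal, hypothetical MapReduce-style aggregation expressed with PySpark RDDs, of the kind referenced in the list above. The input path and record layout are made up for the example.

```python
# Hypothetical sketch: a MapReduce-style aggregation with PySpark RDDs.
# The input path and record layout ("user_id,country,event_date") are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mapreduce-style-demo").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("hdfs:///data/events/*.csv")

event_counts = (
    lines.map(lambda line: line.split(","))            # map: parse each record
         .map(lambda f: ((f[1], f[2]), 1))             # map: key by (country, event_date)
         .reduceByKey(lambda a, b: a + b)              # reduce: count events per key
)

for (country, day), n in event_counts.take(10):
    print(country, day, n)
```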

    We are hiring for our engineering team at our data.ai India subsidiary entity, which is in the process of being established. While we await approval from the Indian government, new hires will be interim employees of Innova Solutions, our Global Employer of Record.
