19 Databases jobs in Hyderabad

AWS/Azure - Cassandra and Scylla DataBases

Hyderabad, Andhra Pradesh Saaki Argus & Averil Consulting

Posted today


Job Description

AWS/Azure Cassandra & Scylla DB (5+ Years) - Hyderabad


Job Description:

Be the go-to database expert for NoSQL technologies.

Stay up to date with the latest developments in the NoSQL space.

Review data models created by application teams.

Manage our Cassandra and Scylla clusters with a focus on capacity management, cost optimization, high availability and performance.

Work with large database clusters handling 1M+ IOPS in aggregate.

Create automations to reduce toil.

Create automations for monitoring and alerting.

Create runbooks for alert handling.

Set up backup and restore mechanisms.

Troubleshoot and resolve various issues with cluster related to nodes, table data, high load, connectivity issues from clients, etc.

Be the on-call support engineer on a rotational basis.
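The "automations for monitoring and alerting" bullet above could be sketched as a small health check that parses `nodetool status` output and flags any node not in the Up/Normal (`UN`) state. This is only an illustration: the sample output and the function name are invented, and a real automation would invoke `nodetool` and page an on-call engineer rather than print.

```python
# Minimal sketch of a down-node check for a Cassandra/Scylla cluster.
# The SAMPLE text below is invented but follows the `nodetool status`
# output format, where data rows start with a two-letter state code
# (UN = Up/Normal, DN = Down/Normal, UJ = Up/Joining, etc).

def find_down_nodes(status_output: str) -> list[str]:
    """Return addresses of nodes whose state is not Up/Normal ("UN")."""
    down = []
    for line in status_output.splitlines():
        parts = line.split()
        # Data rows begin with a two-letter status/state code.
        if parts and len(parts[0]) == 2 and parts[0][0] in "UD" and parts[0][1] in "NLJM":
            state, address = parts[0], parts[1]
            if state != "UN":
                down.append(address)
    return down

SAMPLE = """\
Datacenter: dc1
Status=Up/Down | State=Normal/Leaving/Joining/Moving
--  Address    Load     Tokens  Owns  Host ID  Rack
UN  10.0.0.1   1.2 TB   256     33%   aaa      r1
DN  10.0.0.2   1.1 TB   256     33%   bbb      r1
UN  10.0.0.3   1.3 TB   256     34%   ccc      r2
"""

if __name__ == "__main__":
    print(find_down_nodes(SAMPLE))  # a real automation would alert here
```

In practice a script like this would run on a schedule, shell out to `nodetool status`, and feed results into Prometheus or a paging system.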



Qualifications (Mandatory):

Bachelor's/Master's in Engineering from reputed institutions.

Overall 5+ years of experience in SQL and NoSQL databases.

Experience managing large-scale Cassandra and Scylla clusters, with in-depth understanding of architecture, storage, replication, schema design, system tables, logs, DB processes, tools, and CQL.

Experience with installation, configuration, upgrades, OS patching, certificate management, scaling (out/in) for Cassandra and Scylla clusters.

Experience in setting up backup and restore mechanisms with short RTO and RPO objectives.

Experience with infrastructure automation and scripting using Terraform, Python or Bash.

Experience with monitoring tools like Grafana, Prometheus, New Relic, Datadog, etc.

Experience with managed Cassandra solutions (InstaClustr, Datastax Astra) is a plus.

Experience with Cloud native distributed databases (eg: TiDB, CockroachDB) is a plus.

Experience with MySQL and/or Postgres on Linux is a plus.

Experience working with AWS, with AWS certification preferred.

Excellent communication skills.


Software Engineer II (Managed Databases (DBaaS))

Hyderabad, Andhra Pradesh DigitalOcean

Posted today


Job Description

What You’ll Be Doing:
  • Developing external customer-facing Managed Database products (MySQL, PostgreSQL, Caching, Kafka, OpenSearch, MongoDB, and more).
  • Developing and maintaining distributed systems.
  • Contributing to design and discussion on technical architecture.
  • Developing APIs in Go.
  • Improving testing, maintenance, and deployment automation to increase development velocity and resiliency.
  • Providing the last-line of support for DBaaS products.
  • Collaborating with an agile, self-managed team of peers.
  • Leveraging technologies such as Temporal, gRPC, REST, Kubernetes, Docker, Kafka, Grafana and more.
  • Integrating with many components across the DigitalOcean stack.
What We’ll Expect From You:

  • At least 2 years of experience developing infrastructure and products from ideation to deployment, with a focus on backend engineering and services.
  • A solid understanding of building and maintaining microservices within distributed systems.
  • Working knowledge of microservices using container workload engines and frameworks such as Docker and Kubernetes.
  • Experience with resilience engineering, fault tolerance, and failure domains as they relate to database backends.
  • Strong spoken and written capabilities for communicating technical designs and code changes to other engineers, designers, and product teams.
  • Proficiency in developing APIs with Golang.
  • Experience with concurrency patterns in Golang.
  • Experience with CI/CD pipelines and frameworks.
  • A strong background and exposure to different SQL and NoSQL databases.
  • Familiarity with common message bus and queuing technologies for asynchronous processing.
  • Passion for collaborating directly with customers to discover their Jobs To Be Done, and developing simple delightful solutions to solve those jobs.
  • Ability to work closely with front-end developers.
Why You’ll Like Working for DigitalOcean

  • We innovate with purpose.  You’ll be a part of a cutting-edge technology company with an upward trajectory, who are proud to simplify cloud and AI so builders can spend more time creating software that changes the world. As a member of the team, you will be a Shark who thinks big, bold, and scrappy, like an owner with a bias for action and a powerful sense of responsibility for customers, products, employees, and decisions. 
  • We prioritize career development.  At DO, you’ll do the best work of your career. You will work with some of the smartest and most interesting people in the industry. We are a high-performance organization that will always challenge you to think big. Our organizational development team will provide you with resources to ensure you keep growing. We provide employees with reimbursement for relevant conferences, training, and education. All employees have access to LinkedIn Learning's 10,000+ courses to support their continued growth and development.
  • We care about your well-being.  Regardless of your location, we will provide you with a competitive array of benefits to support you from our Employee Assistance Program to Local Employee Meetups to flexible time off policy, to name a few. While the philosophy around our benefits is the same worldwide, specific benefits may vary based on local regulations and preferences.
  • We reward our employees.  The salary range for this position is based on market data, relevant years of experience, and skills. You may qualify for a bonus in addition to base salary; bonus amounts are determined based on company and individual performance. We also provide equity compensation to eligible employees, including equity grants upon hire and the option to participate in our Employee Stock Purchase Program.
  • We value diversity and inclusion.  We are an equal-opportunity employer, and recognize that diversity of thought and background builds stronger teams and products to serve our customers. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.
  • *This role is located in Hyderabad, India.

    #LI-Hybrid


    Data Modeling

    Hyderabad, Andhra Pradesh Virtusa

    Posted today


    Job Description

    Data Modeling - CREQ Description GDT WPB DF Data Expert JD:

    Mandatory Skills
    Strong Data Analysis and/or Data Modeling experience of 8-12 years.
    Strong financial domain and data analysis skills, with experience covering requirement gathering, elicitation, gap analysis, data analysis, effort estimation, and reviews, plus the ability to translate high-level functional data or business requirements into technical solutions, database designs, and data mappings.
    Comprehensive understanding of conceptual, logical, and physical data modeling; create and deliver high-quality data models following agreed data governance and standards. Maintain quality metadata and data-related artefacts that are accurate, complete, consistent, unambiguous, reliable, accessible, traceable, and valid.
    Should be an individual contributor with good understanding of the SDLC & Agile methodologies.
    A team player with a self-starter approach and a sense of ownership; a problem solver with a solution-seeking mindset and the ability to work in a fast-paced, continuously changing environment.
    Excellent communication and stakeholder management skills; capable of building rapport and relationships.
    Act as a liaison between business and technical teams to bridge any gaps and assist the business teams & technical teams to successfully deliver the projects.
    Other Skills and Tools: SQL, MS Office tools, GCP BigQuery, Erwin, Visual Paradigm (preferable).
    Responsibilities
    Support the delivery of complex transformation program development within data portfolio.
    Work within agile multi-skilled teams to create world-class products that serve our customers' needs.
    Perform elicitation and analysis of business change, functional, and non-functional requirements across a range of stakeholders; work with cross-asset IT teams to interface those requirements and deliver a working reconciliation solution.
    Own and produce relevant analysis and modeling artefacts that enable development teams to develop working products.
    Understand the user journey end to end which goes beyond the system.
    Provide advanced business knowledge and technical support for requirements development.
    Create/enhance logical and physical data models by adhering to the agreed standards, to fulfil both business as well as technical requirements and expectations.
    Undertake metadata analysis, including but not limited to naming of logical entities and attributes and physical tables and columns, definitions, and appropriate data types and lengths.
    Create and maintain the data models.

    Primary Location: Hyderabad, Andhra Pradesh, India
    Job Type: Experienced
    Primary Skills: Data Modelling for Analytical (OLAP)
    Years of Experience: 6
    Travel: No

    Data Modeling Advisor - HIH - Evernorth

    Hyderabad, Andhra Pradesh The Cigna Group

    Posted 1 day ago


    Job Description

    Data Modeling Advisor - HIH - Evernorth
    About Evernorth:
    Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.
    Data Modeling Advisor
    Position Summary:
    The Health Services Data Design and Metadata Management team is hiring an Architecture Senior Advisor to work across all projects. The work involves understanding and driving data design best practices, including data modeling, mapping, and analysis, and helping others to apply them across strategic data assets. The data models are wide-ranging and must include the appropriate metadata to support and improve our data intelligence. Data design centers around standard health care data (eligibility, claim, clinical, and provider data) across structured and unstructured data platforms.
    Job Description & Responsibilities:
    + Perform data analysis, data modeling, and data mapping following industry and Evernorth data design standards for analytics/data warehouses and operational data stores across various DBMS types, including Teradata, Oracle, cloud platforms, Hadoop, Databricks, and data lakes.
    + Perform data analysis, profiling and validation, contributing to data quality efforts to understand data characteristics and ensure data correctness/condition for use.
    + Participate in and coordinate data model metadata development processes to support ongoing development efforts (data dictionary, NSM, and FET files), maintenance of data model/data mapping metadata, and linking of our data design metadata to business terms, data models, mapping documents, ETL jobs, and data model governance operations (policies, standards, best practices).
    + Facilitate and actively participate in data model/data mapping reviews and audits, fostering collaborative working relationships and partnerships with multidisciplinary teams.
    + Provide guidance, mentoring, and training as needed in data modeling, data lineage, ddl code, and the associated toolsets (Erwin Data Modeler, Erwin Web Portal, Erwin model mart, Erwin Data Intelligence Suite, Alation). Assist with the creation, documentation, and maintenance of Evernorth data design standards and best practices involving data modeling, data mapping, and metadata capture including data sensitivity, data quality rules, and reference data usage. Develop and facilitate strong partnerships and working relationships with Data Governance, delivery, and other data partners. Continuously improve operational processes for data design metadata management for global and strategic data.
    + Interact with Business stakeholders and IT in defining and managing data design. Coordination, collaboration, and innovation with Solution Verticals, Data Lake teams, IT & Business Portfolios to ensure alignment of data design metadata and related information with ongoing programs (cyber risk and security) and development efforts.
    Experience Required:
    + 11 to 13 years' experience with data modeling (logical / physical data model, canonical structures, etc.) and SQL code;
    + Experience Desired:
    + Subject matter expertise level experience preferred
    + Experience executing data model / data lineage governance across business and technical data.
    + Experience utilizing data model / lineage / mapping / analysis management tools for business, technical, and operational metadata (Erwin Data Modeler, Erwin Web Portal, Erwin Model Mart, Erwin Data Intelligence Suite, Alation).
    + Experience working in an Agile delivery environment (Jira, Confluence, SharePoint, Git, etc.)
    Education and Advanced Training Required:
    + Degree in Computer Science or a related discipline and at least six, typically eight or more, years' experience in all phases of data modeling, data warehousing, data mining, data entity analysis, logical database design, and relational database definition, or an equivalent combination of education and work experience.
    Primary Skills:
    + Physical Data Modeling, Data Warehousing, Metadata, Reference Data, Data Mapping
    + Data Mining, Teradata, Data Quality, Excellent Communication Skills, Data Analysis, Oracle
    + Data Governance, Database Management System, Jira, DDL, Data Integration, Microsoft, SharePoint, Database Modeling, Confluence, Agile, Marketing Analysis, Operations, Topo, Data Lineage, Data Warehouses, Documentation
    + Big Data, Web Portal, Maintenance, Erwin, SQL, Unstructured Data, Audit, Git, Pharmacy
    DBMS, Databricks, AWS
    Location & Hours of Work:
    + Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate.
    + Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.
    + Equal Opportunity Statement:
    + Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.
    + General Shift (11:30 AM - 8:30 PM IST / 1:00 AM - 10:00 AM EST / 2:00 AM - 11:00 AM EDT)

    Technical Architect - Data Modeling Job

    Hyderabad, Andhra Pradesh YASH Technologies

    Posted today


    Job Description

    Job Description

  • Design and develop conceptual, logical, and physical data models for OLTP and OLAP systems.
  • Collaborate with business analysts and data engineers to understand data requirements.
  • Ensure models support data governance, data quality, and data integration standards.
  • Maintain and update data models as business needs evolve.
  • Create and manage metadata repositories and data dictionaries.
  • Optimize data models for performance, scalability, and security.
  • Work with database administrators to implement models in relational and NoSQL databases.
  • Support ETL processes and data warehousing initiatives.
  • Proven experience with data modeling tools (e.g., Erwin, SAP PowerDesigner, IBM InfoSphere).
  • Strong understanding of relational databases, dimensional modeling, and normalization techniques.
  • Experience with SQL, data warehousing, and ETL tools.
  • Familiarity with cloud platforms (e.g., AWS, Azure) is a plus.
  • Excellent communication and documentation skills.
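The dimensional-modeling skill named above can be illustrated with a minimal star schema: one fact table keyed to two dimension tables, plus a typical OLAP aggregation over them. This is only a sketch, not part of the posting; all table and column names are invented, and SQLite stands in for a real warehouse DBMS.

```python
# A minimal star schema in SQLite: fact_sales references dim_date and
# dim_product, and an OLAP-style query aggregates the fact by dimension.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity INTEGER,
    amount REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01')")
conn.execute("INSERT INTO dim_product VALUES (1, 'widget')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# A typical OLAP query: aggregate measures, grouped by a dimension attribute.
row = conn.execute("""
    SELECT p.name, SUM(f.quantity), SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.name
""").fetchone()
print(row)
```

The same shape scales to real warehouses: narrow fact tables of measures, wide denormalized dimensions, and surrogate keys joining them.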
Required Technical/Functional Competencies

    Domain/ Industry Knowledge: 

  • Basic knowledge of customer's business processes- relevant technology platform or product.
  • Able to prepare process maps, workflows, business cases and simple business models in line with customer requirements with assistance from SME and apply industry standards/ practices in implementation with guidance from experienced team members.
  • Requirement Gathering and Analysis:

  • Working knowledge of requirement management processes and requirement analysis processes, tools & methodologies.
  • Able to analyse the impact of change requested/ enhancement/ defect fix and identify dependencies or interrelationships among requirements & transition requirements for engagement.
  • Product/ Technology Knowledge:

  • Working knowledge of technology product/platform standards and specifications.
  • Able to implement code or configure/customize products and provide inputs in design and architecture adhering to industry standards/ practices in implementation.
  • Analyze various frameworks/tools, review the code and provide feedback on improvement opportunities.
  • Architecture tools and frameworks:

  • Working knowledge of architecture Industry tools & frameworks.
  • Able to identify pros/ cons of available tools & frameworks in market and use those as per Customer requirement and explore new tools/ framework for implementation.
  • Architecture concepts and principles :

  • Working knowledge of architectural elements, SDLC, methodologies.
  • Able to provide architectural design/documentation at an application or functional capability level, implement architectural patterns in solutions and engagements, and communicate architecture direction to the business.
  • Analytics Solution Design:

  • Knowledge of statistical & machine learning techniques like classification, linear regression modelling, clustering & decision trees.
  • Able to identify the cause of errors and their potential solutions.
  • Tools & Platform Knowledge:

  • Familiar with wide range of mainstream commercial & open-source data science/analytics software tools, their constraints, advantages, disadvantages, and areas of application.
Required Behavioral Competencies

    Accountability:

  • Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team.
  • Collaboration:

  • Shares information within team, participates in team activities, asks questions to understand other points of view.
  • Agility:

  • Demonstrates readiness for change, asking questions and determining how changes could impact own work.
  • Customer Focus:

  • Identifies trends and patterns emerging from customer preferences and works towards customizing/ refining existing services to exceed customer needs and expectations.
  • Communication:

  • Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision.
  • Drives Results:

  • Sets realistic stretch goals for self & others to achieve and exceed defined goals/targets.
  • Resolves Conflict:

  • Displays sensitivity in interactions and strives to understand others’ views and concerns.
Certifications

    Mandatory

    At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
  • Flexible work arrangements, Free spirit, and emotional positivity
  • Agile self-determination, trust, transparency, and open collaboration
  • All support needed for the realization of business goals
  • Stable employment with a great atmosphere and ethical corporate culture

    Big Data Engineer, Data Modeling

    Hyderabad, Andhra Pradesh data.ai

    Posted today


    Job Description

    What can you tell your friends when they ask you what you do?

    We’re looking for an experienced Big Data Engineer who can create innovative new products in the analytics and data space. You will participate in the development that creates the world's #1 mobile app analytics service. Together with the team, you will build out new product features and applications using agile methodologies and open-source technologies. You will work directly with Data Scientists, Data Engineers, Product Managers, and Software Architects, and will be on the front lines of coding new and exciting analytics and data mining products. You should be passionate about what you do and excited to join an entrepreneurial start-­up.

    To ensure we execute on our values we are looking for someone who has a passion for:

    As a Big Data Engineer, you will be in charge of model implementation and maintenance, building clean, robust, and maintainable data processing programs that can support these projects on huge amounts of data. This includes:

  • Able to design and implement complex data product components based on requirements with possible technical solutions.
  • Write data programs using Python (e.g., pyspark) with a commitment to maintaining high-quality work while being confident in dealing with data mining challenges.
  • Discover feasible new technologies in the Big Data ecosystem (for example, the Hadoop ecosystem) and share them with the team from your professional perspective.
  • Get up to speed in the data science and machine learning domain, implementing analysis components in a distributed computing environment (e.g., MapReduce implementation) with instruction from Data Scientists.
  • Be comfortable conducting detailed discussions with Data Scientists regarding specific questions related to specific data models.
  • You should be a strong problem solver with proven experience in big data.
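The MapReduce pattern referenced above can be sketched in plain Python: a map phase emitting key/value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. Real jobs would run this shape on Hadoop or Spark; the word-count task and sample data here are only illustrative.

```python
# Word-count expressed as map -> shuffle (group by key) -> reduce,
# the same three phases a Hadoop/Spark job distributes across nodes.
from collections import defaultdict

def map_phase(records):
    # Emit (word, 1) for every word in every input record.
    for line in records:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Group all values emitted for the same key together.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values into a single result.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data", "big wins"])))
print(counts)  # {'big': 2, 'data': 1, 'wins': 1}
```

In PySpark the same pipeline collapses to `rdd.flatMap(...).map(lambda w: (w, 1)).reduceByKey(add)`, with the shuffle handled by the framework.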
You should recognize yourself in the following…

  • Hands-on experience and deep knowledge of the Hadoop ecosystem.
  • Must: PySpark, MapReduce, HDFS.
  • Plus: Storm, Kafka.
  • Must have 2+ years of Linux environment development experience.
  • Proficient in programming with Python and Scala; experience with Pandas, scikit-learn, or other data science and data analysis toolsets is a big plus.
  • Experience in data pipeline design & automation.
  • Having a background in data mining, analytics & data science components implementation, and machine learning domain, familiarity with common algorithms and libs is a plus.
  • Passion for cloud computing (AWS in particular) and distributed systems.
  • You must be a great problem solver with the ability to dive deeply into complex problems and emerge with clear and pragmatic solutions.
  • Good communication and global cooperation skills.
  • Major in Math or Computer Science.
  • You are driven by passion for innovation that pushes us closer to our vision in everything we do. Centering around our purpose and our hunger for new innovations is the foundation that allows us to grow and unlock the potential in AI.
  • You are an Ideal Team Player: You are hungry and no, we are not talking about food here. You are humble, yet love to succeed, especially as a team! You are smart, and not just book smart, you have a great read on people.
  • This position is located in Hyderabad, India.

    We are hiring for our engineering team at our data.ai India subsidiary entity, which is in the process of being established. While we await approval from the Indian government, new hires will be interim employees of Innova Solutions, our Global Employer of Record.


    Senior Data Scientist (Advanced Modeling & Machine Learning)

    Secunderabad, Andhra Pradesh Megovation

    Posted 14 days ago


    Job Description

    Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)

    Location: Remote

    Location Preference: We are specifically looking to hire talented individuals from Tier 2 and Tier 3 cities for this opportunity.

    Job Type: Full-time


    About the role

    We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master’s degree in a quantitative field, 4–6 years of hands-on experience, and demonstrated expertise in building models from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.

    Responsibilities

    • Analyze data, identify patterns, and perform detailed exploratory data analysis (EDA).
    • Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks.
    • Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets.
    • Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval.
    • Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems.
    • Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders.
    • Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.
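The "linear/logistic regression" bullet above can be made concrete with a toy example: logistic regression on an invented 1-D dataset, fitted by gradient descent in plain Python. In practice this role would use scikit-learn or XGBoost as listed below; this sketch only shows the underlying model.

```python
# Logistic regression on a toy, linearly separable 1-D dataset,
# fitted by batch gradient descent on the log-loss. All data invented.
import math

xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]   # feature values
ys = [0, 0, 0, 1, 1, 1]                   # binary labels

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid prediction
        grad_w += (p - y) * x                     # d(log-loss)/dw
        grad_b += (p - y)                         # d(log-loss)/db
    w -= lr * grad_w / len(xs)
    b -= lr * grad_b / len(xs)

predict = lambda x: 1.0 / (1.0 + math.exp(-(w * x + b)))
print([round(predict(x)) for x in xs])  # separable data -> [0, 0, 0, 1, 1, 1]
```

XGBoost and neural networks replace the linear score `w*x + b` with richer function classes, but the loss-driven fitting loop is the same idea.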

    Required Qualifications

    • Master’s degree in Computer Science, Statistics, Applied Mathematics, or a related field.
    • 4–6 years of industry experience in data science or machine learning roles.
    • Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
    • Hands-on knowledge of:

    > Programming languages: Python (primary), SQL, R (optional)

    > Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers

    > Distributed computing: PySpark, Dask

    > Big Data and Cloud Platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML

    > Data Engineering Tools: Apache Spark, Delta Lake, Airflow

    > ML Workflow & Visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib

    > Version control and collaboration: Git, GitHub, Jupyter, VSCode

    Preferred Qualifications

    • Master's or Ph.D. in a quantitative or technical field.
    • Experience with deploying machine learning pipelines in production using CI/CD tools.
    • Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads.
    • Understanding of MLOps and model lifecycle management best practices.
    • Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.


    What We Offer

    • Competitive salary and performance bonuses
    • Flexible working hours and remote options
    • Opportunities for continued learning and research
    • Collaborative, high-impact team environment
    • Access to cutting-edge technology and compute resources


    To apply, send your resume to be part of a team pushing the boundaries of data-driven innovation.


    Senior Data Scientist (Advanced Modeling & Machine Learning)

    Hyderabad, Andhra Pradesh Megovation

    Posted 14 days ago


    Job Description

    Job Title: Senior Data Scientist (Advanced Modeling & Machine Learning)

    Location: Remote

    Location Preference: We are specifically looking to hire talented individuals from Tier 2 and Tier 3 cities for this opportunity.

    Job Type: Full-time


    About the role

    We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in statistical modeling, machine learning, and natural language processing (NLP). This individual will work on advanced attribution models and predictive algorithms that power strategic decision-making across the business. The ideal candidate will have a Master’s degree in a quantitative field, 4–6 years of hands-on experience, and demonstrated expertise in building models from linear regression to cutting-edge deep learning and large language models (LLMs). A Ph.D. is strongly preferred.

    Responsibilities

    • Analyze data, identify patterns, and perform detailed exploratory data analysis (EDA).
    • Build and refine predictive models using techniques such as linear/logistic regression, XGBoost, and neural networks.
    • Leverage machine learning and NLP methods to analyze large-scale structured and unstructured datasets.
    • Apply LLMs and transformers to develop solutions in content understanding, summarization, classification, and retrieval.
    • Collaborate with data engineers and product teams to deploy scalable data pipelines and model production systems.
    • Interpret model results, generate actionable insights, and present findings to technical and non-technical stakeholders.
    • Stay abreast of the latest research and integrate cutting-edge techniques into ongoing projects.
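
The modeling work described above spans classical baselines (linear/logistic regression) through gradient-boosted models. A minimal, self-contained sketch of that kind of workflow, assuming scikit-learn; the synthetic data and model choices are illustrative, not the employer's actual stack:

```python
# Fit a logistic-regression baseline and a gradient-boosted model on
# synthetic data, then compare them on held-out AUC. Everything here
# (dataset, split, hyperparameters) is invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
boosted = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

for name, model in [("logistic", baseline), ("boosted", boosted)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: held-out AUC = {auc:.3f}")
```

In practice the baseline is kept even when the boosted model wins, since it anchors interpretation of the stronger model's lift.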

    Required Qualifications

    • Master’s degree in Computer Science, Statistics, Applied Mathematics, or a related field.
    • 4–6 years of industry experience in data science or machine learning roles.
    • Strong statistical foundation, with practical experience in regression modeling, hypothesis testing, and A/B testing.
    • Hands-on knowledge of:

    > Programming languages: Python (primary), SQL, R (optional)

    > Libraries: pandas, NumPy, scikit-learn, TensorFlow, PyTorch, XGBoost, LightGBM, spaCy, Hugging Face Transformers

    > Distributed computing: PySpark, Dask

    > Big Data and Cloud Platforms: Databricks, AWS SageMaker, Google Vertex AI, Azure ML

    > Data Engineering Tools: Apache Spark, Delta Lake, Airflow

    > ML Workflow & Visualization: MLflow, Weights & Biases, Plotly, Seaborn, Matplotlib

    > Version control and collaboration: Git, GitHub, Jupyter, VS Code
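
The statistical foundation called for above includes hypothesis testing and A/B testing. A stdlib-only sketch of a two-proportion z-test of the kind such work involves; all counts below are invented for illustration:

```python
# Two-sided two-proportion z-test: did variant B convert at a
# different rate than variant A? Counts are made up for the example.
from math import erf, sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

Real A/B analyses also pre-register the minimum detectable effect and sample size rather than testing whatever traffic happens to arrive.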

    Preferred Qualifications

    • Master's or Ph.D. in a quantitative or technical field.
    • Experience with deploying machine learning pipelines in production using CI/CD tools.
    • Familiarity with containerization (Docker) and orchestration (Kubernetes) in ML workloads.
    • Understanding of MLOps and model lifecycle management best practices.
    • Experience in real-time data processing (Kafka, Flink) and high-throughput ML systems.


    What We Offer

    • Competitive salary and performance bonuses
    • Flexible working hours and remote options
    • Opportunities for continued learning and research
    • Collaborative, high-impact team environment
    • Access to cutting-edge technology and compute resources


    To apply, send your resume and be part of a team pushing the boundaries of data-driven innovation.


    Senior Data Scientist - Insurance Risk Modeling

    Shaikpet 500001, Andhra Pradesh ₹160,000 Annually WhatJobs

    Posted 10 days ago


    Job Description

    Full-time

    Our client, a prominent player in the insurance sector, is seeking an experienced Senior Data Scientist specializing in Insurance Risk Modeling to join their team in Hyderabad, Telangana. This role is instrumental in developing advanced statistical and machine learning models to assess and manage various types of insurance risks, including underwriting, pricing, and claims.

    You will be responsible for the end-to-end lifecycle of modeling projects, from data exploration and feature engineering to model deployment and performance monitoring. This includes leveraging large datasets to identify patterns, predict outcomes, and provide actionable insights that enhance decision-making and drive business performance. Key responsibilities encompass building predictive models for customer behavior, fraud detection, risk segmentation, and portfolio optimization. You will work with diverse data sources, including policy data, claims history, and external demographic information.

    The ideal candidate will possess a strong academic background in a quantitative field, coupled with extensive hands-on experience in data science techniques applied within the insurance industry. Proficiency in programming languages like Python or R, along with expertise in machine learning libraries (e.g., scikit-learn, TensorFlow, PyTorch) and database technologies (SQL, NoSQL), is essential. Experience with cloud platforms (AWS, Azure, GCP) and big data technologies (Spark, Hadoop) is highly valued.

    You will collaborate closely with actuaries, underwriters, and business analysts to understand their needs and deliver impactful data-driven solutions. This position requires excellent communication skills to present complex findings to both technical and non-technical audiences. This role is fully remote, offering flexibility and the chance to work with cutting-edge technologies.

    Key Responsibilities:
    • Develop and implement advanced statistical and machine learning models for insurance risk assessment.
    • Analyze large datasets to identify trends, patterns, and predict future outcomes.
    • Design and build predictive models for pricing, underwriting, fraud detection, and claims analysis.
    • Perform feature engineering and data preprocessing for model development.
    • Deploy models into production environments and monitor their performance.
    • Collaborate with business stakeholders to understand requirements and deliver insights.
    • Communicate complex findings effectively to technical and non-technical audiences.
    • Stay abreast of the latest advancements in data science and machine learning.
    • Mentor junior data scientists and contribute to knowledge sharing.
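
The pricing and risk-segmentation work described above often starts from a pure-premium estimate (claim frequency × average severity) per risk segment. A toy stdlib-only sketch; the segments and figures are invented for illustration:

```python
# Estimate a pure premium per risk segment from a toy claims table.
# Each row: (segment, exposure_years, claim_count, total_paid).
# All segments and numbers are fabricated for this example.
claims = [
    ("urban-young", 1200.0, 180, 5_400_000.0),
    ("urban-mature", 3000.0, 240, 6_000_000.0),
    ("rural-mature", 1800.0, 90, 1_800_000.0),
]

pure_premium = {}
for segment, exposure, count, paid in claims:
    frequency = count / exposure   # claims per exposure year
    severity = paid / count        # average cost per claim
    pure_premium[segment] = frequency * severity

for segment, premium in sorted(pure_premium.items()):
    print(f"{segment}: pure premium = {premium:,.0f} per exposure year")
```

Production pricing models replace these segment averages with regression-style frequency and severity models (e.g. GLMs or gradient boosting), but the frequency-times-severity decomposition is the same.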
    Qualifications:
    • Master's or Ph.D. in Computer Science, Statistics, Mathematics, or a related quantitative field.
    • 5+ years of experience as a Data Scientist, with a significant focus on the insurance industry.
    • Proven expertise in machine learning algorithms, statistical modeling, and data mining techniques.
    • Strong programming skills in Python or R and experience with relevant libraries.
    • Experience with SQL and data warehousing concepts.
    • Familiarity with cloud platforms and big data technologies.
    • Excellent problem-solving, analytical, and communication skills.