1973 Data Scientist jobs in Bengaluru

Data Scientist/ Sr. Data Scientist

Bengaluru, Karnataka ₹2,000,000 - ₹2,500,000 Lufthansa Technik AG

Posted today

Job Description

Role & responsibilities:

Delivery of Key Projects:

Successfully deliver advanced analytics and data science projects within predefined timelines and budget constraints.

Collaboration:

Collaborate effectively with data engineers and ML engineers to comprehend data and models, utilizing various advanced analytics capabilities.

Solution Development:

Develop efficient solutions for complex problems, demonstrating expertise in data formats such as Delta/Parquet, databases, data analytics, Python programming, and statistics.

End-to-End Pipeline Management:

Manage the entire data science pipeline, including problem scoping, data gathering, modeling, insights generation, visualizations, monitoring, and maintenance.

Dashboard Design:

Design dashboards using Python Dash (plotly) or Flask framework for effective data visualization and reporting.
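
For illustration, a minimal Plotly Dash sketch of the kind of dashboard this responsibility describes is shown below; the data file, column names, and KPI labels are placeholder assumptions, not details from the posting (Dash 2.x API assumed).

```python
# Minimal Plotly Dash sketch: one dropdown driving one line chart.
# File path and column names ("kpi", "date", "value") are placeholders.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.read_csv("metrics.csv")  # placeholder dataset

app = Dash(__name__)
app.layout = html.Div([
    html.H3("Example KPI Dashboard"),
    dcc.Dropdown(sorted(df["kpi"].unique()), value=df["kpi"].iloc[0], id="kpi-select"),
    dcc.Graph(id="kpi-trend"),
])

@app.callback(Output("kpi-trend", "figure"), Input("kpi-select", "value"))
def update_chart(kpi):
    subset = df[df["kpi"] == kpi]
    return px.line(subset, x="date", y="value", title=kpi)

if __name__ == "__main__":
    app.run(debug=True)  # app.run_server(debug=True) on older Dash versions
```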

Technological Expertise:

Utilize CI/CD tools, data pipeline technologies, and visualization/data storytelling tools proficiently.

ETL Pipeline Building:

Construct ETL pipelines using Python or Azure Synapse to facilitate efficient data processing and management.
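
As a rough sketch of the Python side of such a pipeline (file names and columns are placeholders; the Azure Synapse and Delta Lake specifics are intentionally left out):

```python
# Minimal extract-transform-load sketch in pandas; paths and column names are placeholders.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Read raw source data (a CSV here; in practice a database, API, or lake path)."""
    return pd.read_csv(path)

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Deduplicate, enforce types, and drop rows that cannot be parsed."""
    df = raw.drop_duplicates()
    df["event_date"] = pd.to_datetime(df["event_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")
    return df.dropna(subset=["event_date", "amount"])

def load(df: pd.DataFrame, target: str) -> None:
    """Write a columnar output (Parquet; a Delta table would need extra tooling)."""
    df.to_parquet(target, index=False)

if __name__ == "__main__":
    load(transform(extract("raw_events.csv")), "curated_events.parquet")
```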

Data Procurement and Management:

Independently procure and process data, ensuring seamless integration within the company's overall data management strategy.

Technology Exploration:

Continuously monitor advancements in artificial intelligence technology and identify applications for problem-solving within the company.

Technical Skills:

  • Proficiency in Python and common libraries (Dash, Pandas, Scikit-Learn, Pydantic, TensorFlow).
  • Strong Knowledge in Statistics and Probability.
  • Set up cloud alerts, monitors, dashboards, and logging systems, and troubleshoot data platform infrastructure as needed.
  • Experience in OpenShift/Kubernetes, Azure ML is desirable.
  • Proficiency in Python (including Dash) and in database query languages such as SQL or KQL.
  • Good applied statistical skills, including hands-on experience in time series modeling, predictive modeling, distributions, and regression.
  • Knowledge of DevOps/MLOps.
  • Exceptional analytical and problem-solving skills.

Desired candidate profile :

  • Bachelor's or Master's degree in Computer Science or a related technical field, or equivalent experience.
  • Total of 6 to 10 years of professional experience.
  • Minimum 4 years of experience in data science/machine learning.
  • Minimum 3 years of experience with SQL.
  • Experience in DevOps, preferably with hands-on experience in one or more cloud service providers, with Azure being preferred.
  • If the above has sparked your interest, we look forward to receiving your application.

** Note: Data Analyst/Data Engineer profiles are not suitable for this role; we are looking for core Data Scientist experience only.


Data Scientist/Senior Data Scientist

Bengaluru, Karnataka ₹1,200,000 - ₹3,600,000 Fractal

Posted today

Job Description

It's fun to work in a company where people truly BELIEVE in what they are doing

We're committed to bringing passion and customer focus to the business.
Job Title:
Data Scientist/Senior Data Scientist

Location:
Bangalore/Mumbai/Gurgaon/Chennai/Pune/Noida/Hyderabad

Responsibilities

  • Design and implement advanced solutions utilizing Large Language Models (LLMs).
  • Demonstrate self-driven initiative by taking ownership and creating end-to-end solutions.
  • Conduct research and stay informed about the latest developments in generative AI and LLMs.
  • Develop and maintain code libraries, tools, and frameworks to support generative AI development.
  • Participate in code reviews and contribute to maintaining high code quality standards.
  • Engage in the entire software development lifecycle, from design and testing to deployment and maintenance.
  • Collaborate closely with cross-functional teams to align messaging, contribute to roadmaps, and integrate software into different repositories for core system compatibility.
  • Possess strong analytical and problem-solving skills.
  • Demonstrate excellent communication skills and the ability to work effectively in a team environment.

Primary Skills

  • Natural Language Processing (NLP): Hands-on experience in use case classification, topic modeling, Q&A and chatbots, search, Document AI, summarization, and content generation.

AND/OR

  • Computer Vision and Audio: Hands-on experience in image classification, object detection, segmentation, image generation, audio, and video analysis.
  • Generative AI:

  • Proficiency with SaaS LLMs, including LangChain, LlamaIndex, vector databases, and prompt engineering (CoT, ToT, ReAct, agents). Experience with Azure OpenAI, Google Vertex AI, and AWS Bedrock for text/audio/image/video modalities (a minimal, library-agnostic retrieval-augmented generation sketch follows this list).

  • Familiarity with open-source LLMs and tools such as TensorFlow/PyTorch and Hugging Face, and with techniques such as quantization, LLM fine-tuning using PEFT, RLHF, data annotation workflows, and GPU utilization.

  • Cloud: Hands-on experience with cloud platforms such as Azure, AWS, and GCP. Cloud certification is preferred.

  • Application Development: Proficiency in Python, Docker, FastAPI/Django/Flask, and Git.
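
As referenced above, here is a deliberately library-agnostic sketch of the retrieval-augmented generation (RAG) pattern; embed() and generate() are hypothetical stand-ins for whichever embedding model and LLM endpoint (Azure OpenAI, Vertex AI, Bedrock, or an open-source model) a given project uses.

```python
# Library-agnostic RAG sketch. embed() and generate() are hypothetical
# placeholders for a real embedding model and LLM endpoint.
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("call your embedding model here")

def generate(prompt: str) -> str:
    raise NotImplementedError("call your LLM endpoint here")

def build_index(docs: list[str]) -> list[tuple[str, np.ndarray]]:
    return [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, index: list[tuple[str, np.ndarray]], k: int = 3) -> list[str]:
    q = embed(query)
    scored = sorted(
        ((float(np.dot(q, v)) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9), doc)
         for doc, v in index),
        reverse=True,
    )
    return [doc for _, doc in scored[:k]]

def answer(query: str, index: list[tuple[str, np.ndarray]]) -> str:
    context = "\n\n".join(retrieve(query, index))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```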

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us


Data Scientist

Bangalore, Karnataka IBM

Posted 1 day ago

Job Description

**Introduction**
IBM CIO Technology Platform Transformation
The IBM CIO Technology Platform Transformation team plays a crucial role in modernizing and optimizing IBM's technology infrastructure and platforms. The team aims to power AI-enabled experiences through AI-first technology platforms that enable and streamline existing processes, enhance security, and improve user experience by leveraging cutting-edge technologies such as artificial intelligence, machine learning, and cloud computing.
As part of IBM's CIO TPT team, you will contribute to transformative projects that redefine enterprise operations using AI. You will directly influence productivity, user experience, and strategic decision-making across IBM.
Some of the key responsibilities of this group include:
* Driving the adoption of emerging technologies to optimize and automate various business functions, keeping an AI-first approach with a digital experience.
* Enabling best-in-class IT with enhanced cybersecurity measures to protect sensitive information and maintain regulatory compliance.
* Modernizing legacy systems and integrating disparate applications to improve interoperability and reduce technical debt.
* Collaborating with other departments and teams to align technology efforts with broader corporate objectives.
* Providing guidance and expertise on technology trends, best practices, and standards.
This team comprises professionals with diverse backgrounds in software engineering, data science, network architecture, and security. By fostering a culture of innovation and continuous improvement, the team strives to achieve its mission of making IBM the most productive company in the world.
**Your role and responsibilities**
Role Overview:
As an AI Engineer/SW Developer, you will be in a unique position to combine your strategic thinking with your technical skills in AI, machine learning, and data analytics. You will apply your skills to help implement data-driven solutions that align with business goals. You will steer enterprise projects that improve decision-making, solve complex problems, and drive business growth. This role involves working with team members and stakeholders to translate data insights into actionable recommendations that deliver meaningful business impact. 
Key Responsibilities:
1. Implement AI, Data Science, and Technical Execution:
* Support the design, implementation and optimization of AI-driven strategies per business stakeholder requirements.
* Design and implement machine learning solutions and statistical models, from problem formulation through deployment, to analyze complex datasets and generate actionable insights.
* Apply GenAI, traditional AI, ML, NLP, computer vision, or predictive analytics where applicable.
* Collect, clean, and preprocess structured and unstructured datasets.
* Help refine data-driven methodologies for transformation projects.
* Learn and utilize cloud platforms to ensure the scalability of AI solutions.
* Leverage reusable assets and apply IBM standards for data science and development.
* Apply ML Ops and AI ethics.
2. Strategic Planning & Execution
* Translate business requirements into technical strategies.
* Ensure alignment to stakeholders' strategic direction and tactical needs.
* Apply business acumen to analyze business problems and develop solutions.
* Collaborate with stakeholders and team to prioritize work.
3. Project Management and Delivering Business Outcomes:
* Manage and contribute to various stages of AI and data science projects, from data exploration to model development to solution implementation and deployment.
* Use agile strategies to manage and execute work.
* Monitor project timelines and help resolve technical challenges.
* Design and implement measurement frameworks to benchmark AI solutions, quantifying business impact through KPIs.
4. Communication and Collaboration:
* Communicate regularly and present findings to collaborators and stakeholders, including technical and non-technical audiences.
* Create compelling data visualizations and dashboards.
* Work with data engineers, software developers, and other team members to integrate AI solutions into existing systems.
**Required technical and professional expertise**
Experience:
Hands-on experience with AI/ML technologies and statistical modelling through coursework, projects, or past internships or full-time positions. Participation in AI/data-related summits and competitions (e.g., Kaggle, hackathons) is an added advantage.
* Experience with prompt engineering or fine-tuning LLMs.
* Familiarity with tools like LangChain, Hugging Face Transformers, or OpenAI APIs.
* Understanding of model evaluation metrics specific to LLMs.
Technical Skills:
* Proficiency in SQL and Python for performing data analysis and developing machine learning models.
* Experience and/or coursework in statistics, machine learning, generative and traditional AI.
* Knowledge of common machine learning algorithms and frameworks: linear regression, decision trees, random forests, gradient boosting (e.g., XGBoost, LightGBM), neural networks, and deep learning frameworks such as TensorFlow and PyTorch (a brief training-and-evaluation sketch follows this skills list).
* Familiarity with cloud-based platforms and data processing frameworks.
* Understanding of large language models (LLMs).
* Familiarity with object-oriented programming.
* Experience and/or coursework with common Python libraries used by data scientists (e.g., NumPy, Pandas, SciPy, scikit-learn, Matplotlib, Seaborn)
* Knowledge of APIs, Docker, Flask, or model serving technologies
* Experience with tools like Jupyter, Git, or cloud platforms (AWS, Azure, IBM Cloud)
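As flagged in the list above, here is a brief sketch of the kind of training-and-evaluation loop these skills describe, using scikit-learn on synthetic data; real projects would add feature engineering, cross-validation, and hyperparameter tuning.

```python
# Minimal scikit-learn workflow: gradient boosting on synthetic data, scored with ROC-AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_test, probs):.3f}")
```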
Strategic and Analytical Skills:
* Strategic thinking and business acumen.
* Strong problem-solving abilities and eagerness to learn.
* Ability to work with datasets and derive insights.
* Attention to detail.
Communications and Soft Skills:
* Excellent communication skills, with the ability to explain technical concepts clearly.
* Independent and team oriented.
* Understands AI Ethics principles.
* Works openly and inclusively.
* Adaptable to fast-paced environments.
* Enthusiasm for learning and applying new technologies.
* Growth mindset.
* Ability to balance multiple initiatives, prioritize tasks effectively, and meet deadlines in a fast-paced environment.
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.

Data Scientist

Bangalore, Karnataka Caterpillar, Inc.

Posted 8 days ago

Job Description

**Career Area:**
Technology, Digital and Data
**Job Description:**
**Your Work Shapes the World at Caterpillar Inc.**
When you join Caterpillar, you're joining a global team who cares not just about the work we do - but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here - we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.
Performs analytical tasks and initiatives on large volumes of data to support data-driven business decisions and development, and owns the Freight Audit process.
**Responsibilities:**
- Directing high-volume data gathering, data mining, and data processing; creating appropriate data models.
- Exploring, promoting, and implementing semantic data capabilities through Natural Language Processing, text analysis and machine learning techniques.
- Leading the definition of requirements and scope for data analyses; presenting and reporting business insights to management using data visualization technologies.
- Conducting research on data model optimization and algorithms to improve the effectiveness and accuracy of data analyses.
**Essential Skills, Characteristics, & Experience :**
- 5+ years of experience in Analytics, Machine Learning, AI, Predictive Modeling
- Master's in Computer Science/Mathematics/Supply Chain/Statistics preferred
- Strong quantitative analysis, statistics, programming, and statistical modeling skills.
- Understanding of core ML concepts and its application in solving real world problems
- High level of comfort/expertise in SAS & Python
- Visualization using Power-BI/Tableau.
- Capable problem solver who uses logic to create effective solutions to complex customer problems
- Possesses a Lean mindset for executing continuous improvements and automation in the processes being delivered
- Previous experience in supply chain, logistics, and IT systems preferred
- Must demonstrate superior innovation, leadership, initiative, judgment, interpersonal skills and the ability to communicate quantitative information effectively.
**Skill Descriptors:**
**Business Statistics:** Knowledge of the statistical tools, processes, and practices to describe business results in measurable scales; ability to use statistical tools and processes to assist in making business decisions.
Level Basic Understanding:
- Identifies and describes basic statistical measures.
- Summarizes the underlying uncertainties associated with statistical analysis and reporting.
- Describes the relationship between statistical measurement and continuous improvement.
- Cites examples and meaning of statistics used in own area.
**Accuracy and Attention to Detail:** Understanding the necessity and value of accuracy; ability to complete tasks with high levels of precision.
Level Working Knowledge:
- Accurately gauges the impact and cost of errors, omissions, and oversights.
- Utilizes specific approaches and tools for checking and cross-checking outputs.
- Processes limited amounts of detailed information with good accuracy.
- Learns from mistakes and applies lessons learned.
- Develops and uses checklists to ensure that information goes out error-free.
**Analytical Thinking:** Knowledge of techniques and tools that promote effective analysis; ability to determine the root cause of organizational problems and create alternative solutions that resolve these problems.
Level Basic Understanding:
- Names specific tools or techniques that can be used to support the analytical thinking process.
- Describes specific software applications or products used for business analytics.
- Gives examples of how analytical thinking has been used to resolve problems.
- Helps others research and learn more about business analytics tools and applications.
**Machine Learning:** Knowledge of principles, technologies and algorithms of machine learning; ability to develop, implement and deliver related systems, products and services.
Level Basic Understanding:
- Explains the definition and objectives of machine learning.
- Describes the algorithms and logic of machine learning.
- Distinguishes between machine learning and deep learning.
- Gives several examples on the implementation of machine learning.
**Programming Languages:** Knowledge of basic concepts and capabilities of programming; ability to use tools, techniques and platforms in order to write and modify programming languages.
Level Basic Understanding:
- Describes the basic concepts of programming and program construction activities.
- Uses programming documentation including program specifications in order to maintain standards.
- Describes the capabilities of major programming languages.
- Identifies locally relevant programming tools.
**Query and Database Access Tools:** Knowledge of data management systems; ability to use, support and access facilities for searching, extracting and formatting data for further use.
Level Basic Understanding:
- Identifies query facilities that are available in one's own environment.
- Explains organization's standards, procedures and practices for building queries.
- Describes the functions and features of query and database access tools.
- Identifies the key benefits and drawbacks of query languages.
**Requirements Analysis:** Knowledge of tools, methods, and techniques of requirement analysis; ability to elicit, analyze and record required business functionality and non-functionality requirements to ensure the success of a system or software development project.
Level Working Knowledge:
- Follows policies, practices and standards for determining functional and informational requirements.
- Confirms deliverables associated with requirements analysis.
- Communicates with customers and users to elicit and gather client requirements.
- Participates in the preparation of detailed documentation and requirements.
- Utilizes specific organizational methods, tools and techniques for requirements analysis.
**Posting Dates:**
September 19, 2025 - October 2, 2025
Caterpillar is an Equal Opportunity Employer. Qualified applicants of any age are encouraged to apply

Data Scientist

Bengaluru, Karnataka NetApp

Posted 10 days ago

Job Description

**Job Summary**
We are looking for a talented Data Scientist to join our team. The ideal candidate will have a strong foundation in data analysis, statistical models, and machine learning algorithms. You will work closely with the team to solve complex problems and drive business decisions using data. This role requires strategic thinking, problem-solving skills, and a passion for data.
**Job Responsibilities**
+ Analyse large, complex datasets to extract insights and determine appropriate techniques to use.
+ Build predictive models and machine learning algorithms, and conduct A/B tests to assess model effectiveness.
+ Present information using data visualization techniques.
+ Collaborate with different teams (e.g., product development, marketing) and stakeholders to understand business needs and devise possible solutions.
+ Stay updated with the latest technology trends in data science.
+ Develop and implement real-time machine learning models for various projects.
+ Engage with clients and consultants to gather and understand project requirements and expectations.
+ Write well-structured, detailed, and compute-efficient code in Python to facilitate data analysis and model development.
+ Utilize IDEs such as Jupyter Notebook, Spyder, and PyCharm for coding and model development.
+ Apply agile methodology in project execution, participating in sprints, stand-ups, and retrospectives to enhance team collaboration and efficiency.
**Education**
IC - Typically requires a minimum of 5 years of related experience. Mgr & Exec - Typically requires a minimum of 3 years of related experience.

At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process.
**Equal Opportunity Employer:**
NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification.
**Why NetApp?**
We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better - but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches.
We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life.
If you want to help us build knowledge and solve big problems, let's talk.

Data Scientist

Bangalore, Karnataka Huron Consulting Group

Posted 10 days ago

Job Description

Huron helps its clients drive growth, enhance performance and sustain leadership in the markets they serve. We help healthcare organizations build innovation capabilities and accelerate key growth initiatives, enabling organizations to own the future, instead of being disrupted by it. Together, we empower clients to create sustainable growth, optimize internal processes and deliver better consumer outcomes.
Health systems, hospitals and medical clinics are under immense pressure to improve clinical outcomes and reduce the cost of providing patient care. Investing in new partnerships, clinical services and technology is not enough to create meaningful and substantive change. To succeed long-term, healthcare organizations must empower leaders, clinicians, employees, affiliates and communities to build cultures that foster innovation to achieve the best outcomes for patients.
Joining the Huron team means you'll help our clients evolve and adapt to the rapidly changing healthcare environment and optimize existing business operations, improve clinical outcomes, create a more consumer-centric healthcare experience, and drive physician, patient and employee engagement across the enterprise.
Join our team as the expert you are now and create your future.
We are seeking a highly motivated and detail-oriented Associate Data Scientist with a strong foundation in statistics, machine learning, classification, regression, clustering, recommendation, anomaly detection, Natural Language Processing (NLP), and healthcare data analytics. This role requires hands-on experience in building and deploying predictive models, conducting clustering and segmentation analysis, applying Market Basket Analysis (MBA) for root-cause pattern discovery, and using supervised and unsupervised ML techniques to uncover insights from complex datasets. You will coordinate with stakeholders to develop data-driven solutions that address challenges in denials, collections, write-offs, and payment forecasting.
Job Title: Data Scientist
Practice: Healthcare
Level: Associate
Location: Bangalore
Key Responsibilities:
- Analyze large-scale healthcare claims and transaction data (charges, payments, denials, write-offs, etc.)
- Develop and implement predictive models, such as XGBoost, Random Forest, or Logistic Regression, for denial prediction, transaction classification, and payment forecasting
- Apply unsupervised learning techniques (e.g., clustering, MBA) to detect denial root causes, payer patterns, and operational inefficiencies.
- Identify trends and anomalies to support root cause analysis in denials and underpayments
- Build and validate machine learning models (classification, forecasting, clustering) for denial prediction and pattern recognition, cash collection forecasting, and write-off root cause analysis (a brief clustering sketch follows this list)
- Use tools such as Python (scikit-learn, XGBoost, pandas), SQL, and AWS services like SageMaker, Athena
- Translate business problems into machine learning problems and deliver solutions with clear, measurable outcomes.
- Collaborate with business stakeholders to define use cases and translate them into analytical models and interactive insights
- Work with large datasets from AWS Redshift, S3, Oracle, SageMaker, Excel, and other sources to preprocess and prepare training datasets
- Provide statistical analysis and model validation to ensure accuracy and reliability on unseen RCM data
- Automate and replace manual Excel-based reports with AI-powered analytics and decision support tools
- Collaborate with data visualization teams to integrate model outputs into business-friendly dashboards using AWS QuickSight or Power BI
- Assist in integrating models into production environments and monitoring performance
- Work closely with domain experts, operations leaders, and client teams to translate business questions into analytical solutions
- Participate in brainstorming sessions for new use cases and innovations
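As noted above, here is a brief sketch of an unsupervised segmentation step of the kind these responsibilities describe, using K-Means on synthetic claim-level features; the feature names and values are placeholders, not real RCM data.

```python
# Minimal segmentation sketch: scale synthetic claim-level features, cluster with
# K-Means, and check cluster cohesion with a silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
claims = rng.normal(size=(500, 4))  # placeholder features, e.g. charge amount, days-to-pay
scaled = StandardScaler().fit_transform(claims)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
print("silhouette:", round(silhouette_score(scaled, kmeans.labels_), 3))
```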
Required Qualifications & Skills:
- 5+ years of experience in analytics and data science
- Proficient in Python and SQL for data transformation and model building
- Hands-on experience with supervised and unsupervised ML techniques, including clustering, classification, and association rule mining
- Exposure to statistics, hypothesis testing, and model performance evaluation techniques (e.g., ROC-AUC, precision/recall, F1)
- Experience with AWS tools such as SageMaker, Redshift, Athena, S3; familiarity with Snowflake is a plus
- Knowledge of Revenue Cycle Management (RCM) preferred
- Exposure to Python, SQL, and data querying for extracting insights, as well as Excel formulas
- Good communication skills and ability to work with business teams.
- Eagerness to learn cloud-based data tools (AWS, S3, Redshift, Snowflake, etc.)
**Position Level**
Associate
**Country**
India
At Huron, we're redefining what a consulting organization can be. We go beyond advice to deliver results that last. We inherit our client's challenges as if they were our own. We help them transform for the future. We advocate. We make a difference. And we intelligently, passionately, relentlessly do great work, together.
Are you the kind of person who stands ready to jump in, roll up your sleeves and transform ideas into action? Then come discover Huron.
Whether you have years of experience or come right out of college, we invite you to explore our many opportunities. Find out how you can use your talents and develop your skills to make an impact immediately. Learn about how our culture and values provide you with the kind of environment that invites new ideas and innovation. Come see how we collaborate with each other in a culture of learning, coaching, diversity and inclusion. And hear about our unwavering commitment to make a difference in partnership with our clients, shareholders, communities and colleagues.
Huron Consulting Group offers a competitive compensation and benefits package including medical, dental, and vision coverage to employees and dependents; a 401(k) plan with a generous employer match; an employee stock purchase plan; a generous Paid Time Off policy; and paid parental leave and adoption assistance. Our Wellness Program supports employee total well-being by providing free annual health screenings and coaching, bank at work, and on-site workshops, as well as ongoing programs recognizing major events in the lives of our employees throughout the year. All benefits and programs are subject to applicable eligibility requirements.
Huron is fully committed to providing equal employment opportunity to job applicants and employees in recruitment, hiring, employment, compensation, benefits, promotions, transfers, training, and all other terms and conditions of employment. Huron will not discriminate on the basis of age, race, color, gender, marital status, sexual orientation, gender identity, pregnancy, national origin, religion, veteran status, physical or mental disability, genetic information, creed, citizenship or any other status protected by laws or regulations in the locations where we do business. We endeavor to maintain a drug-free workplace.

Data Scientist

Bengaluru, Karnataka LTIMindtree

Posted 1 day ago

Job Description

We are hiring for Data Scientist

Experience - 10 to 15 Years

Location: Bangalore, Chennai, Pune

Notice period - immediate to 30 days


Key Responsibilities

Lead the design, architecture, and deployment of large-scale ML, Deep Learning, and Generative AI systems.

Drive AI/ML strategy in alignment with organizational goals and Industry 4.0 initiatives.

Architect and implement state-of-the-art Generative AI solutions including LLMs, VLMs, diffusion models, and multimodal systems.

Oversee development of scalable ML pipelines, ensuring robustness, reliability, and efficiency.

Provide technical leadership in MLOps, deployment strategies, and governance for AI models across edge and cloud.

Collaborate with executive leadership, product managers, and domain experts to shape AI-driven innovation.

Evaluate emerging research and technologies in AI/ML and recommend adoption strategies for enterprise-scale impact.

Mentor, coach, and grow data scientists, ML engineers, and research teams.

Required Skills & Qualifications

Education: Master’s/PhD in Computer Science, Data Science, AI/ML, or related field.

Experience: 10+ years of experience in Machine Learning and Deep Learning with at least 3+ years in leadership roles.

Proven track record in deploying AI solutions at scale across multiple domains (manufacturing, healthcare, finance, logistics, etc.).

Strong expertise in deep learning frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers.

Hands-on experience with Generative AI models (LLMs, GANs, diffusion models, transformers, VLMs).

Strong knowledge of model fine-tuning, transfer learning, RAG pipelines, and vector databases.

Deep understanding of distributed systems, model optimization, and high-performance computing for AI workloads.

Proven ability to architect AI solutions across cloud (AWS, Azure, GCP) and hybrid/edge deployments.

Experience with MLOps (MLflow, Kubeflow, Docker, Kubernetes, CI/CD) and responsible AI practices.
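
For example, experiment tracking with MLflow (one of the tools listed) might look roughly like the sketch below; the dataset, parameters, and metric are placeholders.

```python
# Rough MLflow tracking sketch: log a parameter, a metric, and the fitted model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=200, random_state=0)  # placeholder data
model = LogisticRegression(max_iter=500).fit(X, y)

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.sklearn.log_model(model, "model")
```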

Preferred Skills

Experience in Industry 4.0 use cases such as digital twins, predictive maintenance, robotics, and computer vision in industrial environments.

Familiarity with Agentic AI frameworks, Model Context Protocol (MCP), and knowledge graph-based reasoning.

Understanding of reinforcement learning, edge AI, and federated learning.

Global experience working across geographies and diverse teams.


If interested, kindly share your updated CV with the details below:


Candidate Name -

Candidate Email ID -

Candidate Mobile Number -

Date of Birth(As per Aadhar)-

Current/ Preferred Location-

Total Years of IT experience -

Relevant Years of Experience -

Current Company-

Current CTC -

Expected CTC -

Any Offers in hand or Pipeline -

Notice Period (If serving then LWD) -


Data Scientist

Bengaluru, Karnataka Rakuten Symphony

Posted 1 day ago

Job Description

Job Title: AI/ML Engineer (Generative AI & ML Ops)

Experience: 5 to 8+ years in AI/ML development.

Location: Bangalore (Hybrid)


Why should you choose us?

Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan’s newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let’s build the future of mobile telecommunications together!


Required Skills and Expertise:

AI/ML Strategy and Leadership :

  • Define the AI/ML strategy and roadmap aligned with the product vision.
  • Identify and prioritize AI/ML use cases, including classical ML, Generative AI, and Agentic AI, relevant to our product offerings.
  • Build and lead a high-performing AI/ML team by mentoring and upskilling existing non-AI/ML team members.
  • Stay updated with the latest advancements in AI/ML, generative AI, Agentic AI and ML Ops, and apply them to solve business problems.


AI/ML Development :

  • Proficient in Python programming and libraries like NumPy, Pandas, Scikit-learn, and Matplotlib.
  • Strong understanding of machine learning algorithms, deep learning architectures, and generative AI models.
  • Design, develop, and deploy classical machine learning models, including supervised, unsupervised, and reinforcement learning techniques.
  • Hands-on experience with AI/ML frameworks such as scikit-learn, XGBoost, TensorFlow, PyTorch, Keras, Hugging Face, and OpenAI APIs (e.g., GPT models).
  • Experience with feature engineering, model evaluation, and hyperparameter tuning.
  • Experience with LangChain modules, including Chains, Memory, Tools, and Agents.
  • Build and fine-tune generative AI models (e.g., GPT, DALL-E, Stable Diffusion) for specific use cases.
  • Must have experience with at least one agentic AI framework. Leverage LLMs (e.g., GPT, Claude, LLaMA) and multi-modal models to build intelligent agents that can interact with users and systems (a minimal, framework-agnostic agent-loop sketch follows this list).
  • Design and develop autonomous AI agents capable of reasoning, planning, and executing tasks in dynamic environments.
  • Implement prompt engineering and fine-tuning of LLMs to optimize agent behaviour for specific tasks.
  • Optimize models for performance, scalability, and cost-efficiency.
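
As referenced above, here is a minimal, framework-agnostic sketch of a tool-using agent loop; call_llm() is a hypothetical placeholder for a real LLM endpoint, and the single calculator tool is a toy for illustration only.

```python
# Framework-agnostic sketch of a ReAct-style agent loop. call_llm() is a
# hypothetical placeholder; the calculator tool is a toy for illustration.
import json

def call_llm(prompt: str) -> str:
    raise NotImplementedError("call your LLM endpoint here")

TOOLS = {"calculator": lambda expr: str(eval(expr, {"__builtins__": {}}))}  # toy tool

SYSTEM = (
    "Answer directly or call a tool. Reply with JSON only: "
    '{"action": "final", "answer": "..."} or '
    '{"action": "tool", "tool": "calculator", "input": "..."}'
)

def run_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"{SYSTEM}\nQuestion: {question}\n"
    for _ in range(max_steps):
        step = json.loads(call_llm(transcript))
        if step["action"] == "final":
            return step["answer"]
        observation = TOOLS[step["tool"]](step["input"])
        transcript += f'Tool {step["tool"]} returned: {observation}\n'
    return "Stopped after reaching the step limit."
```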


ML Ops and DevOps for AI/ML:

  • Establish and maintain an end-to-end MLOps pipeline for model development, deployment, monitoring, and retraining.
  • Automate model training, testing, and deployment workflows using CI/CD pipelines.
  • Implement robust version control for datasets, models, and code.
  • Monitor model performance in production and implement feedback loops for continuous improvement.
  • Proficiency in MLOps tools and platforms such as MLflow, Kubeflow, TFX, SageMaker.
  • Familiarity with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI for AI/ML workflows.
  • Expertise in deploying models on cloud platforms (AWS, Azure, GCP)


RAKUTEN SHUGI PRINCIPLES :

Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success.

Always improve, always advance. Only be satisfied with complete success - Kaizen.

Be passionately professional. Take an uncompromising approach to your work and be determined to be the best.

Hypothesize - Practice - Validate - Shikumika. Use the Rakuten Cycle to succeed in unknown territory.

Maximize Customer Satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile.

Speed! Speed! Speed! Always be conscious of time. Take charge, set clear goals, and engage your team.


Data Scientist

Bengaluru, Karnataka CirrusLabs

Posted 1 day ago

Job Description

Experience : 8+ Years

Mandatory skills: Python, GenAI, traditional ML, core data science, MLOps, Agentic AI


Position Overview

We are seeking an experienced AI Architect to join our dynamic team. This role combines deep technical expertise in traditional statistics, classical machine learning, and modern AI with full-stack development capabilities to build end-to-end intelligent systems. You'll work on revolutionary projects involving generative AI, large language models, and advanced data science applications.

Key Responsibilities

  • AI/ML Development & Data Science: Design, develop, and deploy machine learning models ranging from classical algorithms to deep learning for production environments
  • Apply traditional statistical methods including hypothesis testing, regression analysis, time series forecasting, and experimental design
  • Build and optimize large language model applications including fine-tuning, prompt engineering, and model evaluation
  • Implement Retrieval Augmented Generation (RAG) systems for enhanced AI capabilities
  • Conduct advanced data analysis, statistical modeling, A/B testing, and predictive analytics using both classical and modern techniques (a brief hypothesis-testing sketch follows this list)
  • Research and prototype cutting-edge generative AI solutions
  • Traditional ML & Statistics: Implement classical machine learning algorithms including linear/logistic regression, decision trees, random forests, SVM, clustering, and ensemble methods
  • Perform feature engineering, selection, and dimensionality reduction techniques
  • Conduct statistical inference, confidence intervals, and significance testing
  • Design and analyze controlled experiments and observational studies
  • Apply Bayesian methods and probabilistic modeling approaches
  • Full Stack Development: Develop scalable front-end applications using modern frameworks (React, Vue.js, Angular)
  • Build robust backend services and APIs using Python, Node.js, or similar technologies
  • Design and implement database solutions (SQL/NoSQL) optimized for ML workloads
  • Create intuitive user interfaces for AI-powered applications and statistical dashboards
  • MLOps & Infrastructure: Establish and maintain ML pipelines for model training, validation, and deployment
  • Implement CI/CD workflows for ML models using tools like MLflow, Kubeflow, or similar
  • Monitor model performance, drift detection, and automated retraining systems
  • Deploy and scale ML solutions using cloud platforms (AWS, GCP, Azure)
  • Containerize applications using Docker and orchestrate with Kubernetes
  • Collaboration & Leadership: Work closely with data scientists, product managers, and engineering teams
  • Mentor junior engineers and contribute to technical decision-making
  • Participate in code reviews and maintain high development standards
  • Stay current with latest AI/ML trends and technologies
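
As flagged in the list above, here is a brief hypothesis-testing sketch of the A/B-testing work these responsibilities describe, using Welch's two-sample t-test on synthetic data; the metric values are placeholders.

```python
# Minimal A/B-test sketch: Welch's t-test on synthetic per-user metric values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
control = rng.normal(loc=10.0, scale=2.0, size=400)  # placeholder metric, group A
variant = rng.normal(loc=9.6, scale=2.0, size=400)   # placeholder metric, group B

t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```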

Required Qualifications

  • Experience & Education: 7-8 years of professional software development experience
  • Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Machine Learning, Data Science, or related field
  • 6+ years of hands-on AI/ML experience in production environments
  • Technical Skills: Programming: Expert proficiency in Python; strong experience with JavaScript/TypeScript; R is a plus
  • Traditional ML: Scikit-learn, XGBoost, LightGBM, classical algorithms and ensemble methods
  • Statistics: Hypothesis testing, regression analysis, ANOVA, time series analysis, experimental design, Bayesian inference
  • Statistical Tools: Experience with R, SAS, SPSS, or similar statistical software packages
  • Deep Learning: TensorFlow, PyTorch, neural networks, computer vision, NLP
  • LLM Experience: Working with GPT, Claude, Llama, or similar models; experience with fine-tuning and prompt engineering
  • RAG Implementation: Vector databases (Pinecone, Weaviate, Chroma), embedding models, semantic search
  • Data Science: Pandas, NumPy, statistical analysis, data visualization (Matplotlib, Plotly, Seaborn), feature engineering
  • Full Stack: React/Vue.js, Node.js/FastAPI, REST/GraphQL APIs
  • Databases: PostgreSQL, MongoDB, Redis, vector databases
  • MLOps: Docker, Kubernetes, CI/CD, model versioning, monitoring tools
  • Cloud Platforms: AWS/GCP/Azure, serverless architectures
  • Soft Skills: Strong problem-solving and analytical thinking
  • Excellent communication and collaboration abilities
  • Self-motivated with ability to work in fast-paced environments
  • Experience with agile development methodologies
  • Preferred Qualifications: Experience with causal inference methods and econometric techniques
  • Knowledge of distributed computing frameworks (Spark, Dask)
  • Experience with edge AI and model optimization techniques
  • Publications in AI/ML/Statistics conferences or journals
  • Open source contributions to ML/statistical projects
  • Experience with advanced statistical modeling and multivariate analysis
  • Familiarity with operations research and optimization techniques