1354 AI Researchers jobs in Bengaluru
Trainee Intern Data Science
Posted 12 days ago
Job Description
Company Overview – WhatJobs Ltd
WhatJobs is a global job search engine and career platform operating in over 50 countries. We leverage advanced technology and AI-driven tools to connect millions of job seekers with opportunities, helping businesses and individuals achieve their goals.
Position: Data Science Trainee/Intern
Location: Commercial Street
Duration: 3 Months
Type: Internship/Traineeship (with potential for full-time opportunities)
Role Overview
We are looking for enthusiastic Data Science trainees/interns eager to explore the world of data analytics, machine learning, and business insights. You will work on real-world datasets, apply statistical and computational techniques, and contribute to data-driven decision-making at WhatJobs.
Key Responsibilities
- Collect, clean, and analyze datasets to derive meaningful insights.
- Assist in building and evaluating machine learning models.
- Work with visualization tools to present analytical results.
- Support the team in developing data pipelines and automation scripts.
- Research new tools, techniques, and best practices in data science.
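As a rough illustration of the "collect, clean, and analyze" workflow above, here is a minimal pandas sketch; the dataset and column names are invented for the example:

```python
import pandas as pd

# Toy dataset with the kinds of problems an intern would clean up:
# missing values, inconsistent casing, and duplicate rows.
raw = pd.DataFrame({
    "title": ["Data Scientist", "data scientist", "ML Engineer", "ML Engineer", None],
    "salary_lpa": [25, 25, 18, 18, 30],
})

df = (
    raw.dropna(subset=["title"])                         # drop rows with no title
       .assign(title=lambda d: d["title"].str.title())   # normalise casing
       .drop_duplicates()                                # remove exact duplicates
)

# Derive a simple insight: mean salary per (cleaned) title.
insight = df.groupby("title")["salary_lpa"].mean()
print(insight)
```

The same clean-then-aggregate pattern scales from toy frames like this one to real datasets read with `pd.read_csv`.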
Requirements
- Basic knowledge of Python and data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn).
- Understanding of statistics, probability, and data analysis techniques.
- Familiarity with machine learning concepts.
- Knowledge of Google Data Studio and BigQuery for reporting and data management.
- Strong analytical skills and eagerness to learn.
- Good communication and teamwork abilities.
What We Offer
- Hands-on experience with real-world data science projects.
- Guidance and mentorship from experienced data professionals.
- Opportunity to work with a global technology platform.
- Certificate of completion and potential for full-time role.
Data Science
Posted today
Job Description
JD – Sr. Data Scientist - Media Analytics
Experience: 7–8 years
Location: Bangalore (Onsite)
Only candidates who can join within 15 days should apply.
Job Summary
We are looking for a Senior Data Scientist to support media use cases, with strong expertise in Media Analytics and a proven track record in driving marketing measurement and optimisation. The role will be responsible for delivering actionable insights, evaluating marketing effectiveness, and driving data-backed recommendations to optimise media investments.
Required Skills & Experience
- 7–8 years of experience in Data Science/Analytics with a strong focus on media measurement, marketing effectiveness, and performance optimization.
- A minimum of a bachelor's degree in Computer Science, Data Science, Machine Learning, Mathematics, Statistics, Economics, or other related fields with emphasis on quantitative methods and solutions. An advanced degree in a similar field is preferred.
- Experience in developing and implementing advanced analytics models such as Customer Lifetime Value, NLP, Look-Alike models, and Anomaly Detection.
- Familiarity with large language models (LLMs) and generative AI is highly desirable.
- Ability to write (and refactor) robust code in Python, Spark, SQL, employing best practice coding standards and documentation.
- Proven experience with Databricks platform, Snowflake, and Azure cloud services.
- Experience in MLOps techniques, including CI/CD pipelines, automated testing, and model monitoring.
- Experience with BI tools such as Power BI, DOMO and Tableau.
- Strong understanding of machine learning algorithms, data structures, and software design principles.
- Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
- Strong communication and interpersonal skills with the ability to collaborate and partner across diverse global multi-functional teams.
Desired Qualifications:
- Experience with classification models, time series forecasting, customer lifetime value model, large language models (LLMs) and generative AI.
- Preferably from the Retail, eCommerce or CPG industry
Key Responsibilities
- Develop and implement advanced analytics models for customer lifetime value, anomaly detection, and media mix modeling.
- Solve complex media optimisation problems and conduct A/B testing to evaluate proposed solutions.
- Partner with domain leads and business stakeholders to clearly understand business requirements; design and develop analytical and AI solutions to drive recommendations that are clearly linked to the organization's strategy and OKRs and drive performance improvement.
- Support the domain leads and business teams to solve complex analytical challenges.
- Work closely with cross-functional teams of business partners, data scientists, data engineers, solutions, and data architects to quickly deliver scalable Artificial Intelligence (AI) solutions including DL, ML, NLP, optimization etc.
- Development, deployment, and maintenance of scalable AI solutions including the optimization of data queries, code refactoring, shared library usage and documentation of process and solution artifacts.
- Research and promote latest technologies, design patterns and best practice delivery models that drive optimal business value and ensure continuous improvement of team, processes, and platforms.
- Employ innovative thinking and always seek the best ways of working for our teams and business partners.
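The A/B testing responsibility above often reduces to comparing conversion rates between two media variants. A minimal sketch using only the Python standard library; the campaign numbers are hypothetical:

```python
from statistics import NormalDist

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical campaign: variant B lifts conversion from 2.0% to 2.6%.
z, p = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

In practice a team would also fix the sample size and significance level before launching the test, rather than peeking at results.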
Data Science
Posted today
Job Description
Company Description
FACE Prep is one of India's largest placement-focused skill development companies, specializing in job preparation for the tech sector. Since its inception in 2008, FACE Prep has helped millions of students kickstart their careers. The company offers a variety of programs, including masterclasses, self-paced courses, and workshops to help students acquire the skills needed for top-paying jobs in tech. FACE Prep's alumni work at leading tech companies like Google, Microsoft, Meta, Adobe, PayPal, and many more.
Role Description
This is a full-time, on-site role for an AI & ML Mentor (Faculty) located in Bengaluru. The AI & ML Mentor will be responsible for delivering quality education through workshops, bootcamps, and mentoring sessions. Day-to-day tasks include preparing instructional materials, guiding students through complex AI and ML concepts, and providing individual mentorship. The role involves staying updated with the latest advancements in AI and ML to ensure the curriculum remains current and effective.
Qualifications
- In-depth knowledge and expertise in Artificial Intelligence (AI) and Machine Learning (ML)
- Experience in developing and delivering instructional content, including workshops and bootcamps
- Proficiency in Python and related libraries such as TensorFlow, PyTorch, and scikit-learn
- Excellent communication and presentation skills
- Ability to mentor and guide students at different stages of learning
- Experience in the education sector or similar role is a plus
- Bachelor's or Master's degree in Computer Science, Data Science, or related field
Data Science
Posted today
Job Description
Job Role- Data Scientist (Sr. Consultant)
At Deloitte, we do not offer you just a job, but a career in the highly sought-after Risk Management field. We are the business leader in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Our clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security.
By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of risk domains. In the process, you will gain exposure to the risk-centric challenges faced by organizations across a range of industry sectors, become a subject matter expert in those areas, and develop into a well-rounded professional who not only has depth in a few risk domains but also breadth of exposure to a wide variety of them.
So, if you are someone who believes in disrupting through innovation and execution of ideas, Deloitte Risk and Financial Advisory is the place to be.
Work you'll do
The key job responsibilities will be to:
- Develop database schemas, tables and dictionaries
- Develop, implement and optimize stored procedures, functions and indexes
- Ensure the data quality and integrity in databases
- Create complex functions, scripts, stored procedures and triggers to support application development
- Fix issues related to database performance, ensuring stability, reliability, and security
- Design, create, and implement database systems based on the end user's requirements
- Prepare documentation for database applications
- Memory management for database systems
- Develop best practices for database design and development activities
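Several of the responsibilities above concern index and performance tuning. A small, self-contained sketch using Python's built-in sqlite3 (standing in for the SQL Server environment this role actually targets) shows how adding an index changes a query plan; table and column names are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
con.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [(f"ACC{i % 100:03d}", i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM trades WHERE account = 'ACC042'"

# Without an index, the planner has to scan every row.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# After adding an index on the filter column, it becomes an index search.
con.execute("CREATE INDEX idx_trades_account ON trades (account)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(plan_before)
print(plan_after)
```

The same discipline (inspect the plan, index the filter/join columns, re-measure) carries over to SQL Server, where the equivalent tool is the graphical or `SET SHOWPLAN` execution plan.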
The Team
Our Financial Technology practice develops and licenses a growing family of proprietary software products to assist financial institutions with a number of complex topics, such as accounting for credit deteriorated assets and the administration of investments in leveraged loans.
We are looking to add dedicated software engineers to our team. In addition to competitive compensation and benefits, we provide excellent opportunities for growth and learning and invest in our talent development.
Qualifications
Required:
- Bachelor's degree in computer science or related field
- At least 5 to 7 years of experience as a SQL developer, with strong understanding of Microsoft SQL Server database
- Strong experience with Python coding and libraries (Pandas, NumPy, PySpark etc.)
- Hands-on experience with machine learning algorithms and frameworks
- Understanding and implementation of AI and generative AI solutions
- Proficiency in data visualization & data analytics
- Knowledge of best practices when dealing with relational databases
- Capable of troubleshooting common database issues
- Familiar with tools that can aid with profiling server resource usage and optimizing it
- Knowledge in performance optimization techniques
- Excellent verbal and written communication
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code:
Data Science
Posted today
Job Description
Data Science + Gen AI with the below mandatory skills.
Must Have: Agent Frameworks, RAG Frameworks, Chunking Strategies, LLMs, AI on Cloud Services, Open-Source Frameworks such as LangChain and LlamaIndex, Vector Databases, Token Management, Knowledge Graphs, Vision
Exp Range - 6 to 9 years
Requirements
Major Duties & Responsibilities
• Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
• Create Proofs of Concept (POCs) / Minimum Viable Products (MVPs), then guide them through to production deployment and operationalization
• Influence machine learning strategy for Digital programs and projects
• Make solution recommendations that appropriately balance speed to market and analytical soundness
• Explore design options to assess efficiency and impact; develop approaches to improve robustness and rigor
• Develop analytical/modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow)
• Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations
• Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories
• Create algorithms to extract information from large, multiparametric data sets
• Deploy algorithms to production to identify actionable insights from large databases
• Compare results from various methodologies and recommend optimal techniques
• Develop and embed automated processes for predictive model validation, deployment, and implementation
• Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
• Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
• Lead discussions at peer review and use interpersonal skills to positively influence decision making
• Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
• Facilitate cross-geography sharing of new ideas, learnings, and best practices
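To make the mandatory Gen AI skills concrete, here is a toy sketch of the chunking-plus-retrieval core of a RAG pipeline. A bag-of-words vector stands in for a real embedding model, and the document text is invented; a production system would use a framework like LangChain with a real vector database:

```python
import math
from collections import Counter

def chunk(text, size=120, overlap=30):
    """One simple chunking strategy: fixed-size character windows with overlap."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Stand-in for a real embedding model: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = ("Vector databases store embeddings for fast similarity search. "
       "Chunking splits long documents so each piece fits the model's context window. "
       "Token management keeps prompts within the LLM's budget.")

# Build a tiny in-memory "vector index" of (chunk, vector) pairs.
index = [(c, embed(c)) for c in chunk(doc)]

# Retrieve the chunk most similar to the query; a real RAG system would
# then pass this chunk to an LLM as grounding context.
query = embed("each piece fits the context window")
best = max(index, key=lambda pair: cosine(query, pair[1]))[0]
print(best)
```

Chunk size and overlap are the main knobs of this strategy: larger chunks carry more context per retrieval, smaller ones localize answers more precisely.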
Required Qualifications
• Bachelor of Science or Bachelor of Engineering at a minimum.
• 6-9 years of work experience as a Data Scientist
• A combination of business focus, strong analytical and problem-solving skills, and programming knowledge, to be able to quickly cycle hypotheses through the discovery phase of a project
• Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
• Good hands-on skills in both feature engineering and hyperparameter optimization
• Experience producing high-quality code, tests, and documentation
• Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, Databricks
• Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
• Proficiency in statistical concepts and ML algorithms
• Good knowledge of Agile principles and processes
• Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
• Ability to share ideas in a compelling manner and to clearly summarize and communicate data analysis assumptions and results
• Self-motivated and a proactive problem solver who can work independently and in teams
Work with one of the Big 4's in India
Data Science
Posted today
Job Description
is on a mission to become India's #1 and largest online expo and community for movie lovers. We are building an integrated platform that combines a comprehensive movie database (like IMDb) with a dynamic social community, creating the ultimate destination for fans to discover, discuss, and celebrate cinema. Our vision is to empower millions of users, connect them with creators, and build a powerful promotional launchpad for the film industry.
We are an early-stage startup in our foundational phase, and we are looking for passionate and driven interns to join our core team and help us build the data engine that will power our entire platform.
The Opportunity
This is not a typical internship. You will be joining us at the ground level and will play a crucial role in building the most comprehensive movie database in the country. You will work directly with our senior development team to collect, scrape, and structure the data that is the lifeblood of the platform. This role is perfect for a self-starter who is eager to learn and wants to make a tangible impact on a product from day one.
What You Will Do:
- Data Collection: Systematically gather information about movies, actors, directors, and more from various online sources.
- Data Scraping: Assist in developing and running scripts to automate the collection of large volumes of data.
- Database Management: Learn to populate and manage our core databases (MySQL/MongoDB), ensuring data integrity and accuracy.
- Data Structuring: Use tools like Microsoft Excel and Google Sheets to clean, organize, and prepare data for database entry.
- Collaborate & Learn: Work alongside a senior developer who will mentor you, while also having the opportunity for self-driven learning on the job.
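As a rough illustration of the scraping and structuring tasks described above, here is a minimal parser built on Python's standard html.parser; the markup and class names are a made-up stand-in for a real source page:

```python
from html.parser import HTMLParser

# Sample markup standing in for a page we would scrape (hypothetical layout).
SAMPLE = """
<ul class="movies">
  <li><span class="title">Sholay</span> <span class="year">1975</span></li>
  <li><span class="title">Lagaan</span> <span class="year">2001</span></li>
</ul>
"""

class MovieParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.field = None   # which span we are currently inside, if any
        self.movies = []    # accumulated {"title": ..., "year": ...} records

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("title", "year"):
            self.field = cls
            if cls == "title":          # a new record starts at each title
                self.movies.append({})

    def handle_data(self, data):
        if self.field and data.strip():
            self.movies[-1][self.field] = data.strip()
            self.field = None

parser = MovieParser()
parser.feed(SAMPLE)
print(parser.movies)
```

In a real pipeline the parsed records would then be validated and inserted into MySQL/MongoDB; libraries like BeautifulSoup make the parsing step less manual, but the structure of the work is the same.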
What We Are Looking For:
- Currently pursuing or a recent graduate in Computer Science, Information Technology, Data Science, or a related field.
- A strong analytical mindset and excellent problem-solving skills.
- Proficient in Microsoft Excel or Google Sheets for data organization.
- Basic understanding of database concepts (familiarity with MySQL or MongoDB is a huge plus).
- A genuine passion for movies and the entertainment industry.
- Most importantly, a strong willingness to learn, adapt, and take initiative.
What We Offer:
- Hands-On Experience: Get real-world experience in data scraping, database management, and backend processes in a fast-paced startup environment.
- Mentorship: Receive direct guidance and mentorship from our senior development team.
- Certificate of Completion: Receive an official internship certificate recognizing your contribution and skills learned.
- Stipend: A monthly stipend of ₹2,000–₹5,000.
Details:
- Role: Data Science & Backend Intern
- Location: Jayanagar, Bengaluru (In-Office)
- Duration: 3 to 6 months
- Start Date: Immediate
If you are excited about building something massive from scratch and love the world of movies, we want to hear from you.
Data Science
Posted today
Job Description
Industry: Industrial Applications
Experience: 7–8 Years
About the Role
We are seeking an experienced Data Scientist with strong expertise in statistical analysis, machine learning, and predictive analytics. The ideal candidate should have hands-on experience in applying advanced methodologies and supervised learning techniques to solve real-world business problems in the industrial domain.
Key Responsibilities
- Develop, implement, and optimize machine learning models for industrial applications.
- Apply statistical analysis, predictive modeling, and supervised learning methods to drive insights.
- Work with advanced algorithms, including XGBoost, Ridge, Elastic Net, Neural Networks, and Causal Inference models.
- Collaborate with cross-functional teams to translate business challenges into data-driven solutions.
- Conduct model performance evaluation, tuning, and optimization.
- Leverage AI and ML techniques to support decision-making and business growth.
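To ground the modeling stack named above, here is a minimal NumPy sketch of ridge regression in closed form; the data is synthetic, and a real project would typically reach for scikit-learn or XGBoost instead:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w_small = ridge_fit(X, y, lam=0.1)    # light regularisation: close to true weights
w_big = ridge_fit(X, y, lam=1000.0)   # heavy regularisation: shrunk toward zero
print(w_small.round(2))
```

Elastic Net adds an L1 term on top of this L2 penalty, which has no closed form and is solved iteratively (e.g. by coordinate descent), but the shrinkage intuition is the same.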
Required Skills & Expertise
- Strong knowledge of statistical analysis and predictive analytics.
- Hands-on experience with machine learning algorithms and supervised learning methods.
- Expertise in XGBoost, Ridge, ElasticNet, Neural Networks, Causal Inference, and Optimization.
- Proficiency in programming languages such as Python/R and frameworks for ML/AI.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- 7–8 years of relevant industry experience in AI/ML-driven applications
About the Organization: A giant in IT. CTC: 25 LPA
Data Science
Posted today
Job Description
Primary Skills:
• Application Development: Python, Pytest framework, SQL (experience with Postgres preferred), Angular, JavaScript, and TypeScript
• Experience in microservices development and in building/operating scalable Kubernetes (K8s) clusters, containerization, virtualization, and cloud-based systems
• Experience developing AI applications with AI frameworks such as LangGraph and LangChain
• Problem Solving:
Strong analytical and troubleshooting abilities.
• Communication:
Clear written and verbal communication for documentation and stakeholder engagement.
Data Science
Posted today
Job Description
Data Science + Gen AI with the below mandatory skills.
Must Have: Agent Frameworks, RAG Frameworks, Chunking Strategies, LLMs, AI on Cloud Services, Open-Source Frameworks such as LangChain and LlamaIndex, Vector Databases, Token Management, Knowledge Graphs, Vision
Exp Range - 10 to 12 years
Requirements
Major Duties & Responsibilities
• Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
• Create Proofs of Concept (POCs) / Minimum Viable Products (MVPs), then guide them through to production deployment and operationalization
• Influence machine learning strategy for Digital programs and projects
• Make solution recommendations that appropriately balance speed to market and analytical soundness
• Explore design options to assess efficiency and impact; develop approaches to improve robustness and rigor
• Develop analytical/modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow)
• Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations
• Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories
• Create algorithms to extract information from large, multiparametric data sets
• Deploy algorithms to production to identify actionable insights from large databases
• Compare results from various methodologies and recommend optimal techniques
• Develop and embed automated processes for predictive model validation, deployment, and implementation
• Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
• Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
• Lead discussions at peer review and use interpersonal skills to positively influence decision making
• Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
• Facilitate cross-geography sharing of new ideas, learnings, and best practices
Required Qualifications
• Bachelor of Science or Bachelor of Engineering at a minimum.
• 10–12 years of work experience as a Data Scientist
• A combination of business focus, strong analytical and problem-solving skills, and programming knowledge, to be able to quickly cycle hypotheses through the discovery phase of a project
• Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
• Good hands-on skills in both feature engineering and hyperparameter optimization
• Experience producing high-quality code, tests, and documentation
• Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, Databricks
• Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
• Proficiency in statistical concepts and ML algorithms
• Good knowledge of Agile principles and processes
• Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
• Ability to share ideas in a compelling manner and to clearly summarize and communicate data analysis assumptions and results
• Self-motivated and a proactive problem solver who can work independently and in teams
Work with one of the Big 4's in India
Data Science & Machine Learning Engineer
Posted today
Job Description
Total Experience: 4 years and above
Location- Bangalore/Chennai/Hyderabad
Notice Period: 15–30 days max
JOB DESCRIPTION
Join our fast‑growing team to build a unified platform for data analytics, machine learning, and generative AI. You’ll integrate the AI/ML toolkit, real‑time streaming into a ClickHouse‑backed feature store, and dashboards—turning raw events into reliable features, insights, and user‑facing analytics at scale.
What you’ll do
- Design and build streaming data pipelines (exactly‑once or effectively‑once) from event sources into low‑latency feature serving and near‑real‑time (NRT) and OLAP queries.
- Develop an AI/ML toolkit: reusable libraries, SDKs, and CLIs for data ingestion, feature engineering, model training, evaluation, and deployment.
- Stand up and optimize a production feature store (schemas, SCD handling, point‑in‑time correctness, TTL/compaction, backfills).
- Expose features and analytics via well‑designed APIs/services; integrate with model serving and retrieval for ML/GenAI use cases.
- Build and operationalize Superset dashboards for monitoring data quality, pipeline health, feature drift, model performance, and business KPIs.
- Implement governance and reliability: data contracts, schema evolution, lineage, observability, alerting, and cost controls.
- Partner with UI/UX, data science, and backend teams to ship end‑to‑end workflows from data capture to real‑time inference and decisioning.
- Drive performance: benchmark and tune distributed DB (partitions, indexes, compression, merge settings), streaming frameworks, and query patterns.
- Automate with CI/CD, infrastructure‑as‑code, and reproducible environments for quick, safe releases.
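One subtle requirement in the list above is point‑in‑time correctness in the feature store: each training example must only see feature values that existed at its own timestamp. A minimal pandas sketch of such a join, with toy timestamps and feature values:

```python
import pandas as pd

# Feature values as they became available over time.
features = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01", "2024-01-10", "2024-01-20"]),
    "user": ["u1", "u1", "u1"],
    "avg_spend": [10.0, 12.0, 15.0],
})

# Training labels with their observation timestamps.
labels = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-05", "2024-01-15", "2024-01-25"]),
    "user": ["u1", "u1", "u1"],
    "label": [0, 1, 0],
})

# merge_asof picks, for each label, the latest feature value at or before
# the label's timestamp: no peeking at future data (a point-in-time join).
training = pd.merge_asof(
    labels.sort_values("ts"),
    features.sort_values("ts"),
    on="ts", by="user",
)
print(training[["ts", "avg_spend", "label"]])
```

A naive join on the latest feature value instead would leak future information into training, which is exactly the bug point‑in‑time correctness rules out; backfills re‑run this join after late‑arriving feature data lands.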
Tech you may use
Languages: Python, Java/Scala, SQL
Streaming/Compute: Kafka (or Pulsar), Spark, Flink, Beam
Storage/OLAP: ClickHouse (primary), object storage (S3/GCS), Parquet/Iceberg/Delta
Orchestration/Workflow: Airflow, dbt (for transformations), Makefiles/Poetry/pipenv
ML/MLOps: MLflow/Weights & Biases, KServe/Seldon, Feast/custom feature store patterns, vector stores (optional)
Dashboards/BI: Superset (plugins, theming), Grafana for ops
Platform: Kubernetes, Docker, Terraform, GitHub Actions/GitLab CI, Prometheus/OpenTelemetry
Cloud: AWS/GCP/Azure
What we’re looking for
- 4+ years building production data/ML or streaming systems with high TPS and large data volumes.
- Strong coding skills in Python and one of Java/Scala; solid SQL and data modeling.
- Hands‑on experience with Kafka (or similar), Spark/Flink, and OLAP stores—ideally ClickHouse.
- GenAI pipelines: retrieval‑augmented generation (RAG), embeddings, prompt/tooling workflows, model evaluation at scale.
- Proven experience designing feature pipelines with point‑in‑time correctness and backfills; understanding of online/offline consistency.
- Experience instrumenting Superset dashboards tied to ClickHouse for operational and product analytics.
- Fluency with CI/CD, containerization, Kubernetes, and infrastructure‑as‑code.
- Solid grasp of distributed systems and architecture fundamentals: partitioning, consistency, idempotency, retries, batching vs. streaming, and cost/perf trade‑offs.
- Excellent collaboration skills; ability to work cross‑functionally with DS/ML, product, and UI/UX.
- Ability to pass a CodeSignal prescreen coding test.
Grid Dynamics (Nasdaq:GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, India, Central and Eastern Europe.
To learn more about Grid Dynamics, please visit . Follow us on Facebook, Twitter, and LinkedIn.