278 Computer Science jobs in Nashik
Data Science Intern
Posted 6 days ago
Job Viewed
Job Description
NLP Data Science Intern
Did you notice a shortage of food at supermarkets during COVID-19? Have you heard about the recent issues in the global shipping industry? Or perhaps you've heard about the shortages of microchips? These problems are called supply chain disruptions. They have been increasing in frequency and severity. Supply chain disruptions are threatening our very way of life.
Our vision is to advance society's capacity to withstand shocks and stresses. Kavida.ai believes the only way to ensure security is through supply chain resiliency. We are on a mission to help companies proactively manage supply chain disruption risks using integrated data.
Our Story
In March 2020, over 35 academics, data scientists, students, and software engineering volunteers came together to address the food shortage issues caused by the pandemic - Covid19foodsupply.com. A core team of 9 was formed and spun off into a startup, and the rest is history.
Our investors include one of the world's largest supply chain quality & compliance monitoring companies, a £1.25bn apparel manufacturer, and some very impressive angel investors.
Social Impact:
Social impact is in our DNA. We believe private sector innovation is the only way to address social problems at scale. If we achieve our mission, humanity will always have access to its essential goods for sustenance. No more shortages of food, PPE, medicine, etc.
Our Culture:
Idea Meritocracy:
The best ideas win. We only care about what is right, not who is right. We know arriving at the best answer requires constructive tension. Sometimes it can get heated but it's never personal. Everyone contributes to better ideas knowing they will be heard but also challenged.
Drivers Not Passengers:
We think as owners who drive the bus, not as passengers. We are self-starters and never wait for instructions. We are hungry for autonomy, trust, and responsibility. Everyone is a leader because we know leadership is a trait, not a title. Leaders drive growth and navigate the chaos.
We Figure Out The Answers:
We trust our ability to figure stuff out. We do not need all the information to start answering the question. We can connect the dots and answer difficult questions with logic.
Customer & Mission Obsessed:
Our customers are our heroes and we are obsessed with helping them. We are obsessed with understanding their supply chains better, resolving their biggest headaches, and advancing their competitiveness.
Learning and Growth:
We all take personal responsibility for becoming smarter, wiser, more skilled, and happier. We are obsessed with learning about our industry and improving our own skills. We are obsessed with our personal growth.
Job Description:
As a member of our Research team, you will be responsible for researching, developing, and coding agents using state-of-the-art LLMs with automated pipelines.
- Write code for the development of our ML engines and microservices pipelines.
- Use, optimize, train, and evaluate state-of-the-art GPT models.
- Research and develop agentic pipelines using LLMs.
- Research and develop RAG-based pipelines using vector databases.
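As a rough illustration of the last responsibility, the core retrieval step of a RAG pipeline can be sketched as below. The bag-of-words "embedding" and the in-memory document list are hypothetical stand-ins for a real embedding model and a vector database (e.g. FAISS or Pinecone):

```python
# Toy sketch of RAG retrieval: "embed" documents, store the vectors,
# and return the nearest document for a query by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Hypothetical embedding: a simple bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list) -> str:
    """Return the document most similar to the query."""
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

docs = [
    "shipping delays disrupt the supply chain",
    "microchip shortages affect manufacturers",
]
print(retrieve("which supplier has shipping delays", docs))
```

In a production pipeline the retrieved passages would then be injected into the LLM prompt; only the retrieval half is shown here.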
Essential Requirements:
- Prompt engineering and agentic LLM frameworks such as LangChain or LlamaIndex.
- A good understanding of vectors/tensors and RAG pipelines.
- Knowledge of building NLP systems using transfer learning, or building custom NLP systems from scratch using TensorFlow or PyTorch.
- In-depth knowledge of DSA, async, Python, and containers.
- Knowledge of transformers and NLP techniques is essential, and deployment experience is a significant advantage.
Salary Range: ₹15,000 - ₹25,000
We are offering a full-time internship position to final-year students. The internship will last for an initial period of 6-12 months before converting to a full-time job, depending on suitability for both parties. If the applicant is a student who needs to return to university, they can continue with the program on a part-time basis.
Data Science Intern
Posted today
Job Viewed
Job Description
About the Company
ZeTheta Algorithms Private Limited is a recently established FinTech start-up developing innovative AI tools.
About the Role
As a Data Scientist intern, you will work on cutting-edge projects involving financial data analysis, investment research, and risk modelling. You will have the opportunity to engage in multiple mini-projects or take up a focused innovation-based research project. The project experience is designed to provide practical exposure to data science in the context of asset management, trading, and financial technology. We provide problem statements and methodology; after you submit your solution, we share a sample solution, which you can use to revise and expand your submission based on its suggestions. Alternatively, you can develop your own research-based data science solution.
Responsibilities
- Conduct data cleaning, wrangling, and pre-processing for financial datasets.
- Assist investment teams in equity research, fixed income research, portfolio management, and economic analysis.
- Apply statistical techniques to financial problems such as credit risk modelling, probability of default, and value-at-risk estimation.
- Work with big data sources including financial reports, macroeconomic datasets, and alternative investment data.
- Use any one of Python, Excel, or R to analyse, visualize, and model financial data.
- Participate in research projects related to quantitative trading, financial derivatives, and portfolio optimization.
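As one concrete example of the risk-modelling work listed above, a historical value-at-risk (VaR) estimate is simply a quantile of the loss distribution. The return series below is made up for illustration:

```python
# Historical value-at-risk: the loss threshold that daily returns
# exceed only (1 - confidence) of the time.

def historical_var(returns, confidence=0.95):
    """Loss at the `confidence` quantile of the loss distribution."""
    losses = sorted(-r for r in returns)       # losses as positive numbers
    idx = int(confidence * len(losses))        # index of the quantile
    idx = min(idx, len(losses) - 1)            # clamp for short samples
    return losses[idx]

# Ten hypothetical daily returns (fractions, not percent).
returns = [0.01, -0.02, 0.005, -0.03, 0.015, -0.01, 0.02, -0.005, 0.0, -0.015]
print(f"95% one-day VaR: {historical_var(returns):.3f}")
```

Real estimation would use a much longer return history (and often parametric or Monte Carlo methods), but the quantile idea is the same.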
Who Should Apply?
- Any student, even without coding skills, can upskill through self-learning to develop data science solutions. Basic knowledge of Excel, Python, or R scripting can help complete the projects quicker. We permit the use of LLMs to help develop the solutions.
- Strong problem-solving and analytical skills.
- Able to self-learn and work independently in a remote, flexible environment.
Internship Details
- Duration: option of 1, 2, 3, 4, or 6 months
- Timing: Self-paced.
- Type: Unpaid
Advanced Data Science Professional
Posted today
Job Viewed
Job Description
We are looking for a talented Machine Learning Researcher to join our team.
Job Description:
- Participate in daily live coding sessions focusing on various aspects of Machine Learning.
Key Responsibilities:
- Explore and apply mathematical methods relevant to machine learning algorithms.
- Gain hands-on experience and practical knowledge in the dynamic field of Machine Learning.
Requirements:
- Prior exposure to PyTorch and Python is essential.
- Demonstrated experience with starter projects in machine learning.
- Strong interest in delving into the mathematical foundations of machine learning.
What We Offer:
- Enhance your skills through daily coding sessions and mentorship.
QA Analyst – Data Science
Posted today
Job Viewed
Job Description
**We are currently hiring for a senior-level position and are looking for immediate joiners only.
If you are interested, please send your updated resume to along with details of your CTC, ECTC and notice period **
Location: Remote
Employment Type: Full-time
About the Role
The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through
feature engineering checks, model training/evaluation verification, batch prediction/optimization
validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data
profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct
data, models, and outputs at production scale and cadence.
Key Responsibilities
● Test Strategy & Governance
○ Define an ML-specific Test Strategy covering data quality KPIs, feature consistency
checks, model acceptance gates (metrics + guardrails), and E2E run acceptance
(timeliness, completeness, integrity).
○ Establish versioned test datasets & golden baselines for repeatable regression of
features, models, and optimizers.
● Data Quality & Transformation
○ Validate raw data extracts and landed datalake data: schema/contract checks,
null/outlier thresholds, time-window completeness, duplicate detection, site/material
coverage.
○ Validate transformed/feature datasets: deterministic feature generation, leakage
detection, drift vs. historical distributions, feature parity across runs (hash or statistical
similarity tests).
○ Implement automated data quality checks (e.g., Great Expectations/pytest +
Pandas/SQL) executed in CI and AML pipelines.
● Model Training & Evaluation
○ Verify training inputs (splits, windowing, target leakage prevention) and
hyperparameter configs per site/cluster.
○ Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability
tests) with acceptance thresholds and champion/challenger logic.
○ Validate feature importance stability and sensitivity/elasticity sanity checks (price-volume monotonicity where applicable).
○ Gate model registration/promotion in AML based on signed test artifacts and
reproducible metrics.
● Predictions, Optimization & Guardrails
○ Validate batch predictions: result shapes, coverage, latency, and failure handling.
○ Test model optimization outputs and enforced guardrails: detect violations and prove
idempotent writes to DB.
○ Verify API push to third party system (idempotency keys, retry/backoff, delivery
receipts).
● Pipelines & E2E
○ Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly,
prediction/optimization) including orchestrated synthetic runs and fault injection
(missing slice, late competitor data, SB backlog).
○ Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend; assert
freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).
● Automation & Tooling
○ Develop Python-based automated tests (pytest) for data checks, model metrics, and API
contracts; integrate with Azure DevOps (pipelines, badges, gates).
○ Implement data-driven test runners (parameterized by site/material/model-version)
and store signed test artifacts alongside models in AML Registry.
○ Create synthetic test data generators and golden fixtures to cover edge cases (price
gaps, competitor shocks, cold starts).
● Reporting & Quality Ops
○ Publish weekly test reports and go/no-go recommendations for promotions; maintain a
defect taxonomy (data vs. model vs. serving vs. optimization).
○ Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data
drift) used for release gates.
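The automated data-quality gates described under Key Responsibilities (schema/contract checks, null thresholds, duplicate detection) might look like this minimal pytest-style sketch; the dataset, column names, and thresholds are hypothetical, and a production harness would run Great Expectations or pandas checks against real landed data:

```python
# Pytest-style data-quality checks over a toy landed dataset.

ROWS = [
    {"site": "A", "material": "M1", "price": 10.5},
    {"site": "A", "material": "M2", "price": 12.0},
    {"site": "B", "material": "M1", "price": None},
]

REQUIRED_COLUMNS = {"site", "material", "price"}
MAX_NULL_FRACTION = 0.4  # hypothetical acceptance threshold

def test_schema_contract():
    # Every row must carry exactly the contracted columns.
    for row in ROWS:
        assert set(row) == REQUIRED_COLUMNS

def test_null_threshold():
    # Null fraction in `price` must stay under the agreed threshold.
    nulls = sum(1 for row in ROWS if row["price"] is None)
    assert nulls / len(ROWS) <= MAX_NULL_FRACTION

def test_no_duplicates():
    # (site, material) pairs must be unique within a landing window.
    keys = [(row["site"], row["material"]) for row in ROWS]
    assert len(keys) == len(set(keys))
```

Wired into an Azure DevOps pipeline, failures in checks like these would block promotion of the run.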
Required Qualifications
○ 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
○ Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure
DevOps) for fully automated ML QA.
○ Strong grasp of ML validation: leakage checks, proper splits, metric selection
(MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
○ Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
○ API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest), plus idempotency and retry patterns.
○ Familiar with feature stores/feature engineering concepts and reproducibility.
○ Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.
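For reference, the acceptance metrics named in the requirements (MAE, MAPE, RMSE) are simple to state as code, which makes the thresholds in a promotion gate unambiguous; the values below are illustrative only:

```python
# Plain-Python definitions of the regression metrics used as model
# acceptance gates: MAE, MAPE, and RMSE.
import math

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [100.0, 200.0, 300.0]
predicted = [110.0, 190.0, 310.0]
print(mae(actual, predicted))   # 10.0
print(rmse(actual, predicted))  # 10.0
```

In practice these would come from scikit-learn and be compared against versioned thresholds and the champion model's scores.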
Education
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
• Certification in Azure Data or ML Engineer Associate is a plus.
Data Science Intern (Remote)
Posted 21 days ago
Job Viewed
Job Description
Job title: Data Science Intern- Remote
Report to: Data Science Manager in Pune
Job Responsibilities
- Solve 1D time-series (continuous glucose monitoring), RNA sequencing, and computer vision (mainly medical images) problems
- Solve challenging problems using scalable 1D signal processing, machine learning, and deep learning approaches.
- In charge of developing state-of-the-art machine learning/deep learning algorithms for medical datasets
- Communicate highly technical results and methods concisely and clearly
- Collaborate with researchers in Japan to understand requirements as well as to request data.
Requirements
- Master/Ph.D. in relevant field from tier-1 colleges
- 1-2 years of experience with Python or C++, and with Pandas
- 1-2 years of experience working with a deep learning framework, i.e. PyTorch or TensorFlow
- Well acquainted with classical time-series problems and algorithms, NLP, computer vision, etc.
- Demonstrated experience with machine learning/deep learning models.
- Candidates should be able to read and implement research papers from top conferences.
- Develop IP (patents) and publish papers.
- Proficiency in Windows, Linux, Docker, PPT, and Git commands is required.
Preferred Skills
- Experience working with time-series, text, and sequential datasets in real world settings.
- Proven track record of research or industry experience on Time series problems, NLP, tabular datasets.
- Well acquainted with machine learning libraries such as pandas, scikit-learn etc.
- Experience programming in Azure or GCP or other cloud service.
- Publications in top-tier conferences will be a plus.
Location
This is a remote internship. If your performance is strong, we will consider converting your role to a full-time position after six months.
Data Science Lead, Clinical Intelligence
Posted 8 days ago
Job Viewed
Job Description
Job Description: Data Science Lead, Clinical Intelligence
Company Overview
Anervea is a pioneering AI transformation tech company delivering AI-powered SaaS solutions for the US pharma industry. Our therapy-agnostic clinical intelligence platform leverages real-world evidence (RWE) to predict patient outcomes and personalize treatments, empowering clients to optimize clinical trials and accelerate drug development.
Job Title
Data Science Lead, Clinical Intelligence
Location
Pune, India (hybrid office/remote) or fully remote within India.
Job Summary
We are seeking an experienced Data Science Lead to spearhead data operations for our clinical intelligence SaaS platform. You will lead data pipeline development, integrate computational biology insights, and ensure compliance with US pharma regulations, driving AI-powered predictions for patient outcomes across therapies (e.g., oncology, diabetes). This role is perfect for a leader with expertise in computational biology, clinical research, and scalable data systems.
Key Responsibilities
- Build and manage end-to-end data pipelines for ingesting and analyzing de-identified RWE, preclinical data, and public datasets (e.g., TCGA, ChEMBL) using Python, pandas, and cloud tools.
- Ensure data quality, privacy, and compliance with HIPAA, FDA (21 CFR Part 11), and GDPR, focusing on de-identification and bias mitigation.
- Lead integration of computational biology (e.g., genomics, AlphaFold protein modeling) into AI models for therapy-agnostic outcome predictions.
- Collaborate with AI teams to develop predictive models (e.g., XGBoost, PyTorch) for clinical trials and personalized medicine.
- Optimize data operations for scalability and cost-efficiency, handling large, diverse health datasets.
- Oversee cross-functional teams (remote/hybrid) to troubleshoot issues, audit data, and deliver client-ready insights.
- Stay ahead of US pharma trends (e.g., RWE, precision medicine) to enhance platform capabilities.
Qualifications and Requirements
- Master’s or PhD in Computational Biology, Bioinformatics, Data Science, or related field.
- 4+ years in data science or operations in US pharma/biotech, with expertise in clinical research (e.g., trials, RWE).
- Deep knowledge of computational biology (e.g., genomics, RDKit/AlphaFold for drug-protein interactions).
- Proficiency in Python, SQL, ETL tools (e.g., Airflow), and big data frameworks (e.g., Spark).
- Familiarity with US pharma regulations (HIPAA, FDA) and clinical trial processes (Phase 1-3).
- Experience with AI/ML for health data (e.g., scikit-learn, PyTorch).
- Based in India; open to remote or hybrid work in Pune.
- Strong leadership and communication skills for global client collaboration.
Preferred Skills
- Experience with cloud platforms (e.g., AWS, Azure) for secure data processing.
- Knowledge of SaaS platforms and API integrations for client data.
- Background in oncology or precision medicine (e.g., breast cancer outcome predictions).
- Expertise in mitigating data biases for fair AI predictions.
What We Offer
- Highly competitive salary based on experience, with performance bonuses.
- Flexible remote/hybrid work, health benefits, and learning opportunities.
- Leadership role in cutting-edge US pharma AI innovation, with travel opportunities in the US.
- Collaborative, global team environment.