156 Data Scientist jobs in Kochi
Data Scientist
Posted 1 day ago
Job Description
Job Summary:
We are seeking a highly skilled and analytical Data Scientist with hands-on experience in designing, developing, and deploying data-driven solutions. The ideal candidate will have strong expertise in data analysis, machine learning, and cloud-based model deployment, preferably on Google Cloud Platform (GCP). This role involves working closely with cross-functional teams to translate data into actionable insights that enhance business decisions, optimize user experiences, and drive measurable outcomes across web and mobile applications, including React-based environments.
Key Responsibilities:
1. Data Analysis and Modelling:
- Analyse large-scale datasets from web and mobile applications (including React-based systems) to identify trends, patterns, and actionable insights.
- Design and develop machine learning, predictive, and statistical models to solve complex business challenges such as churn prediction and customer segmentation.
- Validate and refine model performance using suitable evaluation metrics to ensure accuracy, reliability, and business relevance.
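For illustration, the churn-prediction and model-validation work described above might look like the following minimal scikit-learn sketch. All data here is synthetic, and the feature names and metric choices are assumptions for the example, not a prescribed approach:

```python
# Minimal churn-prediction sketch: train a classifier and validate it
# with standard evaluation metrics. All data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))   # e.g. usage, tenure, spend, support tickets
# Churn is more likely when "usage" (column 0) is low.
y = (X[:, 0] + rng.normal(scale=0.5, size=n) < -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

proba = model.predict_proba(X_te)[:, 1]
print(f"ROC-AUC: {roc_auc_score(y_te, proba):.3f}")
print(f"F1:      {f1_score(y_te, model.predict(X_te)):.3f}")
```

In practice the choice of metric (ROC-AUC, F1, precision at a business-relevant threshold) follows from the cost of false positives versus false negatives for the use case.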
2. Data Collection and Integration:
- Collect and preprocess data from multiple sources, including Firebase services (Firestore, Firebase Analytics), ensuring data quality and consistency.
- Integrate disparate data sources into unified datasets compatible with APIs and application backends on cloud platforms.
- Perform data cleaning, transformation, and feature engineering for modelling and visualization readiness.
3. Insight Generation and Reporting:
- Derive actionable insights to support business and product strategies, such as enhancing user experience in React-based applications.
- Develop and maintain data visualizations and interactive dashboards using cloud-compatible tools.
- Present findings and recommendations through clear reports to both technical and non-technical stakeholders.
4. Collaboration and Communication:
- Collaborate with developers, product managers, and UX teams to align data science initiatives with business goals.
- Communicate project progress, analysis outcomes, and model performance through regular stakeholder updates.
- Work with engineering teams to deploy models seamlessly into production environments (e.g., Firebase-hosted systems).
5. Model Deployment and Monitoring:
- Deploy and operationalize machine learning models in production using Google Cloud Platform (GCP) or other public clouds.
- Continuously monitor model performance, addressing issues like data drift and optimizing scalability and efficiency.
- Implement and maintain model tracking, logging, and monitoring frameworks in cloud environments.
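One common way to detect the data drift mentioned above is the Population Stability Index (PSI), which compares a feature's serving-time distribution against its training-time baseline. A minimal sketch follows; the bin count and the 0.1 / 0.25 thresholds are a widely used rule of thumb, not a universal standard:

```python
# Sketch of a Population Stability Index (PSI) check for data drift.
import numpy as np

def psi(baseline, current, bins=10):
    # Bin edges come from the baseline distribution (quantiles).
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(1)
train_feature = rng.normal(0.0, 1.0, 10_000)
stable = rng.normal(0.0, 1.0, 10_000)    # same distribution
shifted = rng.normal(0.8, 1.0, 10_000)   # mean shift, i.e. drift

print(f"PSI (stable):  {psi(train_feature, stable):.3f}")
print(f"PSI (shifted): {psi(train_feature, shifted):.3f}")
```

A check like this would typically run on a schedule inside the cloud monitoring framework, alerting when PSI crosses the chosen threshold.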
Required Skills:
- 4+ years of professional experience as a Data Scientist or in a similar analytical role.
- Proven hands-on experience with at least one public cloud platform (strong preference for Google Cloud Platform).
- Strong proficiency in Python and its data science libraries (Pandas, NumPy, Scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with Firebase, BigQuery, and Vertex AI is highly desirable.
- Expertise in data visualization tools (e.g., Looker Studio, Tableau, or Power BI).
- Strong understanding of data modelling, ETL processes, and API integrations in cloud-based systems.
- Excellent communication skills and ability to collaborate with cross-functional teams.
Qualifications:
- Strong analytical and problem-solving skills.
- Desire and ability to rapidly learn a wide variety of new technical skills.
- Self-motivated, takes initiative, assumes ownership.
- Enthusiastic, professional, with a focus on customer success.
- Passion for solving client challenges and commitment to client delight.
Data Scientist
Posted 1 day ago
Job Description
Tech Stack
Modeling & ML Frameworks: Python, scikit-learn, PyTorch, TensorFlow — spanning classical ML, deep learning, and transformer-based architectures. Includes modern ensemble methods (XGBoost, LightGBM) for large-scale structured modeling.
Applied Domains: Ranking, Recommendation, Dynamic Pricing, Forecasting, Supply–Demand Optimization, Semantic Search, NLP/NLU, Generative Content Systems
Data & Compute: Databricks, PySpark, AWS (S3, Glue, EMR, Athena), ScyllaDB, MongoDB, Redis
Experimentation & Optimization: MLflow, Airflow, SageMaker, Bayesian Optimization, Bandit/Sequential Experimentation
LLMs & GenAI: Claude, OpenAI GPT-4, SLMs, LangChain, Cursor IDE, RAG Pipelines, Embedding Models, Vector Search (FAISS / Pinecone)
Observability: Grafana, Prometheus, Data Quality Monitors, Custom Model Dashboards
We’re in the early stages of building a Data Science & AI team — the learning curve, innovation velocity, and ownership opportunities are immense. You’ll help define the foundation for experimentation, production ML pipelines, and GenAI innovation from the ground up.
Role: Senior Data Scientist (AI & Data)
Location: Remote (Work from Home)
We’re hiring a Senior Data Scientist to build the next generation of intelligent decision systems that power pricing, supply optimization, ranking, and personalization in our global B2B hotel marketplace.
This is a high-impact role at the intersection of machine learning, optimization, and product engineering, where you’ll leverage deep statistical modeling and modern ML techniques to make real-time decisions at scale.
You’ll collaborate closely with Product, Engineering, and Data Platform teams to operationalize data science models that directly improve revenue, conversion, and marketplace efficiency.
You’ll own the full lifecycle of ML models—from experimentation and training to deployment, monitoring, and continuous retraining to ensure performance at scale.
Responsibilities
● Design and implement ML models for dynamic pricing, availability prediction, and real-time hotel demand optimization.
● Develop and maintain data pipelines and feature stores supporting large-scale model training and inference.
● Leverage Bayesian inference, causal modeling, and reinforcement learning (bandits / sequential decision systems) to drive adaptive decision platforms.
● Build ranking / recommendation systems for personalization, relevance, and supply visibility.
● Use LLMs (Claude, GPT-4, SLMs) for:
○ Contract parsing, metadata extraction, and mapping resolution
○ Semantic search and retrieval-augmented generation (RAG)
○ Conversational systems for CRS, rate insights, and partner communication
○ Automated summarization and content enrichment
● Operationalize ML + LLM pipelines on Databricks / AWS for training, inference, and monitoring.
● Deploy and monitor models in production with strong observability, tracing, and SLO ownership.
● Run A/B experiments and causal validation to measure real business impact.
● Collaborate cross-functionally with engineering, data platform, and product teams to translate research into scalable production systems.
● Your models will directly influence GMV growth, conversion rates, and partner revenue yield across the global marketplace.
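The bandit / sequential decision systems named in the responsibilities above can be illustrated with a minimal epsilon-greedy sketch. The "arms" here stand in for, say, ranking variants, and the reward probabilities are synthetic assumptions for the example:

```python
# Minimal epsilon-greedy bandit sketch: repeatedly choose among
# variants, mostly exploiting the best observed arm while still
# exploring. Reward (click) probabilities are synthetic.
import random

random.seed(42)
true_ctr = [0.02, 0.05, 0.11]   # hidden click-through rate per arm
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]        # running mean reward per arm
epsilon = 0.1

for _ in range(20_000):
    if random.random() < epsilon:
        arm = random.randrange(3)                     # explore
    else:
        arm = max(range(3), key=lambda a: values[a])  # exploit
    reward = 1.0 if random.random() < true_ctr[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print(f"pulls per arm: {counts}, estimated values: {values}")
```

Unlike a fixed A/B split, the bandit shifts traffic toward the better arm while the experiment runs, which is the appeal for revenue-sensitive decisions like pricing and ranking.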
Requirements
● 5–9 years of hands-on experience in Applied ML / Data Science.
● Strong proficiency in Python, PySpark, and SQL.
● Experience developing models for ranking, pricing, recommendation, or forecasting at scale.
● Hands-on with PyTorch or TensorFlow for real-world ML or DL use cases.
● Strong grasp of probabilistic modeling, Bayesian methods, and causal inference.
● Practical experience integrating LLM/GenAI workflows (LangChain, RAG, embeddings, Claude, GPT, SLMs) into production.
● Experience with Databricks, Spark, or SageMaker for distributed training and deployment.
● Familiar with experiment platforms, MLflow, and model observability best practices.
● Strong business understanding and ability to communicate model impact to product stakeholders.
Nice to Have
● Background in travel-tech, marketplace, or pricing/revenue optimization domains.
● Experience in retrieval, semantic search, or content-based information retrieval.
● Familiarity with small language model (SLM) optimization for cost-efficient inference.
● Prior work on RL/bandit-driven decision systems or personalization engines.
● Experience designing AI-assisted developer workflows using tools like Cursor, Claude, or Code Interpreter.
Data Scientist
Posted 1 day ago
Job Description
Role Overview: Data Scientist
Location: Remote / Indore / Mumbai / Chennai / Gurugram
Experience: Min 5 Years
Work Mode: Remote
Notice Period: Max. 30 Days (45 for Notice Serving)
Interview Process: 2 Rounds
Interview Mode: Virtual Face-to-Face
Interview Timeline: 1 Week
Industry: Must be from a BPO/KPO/Shared Services or Healthcare Org.
Key Responsibilities:
- AI/ML Development & Research
- Design, develop, and deploy advanced machine learning and deep learning models to solve complex business problems.
- Implement and optimize Large Language Models (LLMs) and Generative AI solutions for real-world applications.
- Build agent-based AI systems with autonomous decision-making capabilities.
- Conduct cutting-edge research on emerging AI technologies and explore their practical applications.
- Perform model evaluation, validation, and continuous optimization to ensure high performance.
- Cloud Infrastructure & Full-Stack Development:
- Architect and implement scalable, cloud-native ML/AI solutions using AWS, Azure, or GCP.
- Develop full-stack applications that seamlessly integrate AI models with modern web technologies.
- Build and maintain robust ML pipelines using cloud services (e.g., SageMaker, ML Engine).
- Implement CI/CD pipelines to streamline ML model deployment and monitoring processes.
- Design and optimize cloud infrastructure to support high-performance computing workloads.
- Data Engineering & Database Management
- Design and implement data pipelines to enable large-scale data processing and real-time analytics.
- Work with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data.
- Optimize database performance to support machine learning workloads and real-time applications.
- Implement robust data governance frameworks and ensure data quality assurance practices.
- Manage and process streaming data to enable real-time decision-making.
- Leadership & Collaboration
- Mentor junior data scientists and assist in technical decision-making to drive innovation.
- Collaborate with cross-functional teams, including product, engineering, and business stakeholders, to develop solutions that align with organizational goals.
- Present findings and insights to both technical and non-technical audiences in a clear and actionable manner.
- Lead proof-of-concept projects and innovation initiatives to push the boundaries of AI/ML applications.
Required Qualifications:
- Education & Experience
- Master’s or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- 5+ years of hands-on experience in data science and machine learning, with a focus on real-world applications.
- 3+ years of experience working with deep learning frameworks and neural networks.
- 2+ years of experience with cloud platforms and full-stack development.
- Technical Skills - Core AI/ML
- Machine Learning: Proficient in Scikit-learn, XGBoost, LightGBM, and advanced ML algorithms.
- Deep Learning: Expertise in TensorFlow, PyTorch, Keras, CNNs, RNNs, LSTMs, and Transformers.
- Large Language Models: Experience with GPT, BERT, T5, fine-tuning, and prompt engineering.
- Generative AI: Hands-on experience with Stable Diffusion, DALL-E, text-to-image, and text generation models.
- Agentic AI: Knowledge of multi-agent systems, reinforcement learning, and autonomous agents.
- Technical Skills - Development & Infrastructure
- Programming: Expertise in Python, with proficiency in R, Java/Scala, JavaScript/TypeScript.
- Cloud Platforms: Proficient with AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI.
- Databases: Proficiency with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB).
- Full-Stack Development: Experience with React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes.
- MLOps: Experience with MLflow, Kubeflow, model versioning, and A/B testing frameworks.
- Big Data: Expertise in Spark, Hadoop, Kafka, and streaming data processing.
Non Negotiables:
- Cloud Infrastructure - ML/AI solutions on AWS, Azure, or GCP
- Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
- Implement CI/CD pipelines for ML model deployment and monitoring
- Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
- Machine Learning: Scikit-learn
- Deep Learning: TensorFlow
- Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
- Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda)
- Vector databases and embeddings (Pinecone, Weaviate, Chroma)
- Knowledge of LangChain, LlamaIndex, or similar LLM frameworks.
- Industry: Must be a BPO or Healthcare Org.
Data Scientist
Posted 1 day ago
Job Description
Overview:
The Data Scientist supports the development and implementation of data models, with a focus on Machine Learning, working under the supervision of more experienced scientists and contributing to the team’s innovative projects.
Job Description:
- Assist in the development of Machine Learning models and algorithms, contributing to the design and implementation of data-driven solutions.
- Perform data preprocessing, cleaning, and analysis, preparing datasets for modeling and supporting higher-level data science initiatives.
- Learn from and contribute to projects involving Deep Learning and General AI, gaining hands-on experience under the guidance of senior data scientists.
- Engage in continuous professional development, enhancing skills in Python, Machine Learning, and related areas through training and practical experience.
- Collaborate with team members to ensure the effective implementation of data science solutions, participating in brainstorming sessions and project discussions.
- Support the documentation of methodologies and results, ensuring transparency and reproducibility of data science processes.
Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, with a strong interest in Machine Learning, Deep Learning, and AI.
- Experience in a data science role, demonstrating practical experience and strong Python programming skills.
- Exposure to Business Intelligence (BI) & Data Engineering concepts and tools.
- Familiarity with data platforms such as Dataiku is a bonus.
Skills:
- Solid understanding of Machine Learning principles and practical experience in Python programming.
- Familiarity with data science and machine learning libraries in Python (e.g., scikit-learn, pandas, NumPy).
- Eagerness to learn Deep Learning and General AI technologies, with a proactive approach to acquiring new knowledge and skills.
- Strong analytical and problem-solving abilities, capable of tackling data-related challenges and deriving meaningful insights.
- Basic industry domain knowledge, with a willingness to deepen expertise and apply data science principles to solve real-world problems.
- Effective communication skills, with the ability to work collaboratively in a team environment and contribute to discussions.
v4c.ai is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees, regardless of race, color, religion, gender, sexual orientation, national origin, age, disability, or veteran status.
We believe in the power of diversity and strive to foster a culture where every team member feels valued and respected. We encourage applications from individuals of all backgrounds and experiences.
If you are passionate about diversity and innovation and thrive in a collaborative environment, we invite you to apply and join our team.
Data Scientist
Posted 1 day ago
Job Description
Are you a Data Scientist who thrives on building AI-driven solutions that impact the real world? We’re looking for a highly skilled and motivated professional to join our team and help shape the future of skill assessment, job matching, and intelligent recommendations through cutting-edge machine learning.
As a Data Scientist, you’ll wear multiple hats and play a key role in advancing our AI-powered platform:
- Predictive Model Developer – Design, develop, and deploy predictive models to power skill assessment and job matching. Continuously evaluate and improve performance using appropriate ML metrics.
- Data Modelling Expert – Build and maintain robust data models and schemas for large-scale datasets. Collaborate with data engineers to ensure consistency, scalability, and accessibility.
- Embedding Specialist – Develop embeddings for skills, jobs, and user profiles to drive recommendations and similarity analysis. Experiment with models and integrate into search/recommendation pipelines.
- Model Trainer – Train, validate, and fine-tune ML models on diverse datasets. Address imbalanced data, mitigate bias, and monitor production performance.
- Statistical Analyst – Apply advanced statistical techniques (e.g., K-Means, Cosine Similarity) for insights, anomaly detection, and product development. Deliver findings via dashboards, reports, and visualizations.
- GCP BigData Implementer – Build scalable pipelines with BigQuery, Dataflow, and Dataproc, and optimize workflows for cost and performance.
- Multi-Cloud Integrator – Manage deployments across multi-cloud environments, ensuring interoperability, security, and compliance.
- Python Programming Expert – Write clean, testable Python code using scikit-learn, TensorFlow, and PyTorch, and contribute to internal data science libraries.
- Algorithm Optimizer – Enhance performance of ML pipelines, reduce bottlenecks, and improve training speed and accuracy.
- Insight Communicator – Present results to technical and non-technical stakeholders, influencing decisions and driving product improvements.
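The embedding and similarity work described above can be sketched in a few lines. The vectors below are hand-written toy stand-ins for learned skill/job embeddings, and the dimension labels are assumptions for the example:

```python
# Toy sketch of embedding-based matching: represent jobs and a
# candidate as vectors and rank jobs by cosine similarity. Real
# systems would use learned embeddings, not hand-written ones.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dim "skill space": [python, ml, cloud, frontend]
jobs = {
    "data_scientist": np.array([0.9, 0.9, 0.5, 0.1]),
    "ml_engineer":    np.array([0.8, 0.7, 0.9, 0.2]),
    "frontend_dev":   np.array([0.3, 0.1, 0.2, 0.9]),
}
candidate = np.array([0.9, 0.8, 0.4, 0.1])

ranked = sorted(jobs, key=lambda j: cosine_similarity(candidate, jobs[j]),
                reverse=True)
print(ranked)
```

The same similarity computation underlies K-Means assignment steps and nearest-neighbor retrieval in recommendation pipelines, just at much larger scale and with approximate indexes.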
- 3–5 years of hands-on experience as a Data Scientist with a strong focus on machine learning, predictive modeling, and data analytics.
- Advanced proficiency in Python and core libraries (scikit-learn, TensorFlow, PyTorch, Pandas, NumPy).
- Experience with GCP BigData tools such as BigQuery, Dataflow, and Dataproc.
- Knowledge of multi-cloud environments and deployment strategies.
- Proven expertise in statistical analysis techniques (e.g., K-Means, Cosine Similarity).
- Experience with data modeling and database design principles.
- Strong problem-solving skills and ability to work independently.
- Excellent communication and presentation skills.
- Bachelor’s or Master’s in Computer Science, Statistics, Data Science, or related field.
- A passion for using data to solve real-world problems and make a positive impact.
- 100% Remote Role – flexibility to work from anywhere in India.
- Opportunity to work on high-impact AI projects in skill assessment and job matching.
- Exposure to state-of-the-art technologies in ML, embeddings, and BigData.
- Competitive compensation aligned with 3–5 years of experience.
- A collaborative, innovative culture that values continuous learning and growth.
Apply now and help us transform the way skills and jobs connect through AI!
Send your updated resume and cover letter to contact at entrustechinc dot com.
Data Scientist
Posted 1 day ago
Job Description
Job Title: Senior Data Scientist (Remote – India) – Predictive Modeling & Machine Learning
Location: Remote (India)
Job Type: Full-time
Experience: 5+ Years
Job Summary:
We are looking for a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. This role focuses on building and deploying advanced predictive models to influence key business decisions. The ideal candidate should have strong experience in machine learning, data engineering, and working in cloud environments, particularly with AWS.
You'll be collaborating closely with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This is a fantastic opportunity to work on impactful AI/ML solutions within a dynamic and innovative team.
Key Responsibilities:
Predictive Modeling & Machine Learning
• Develop and deploy machine learning models for forecasting, optimization, and predictive analytics.
• Use tools such as AWS SageMaker, Bedrock, LLMs, TensorFlow, and PyTorch for model training and deployment.
• Perform model validation, tuning, and performance monitoring.
• Deliver actionable insights from complex datasets to support strategic decision-making.
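A minimal forecasting sketch along the lines of the responsibilities above, using scikit-learn on synthetic data. The lag-feature construction and model choice are illustrative assumptions, not the team's actual approach:

```python
# Minimal forecasting sketch: predict the next value of a series from
# its recent lags with a linear model. The series is synthetic
# (trend + weekly seasonality + noise).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
t = np.arange(400)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 0.3, 400)

# Each row holds the previous 7 observations; the target is the next one.
n_lags = 7
X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
y = series[n_lags:]

split = 300  # train on the first 300 points, evaluate on the rest
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"holdout MAE: {mean_absolute_error(y[split:], pred):.3f}")
```

Production systems would layer proper backtesting, exogenous features, and retraining on top; the point here is only the shape of the supervised framing.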
Data Engineering & Cloud Computing
• Design scalable and secure ETL pipelines using AWS Glue.
• Manage and optimize data infrastructure in the AWS environment.
• Ensure high data integrity and availability across the pipeline.
• Integrate AWS services to support the end-to-end machine learning lifecycle.
Python Programming
• Write efficient, reusable Python code for data processing and model development.
• Work with libraries like pandas, scikit-learn, TensorFlow, and PyTorch.
• Maintain documentation and ensure best coding practices.
Collaboration & Communication
• Work with engineering, analytics, and business teams to understand and solve business challenges.
• Present complex models and insights to both technical and non-technical stakeholders.
• Participate in sprint planning, stand-ups, and reviews in an Agile setup.
Preferred Experience (Nice to Have):
• Experience with applications in the utility industry (e.g., demand forecasting, asset optimization).
• Exposure to Generative AI technologies.
• Familiarity with geospatial data and GIS tools for predictive analytics.
Qualifications:
• Master’s or Ph.D. in Computer Science, Statistics, Mathematics, or a related field.
• 5+ years of relevant experience in data science, predictive modeling, and machine learning.
• Experience working in cloud-based data science environments (AWS preferred).