407 Data Scientist jobs in Delhi
Data Scientist
Posted 1 day ago
Job Description
Role: Data Scientist, Delhi (Gurugram), India - Salary Band P2
Fiddlehead Technology is a Canadian leader in advanced analytics and AI-driven solutions, helping global companies unlock value from their data. We specialize in applying machine learning, predictive forecasting, and Generative AI to solve complex business problems and empower smarter decision-making.
Our culture thrives on innovation, collaboration, and continuous learning. We invest in our people by offering structured opportunities for professional development, a healthy work-life balance, and exposure to cutting-edge AI/ML projects across industries. At Fiddlehead, employees are encouraged to explore, create, and grow while contributing to high-impact solutions.
Fiddlehead Technology is a data science company with over 10 years of experience helping consumer-packaged goods (CPG) companies harness the power of machine learning and AI. We transform data into actionable insights, building predictive models that drive efficiency, growth, and competitive advantage. With increasing demand for our solutions, we’re expanding our global team.
We are seeking Data Scientists to collaborate with our team based in Canada in developing advanced forecasting models and optimization algorithms for leading CPG manufacturers and service providers. In this role, you’ll monitor model performance in production, addressing challenges like data drift and concept drift, while delivering data-driven insights that shape business decisions.
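Drift monitoring of the kind described here is often approximated with a two-sample test comparing a training-time feature distribution against live traffic. A minimal, illustrative sketch follows; the threshold, data, and function name are hypothetical, not Fiddlehead's actual method:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_sample, live_sample, alpha=0.05):
    """Flag data drift when a two-sample Kolmogorov-Smirnov test rejects
    the hypothesis that both samples share one distribution."""
    _stat, p_value = ks_2samp(train_sample, live_sample)
    return bool(p_value < alpha)

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature
shifted = rng.normal(loc=0.8, scale=1.0, size=5000)   # simulated drifted live feature

print(detect_drift(baseline, shifted))  # the shifted sample triggers the alarm
```

In production this check would run per feature on a schedule, with concept drift tracked separately via live-label metrics rather than input distributions.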
What You’ll Bring
• Education and/or professional experience in data science and forecasting
• Proficiency with forecasting tools and libraries, ideally in Python
• Knowledge of machine learning and statistical concepts
• Strong analytical and problem-solving abilities
• Ability to communicate complex findings to non-technical stakeholders
• High attention to detail and data accuracy
• Degree (Bachelor’s, Master’s, or PhD) in Statistics, Data Science, Computer Science, Engineering, or a related field
At Fiddlehead, you’ll work on meaningful projects that advance predictive forecasting and sustainability in the CPG industry. We offer a collaborative, inclusive, and supportive environment that prioritizes professional development, work-life balance, and continuous learning. Our team members enjoy dedicated time to expand their skills while contributing to innovative solutions with real-world impact.
We carefully review every application and are committed to providing a response. Candidates selected will be invited to an in-person or virtual interview. To ensure equal access, we provide accommodations during the recruitment process for applicants with disabilities. If you require accommodations, please reach out to our team through the contact page on our website. At Fiddlehead, we are dedicated to fostering an inclusive and accessible environment where every employee and customer is respected, valued, and supported. We welcome applications from women, Indigenous peoples, persons with disabilities, ethnic and visible minorities, members of the LGBT+ community, and others who can help enrich the diversity of our workforce.
We offer a competitive compensation package with performance-based incentives and opportunities to contribute to impactful projects. Employees benefit from mentorship, training, and active participation in AI communities, all within a collaborative culture that values innovation, creativity, and professional growth.
Data Scientist
Posted 1 day ago
Job Description
Role Overview: Data Scientist
Location: Remote/ Indore/ Mumbai/ Chennai/ Gurugram
Experience: Min 5 Years
Work Mode: Remote
Notice Period: Max. 30 days (45 days if currently serving notice)
Interview Process: 2 Rounds
Interview Mode: Virtual Face-to-Face
Interview Timeline: 1 Week
Industry: Must be from a BPO/KPO/Shared Services or Healthcare organization.
Key Responsibilities:
- AI/ML Development & Research
- Design, develop, and deploy advanced machine learning and deep learning models to solve complex business problems.
- Implement and optimize Large Language Models (LLMs) and Generative AI solutions for real-world applications.
- Build agent-based AI systems with autonomous decision-making capabilities.
- Conduct cutting-edge research on emerging AI technologies and explore their practical applications.
- Perform model evaluation, validation, and continuous optimization to ensure high performance.
- Cloud Infrastructure & Full-Stack Development:
- Architect and implement scalable, cloud-native ML/AI solutions using AWS, Azure, or GCP.
- Develop full-stack applications that seamlessly integrate AI models with modern web technologies.
- Build and maintain robust ML pipelines using cloud services (e.g., SageMaker, ML Engine).
- Implement CI/CD pipelines to streamline ML model deployment and monitoring processes.
- Design and optimize cloud infrastructure to support high-performance computing workloads.
- Data Engineering & Database Management
- Design and implement data pipelines to enable large-scale data processing and real-time analytics.
- Work with both SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra) to manage structured and unstructured data.
- Optimize database performance to support machine learning workloads and real-time applications.
- Implement robust data governance frameworks and ensure data quality assurance practices.
- Manage and process streaming data to enable real-time decision-making.
- Leadership & Collaboration
- Mentor junior data scientists and assist in technical decision-making to drive innovation.
- Collaborate with cross-functional teams, including product, engineering, and business stakeholders, to develop solutions that align with organizational goals.
- Present findings and insights to both technical and non-technical audiences in a clear and actionable manner.
- Lead proof-of-concept projects and innovation initiatives to push the boundaries of AI/ML applications.
Required Qualifications:
- Education & Experience
- Master’s or PhD in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- 5+ years of hands-on experience in data science and machine learning, with a focus on real-world applications.
- 3+ years of experience working with deep learning frameworks and neural networks.
- 2+ years of experience with cloud platforms and full-stack development.
- Technical Skills - Core AI/ML
- Machine Learning: Proficient in Scikit-learn, XGBoost, LightGBM, and advanced ML algorithms.
- Deep Learning: Expertise in TensorFlow, PyTorch, Keras, CNNs, RNNs, LSTMs, and Transformers.
- Large Language Models: Experience with GPT, BERT, T5, fine-tuning, and prompt engineering.
- Generative AI: Hands-on experience with Stable Diffusion, DALL-E, text-to-image, and text generation models.
- Agentic AI: Knowledge of multi-agent systems, reinforcement learning, and autonomous agents.
- Technical Skills - Development & Infrastructure
- Programming: Expertise in Python, with proficiency in R, Java/Scala, JavaScript/TypeScript.
- Cloud Platforms: Proficient with AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI.
- Databases: Proficiency with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB).
- Full-Stack Development: Experience with React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes.
- MLOps: Experience with MLflow, Kubeflow, model versioning, and A/B testing frameworks.
- Big Data: Expertise in Spark, Hadoop, Kafka, and streaming data processing.
Non-Negotiables:
- Cloud Infrastructure - ML/AI solutions on AWS, Azure, or GCP
- Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
- Implement CI/CD pipelines for ML model deployment and monitoring
- Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
- Machine Learning: Scikit-learn
- Deep Learning: TensorFlow
- Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
- Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda)
- Vector databases and embeddings (Pinecone, Weaviate, Chroma)
- Knowledge of LangChain, LlamaIndex, or similar LLM frameworks.
- Industry: Must be a BPO or Healthcare organization.
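The vector-database requirement above reduces, at its core, to nearest-neighbour search over embedding vectors. A library-free sketch of that operation (toy 2-D vectors and hypothetical names, not tied to Pinecone, Weaviate, or Chroma):

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k most cosine-similar document vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                      # cosine similarity per document
    return np.argsort(-sims)[:k]      # highest-similarity indices first

docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(top_k(np.array([1.0, 0.05]), docs))
```

Dedicated vector databases add approximate-nearest-neighbour indexing so this lookup stays fast at millions of documents; frameworks like LangChain wrap the same retrieval step inside an LLM pipeline.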
Data Scientist
Posted today
Job Description
Modeling & ML Frameworks: Python, scikit-learn, PyTorch, TensorFlow, spanning classical ML, deep learning, and transformer-based architectures; includes modern ensemble methods (XGBoost, LightGBM) for large-scale structured modeling.
Applied Domains: Ranking, Recommendation, Dynamic Pricing, Forecasting, Supply–Demand Optimization, Semantic Search, NLP/NLU, Generative Content Systems
Data & Compute: Databricks, PySpark, AWS (S3, Glue, EMR, Athena), ScyllaDB, MongoDB, Redis
Experimentation & Optimization: MLflow, Airflow, SageMaker, Bayesian Optimization, Bandit/Sequential Experimentation
LLMs & GenAI: Claude, OpenAI GPT-4, SLMs, LangChain, Cursor IDE, RAG Pipelines, Embedding Models, Vector Search (FAISS / Pinecone)
Observability: Grafana, Prometheus, Data Quality Monitors, Custom Model Dashboards
We’re in the early stages of building a Data Science & AI team — the learning curve,
innovation velocity, and ownership opportunities are immense. You’ll help define the foundation
for experimentation, production ML pipelines, and GenAI innovation from the ground up.
Role: Senior Data Scientist (AI & Data)
Location: Remote (Work from Home)
We’re hiring a Senior Data Scientist to build the next generation of intelligent
decision systems that power pricing, supply optimization, ranking, and personalization
in our global B2B hotel marketplace.
This is a high-impact role at the intersection of machine learning, optimization, and
product engineering, where you’ll leverage deep statistical modeling and modern ML
techniques to make real-time decisions at scale.
You’ll collaborate closely with Product, Engineering, and Data Platform teams to
operationalize data science models that directly improve revenue, conversion, and
marketplace efficiency.
You’ll own the full lifecycle of ML models—from experimentation and training to
deployment, monitoring, and continuous retraining to ensure performance at scale.
Responsibilities
● Design and implement ML models for dynamic pricing, availability prediction,
and real-time hotel demand optimization.
● Develop and maintain data pipelines and feature stores supporting
large-scale model training and inference.
● Leverage Bayesian inference, causal modeling, and reinforcement learning
(bandits / sequential decision systems) to drive adaptive decision platforms.
● Build ranking / recommendation systems for personalization, relevance, and
supply visibility.
● Use LLMs (Claude, GPT-4, SLMs) for:
○ Contract parsing, metadata extraction, and mapping resolution
○ Semantic search and retrieval-augmented generation (RAG)
○ Conversational systems for CRS, rate insights, and partner
communication
○ Automated summarization and content enrichment
● Operationalize ML + LLM pipelines on Databricks / AWS for training, inference,
and monitoring.
● Deploy and monitor models in production with strong observability, tracing,
and SLO ownership.
● Run A/B experiments and causal validation to measure real business impact.
● Collaborate cross-functionally with engineering, data platform, and product
teams to translate research into scalable production systems.
● Your models will directly influence GMV growth, conversion rates, and partner
revenue yield across the global marketplace.
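The bandit/sequential experimentation mentioned in the responsibilities is often implemented as Thompson sampling over Bernoulli rewards. A toy sketch with hypothetical conversion rates, not the employer's actual system:

```python
import random

def thompson_choice(successes, failures):
    """Pick the arm whose Beta-posterior sample is highest."""
    samples = [random.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    return max(range(len(samples)), key=lambda i: samples[i])

random.seed(0)
true_rates = [0.05, 0.12]            # hypothetical conversion rate per pricing arm
wins, losses = [0, 0], [0, 0]
for _ in range(5000):
    arm = thompson_choice(wins, losses)      # explore/exploit in one draw
    if random.random() < true_rates[arm]:    # simulate a booking
        wins[arm] += 1
    else:
        losses[arm] += 1
print(wins, losses)
```

Unlike a fixed-split A/B test, the posterior sampling shifts traffic toward the better-performing arm as evidence accumulates, which is what makes it suitable for real-time pricing decisions.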
Requirements
● 5–9 years of hands-on experience in Applied ML / Data Science.
● Strong proficiency in Python, PySpark, and SQL.
● Experience developing models for ranking, pricing, recommendation, or
forecasting at scale.
● Hands-on with PyTorch or TensorFlow for real-world ML or DL use cases.
● Strong grasp of probabilistic modeling, Bayesian methods, and causal
inference.
● Practical experience integrating LLM/GenAI workflows (LangChain, RAG,
embeddings, Claude, GPT, SLMs) into production.
● Experience with Databricks, Spark, or SageMaker for distributed training and
deployment.
● Familiar with experiment platforms, MLflow, and model observability best
practices.
● Strong business understanding and ability to communicate model impact to
product stakeholders.
Nice to Have
● Background in travel-tech, marketplace, or pricing/revenue optimization
domains.
● Experience in retrieval, semantic search, or content-based information
retrieval.
● Familiarity with small language model (SLM) optimization for cost-efficient
inference.
● Prior work on RL/bandit-driven decision systems or personalization engines.
● Experience designing AI-assisted developer workflows using tools like Cursor,
Claude, or Code Interpreter.
Data Scientist
Posted today
Job Description
What You’ll Do
As a Data Scientist, you’ll wear multiple hats and play a key role in advancing our AI-powered platform:
- Predictive Model Developer – Design, develop, and deploy predictive models to power skill assessment and job matching. Continuously evaluate and improve performance using appropriate ML metrics.
- Data Modelling Expert – Build and maintain robust data models and schemas for large-scale datasets. Collaborate with data engineers to ensure consistency, scalability, and accessibility.
- Embedding Specialist – Develop embeddings for skills, jobs, and user profiles to drive recommendations and similarity analysis. Experiment with models and integrate into search/recommendation pipelines.
- Model Trainer – Train, validate, and fine-tune ML models on diverse datasets. Address imbalanced data, mitigate bias, and monitor production performance.
- Statistical Analyst – Apply advanced statistical techniques (e.g., K-Means, Cosine Similarity) for insights, anomaly detection, and product development. Deliver findings via dashboards, reports, and visualizations.
- GCP BigData Implementer – Build scalable pipelines with BigQuery, Dataflow, Dataproc, and optimize workflows for cost and performance.
- Multi-Cloud Integrator – Manage deployments across multi-cloud environments, ensuring interoperability, security, and compliance.
- Python Programming Expert – Write clean, testable Python code using scikit-learn, TensorFlow, PyTorch and contribute to internal data science libraries.
- Algorithm Optimizer – Enhance performance of ML pipelines, reduce bottlenecks, and improve training speed and accuracy.
- Insight Communicator – Present results to technical and non-technical stakeholders, influencing decisions and driving product improvements.
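As a toy illustration of the K-Means technique named above, here is a scikit-learn sketch clustering a few hypothetical 2-D profile embeddings (the data is invented for the example):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D skill-profile embeddings: two junior-like, two senior-like
profiles = np.array([[0.10, 0.20],
                     [0.15, 0.22],
                     [0.90, 0.85],
                     [0.88, 0.90]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
print(km.labels_)  # the first two and last two profiles land in separate clusters
```

In a real pipeline the inputs would be high-dimensional embeddings, and the cluster count would be chosen via silhouette or elbow analysis rather than fixed at 2.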
What We’re Looking For
- 3–5 years of hands-on experience as a Data Scientist with a strong focus on machine learning, predictive modeling, and data analytics.
- Advanced proficiency in Python and core libraries (scikit-learn, TensorFlow, PyTorch, Pandas, NumPy).
- Experience with GCP BigData tools such as BigQuery, Dataflow, and Dataproc.
- Knowledge of multi-cloud environments and deployment strategies.
- Proven expertise in statistical analysis techniques (e.g., K-Means, Cosine Similarity).
- Experience with data modeling and database design principles.
- Strong problem-solving skills and ability to work independently.
- Excellent communication and presentation skills.
- Bachelor’s or Master’s in Computer Science, Statistics, Data Science, or related field.
- A passion for using data to solve real-world problems and make a positive impact.
Why Join Us?
- 100% Remote Role – flexibility to work from anywhere in India.
- Opportunity to work on high-impact AI projects in skill assessment and job matching.
- Exposure to state-of-the-art technologies in ML, embeddings, and BigData.
- Competitive compensation aligned with 3–5 years of experience.
- A collaborative, innovative culture that values continuous learning and growth.
Apply now and help us transform the way skills and jobs connect through AI!
Send your updated resume and cover letter to contact@entrustechinc.com.
Data Scientist
Posted today
Job Description
Location: Remote (India)
Job Type: Full-time
Experience: 5+ Years
Job Summary:
We are looking for a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. This role focuses on building and deploying advanced predictive models to influence key business decisions. The ideal candidate should have strong experience in machine learning, data engineering, and working in cloud environments, particularly with AWS.
You'll be collaborating closely with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This is a fantastic opportunity to work on impactful AI/ML solutions within a dynamic and innovative team.
Key Responsibilities:
Predictive Modeling & Machine Learning
• Develop and deploy machine learning models for forecasting, optimization, and predictive analytics.
• Use tools such as AWS SageMaker, Bedrock, LLMs, TensorFlow, and PyTorch for model training and deployment.
• Perform model validation, tuning, and performance monitoring.
• Deliver actionable insights from complex datasets to support strategic decision-making.
Data Engineering & Cloud Computing
• Design scalable and secure ETL pipelines using AWS Glue.
• Manage and optimize data infrastructure in the AWS environment.
• Ensure high data integrity and availability across the pipeline.
• Integrate AWS services to support the end-to-end machine learning lifecycle.
Python Programming
• Write efficient, reusable Python code for data processing and model development.
• Work with libraries like pandas, scikit-learn, TensorFlow, and PyTorch.
• Maintain documentation and ensure best coding practices.
Collaboration & Communication
• Work with engineering, analytics, and business teams to understand and solve business challenges.
• Present complex models and insights to both technical and non-technical stakeholders.
• Participate in sprint planning, stand-ups, and reviews in an Agile setup.
Preferred Experience (Nice to Have):
• Experience with applications in the utility industry (e.g., demand forecasting, asset optimization).
• Exposure to Generative AI technologies.
• Familiarity with geospatial data and GIS tools for predictive analytics.
Qualifications:
• Master’s or Ph.D. in Computer Science, Statistics, Mathematics, or a related field.
• 5+ years of relevant experience in data science, predictive modeling, and machine learning.
• Experience working in cloud-based data science environments (AWS preferred).
Data Scientist
Posted today
Job Description
Location: India, Remote
About Lingaro:
Lingaro Group is the end-to-end data services partner to global brands and enterprises. We lead our clients through their data journey, from strategy through development to operations and adoption, helping them to realize the full value of their data.
Since 2008, Lingaro has been recognized by clients and global research and advisory firms for innovation, technology excellence, and the consistent delivery of highest-quality data services. Our commitment to data excellence has created an environment that attracts the brightest global data talent to our team.
About DS/AI Competency Center:
The center focuses on leveraging data, analytics, and artificial intelligence (AI) technologies to extract insights, build predictive models, and develop AI-powered solutions. It applies exploratory data analysis, statistical modeling and machine learning, model deployment and integration, and model monitoring and maintenance, and delivers business solutions using multiple AI techniques and tools.
Must have:
- At least 4 years of professional experience in a Data Scientist or similar role; 7+ years of experience in technology consultancy is a big plus.
- Strong understanding of machine learning concepts and theory, with a focus on Generative AI (e.g., LLMs/LMMs).
- Experience in production-ready Generative AI solutions, including LLMs, chatbots, AI agents, and RAG mechanisms.
- Proven experience in designing and deploying Agent-based AI systems, such as: autonomous agents, task orchestration, or multi-agent systems.
- Knowledge of Python, SQL, and GenAI frameworks (e.g., LangChain, LangGraph, vector databases).
- Knowledge of Google Cloud Platform (GCP), including its AI/ML services (e.g., Agentspace, Vertex AI, BigQuery ML) and cloud-based deployment practices.
- Experience with cloud-based development environments (e.g., Azure, Databricks) is a plus.
- Very good understanding of consulting role requirements and demonstrated behaviors and skills expected of technology consultant.
- Ability to communicate complex technical concepts to non-technical stakeholders.
- Collaborative mindset and ability to work effectively in cross-functional teams.
Tasks:
- Lead end-to-end AI/ML initiatives, including:
- Understanding business objectives and translating them into AI-driven solutions.
- Preparing and analyzing data, building and fine-tuning models, evaluating their performance, and deploying them into production.
- Design, implement, and enhance Agent-based AI solutions (e.g., autonomous agents, multi-agent systems) to solve complex business problems.
- Conduct business requirement gathering and transform these into actionable technical plans, including data processing, feature engineering, hypothesis testing, and model deployment.
- Analyze and interpret the results of AI/ML models, draw actionable conclusions, and provide recommendations, including expected benefits and ROI measurements.
- Collaborate with cross-functional teams and stakeholders to support decision-making with data-driven insights.
- Support pre-sales activities by designing AI/ML solutions and contributing to technical discussions and proposals.
Why join us:
- Stable employment. On the market since 2008, 1300+ talents currently on board in 7 global sites.
- 100% remote.
- Flexibility regarding working hours.
- Full-time position
- Comprehensive online onboarding program with a “Buddy” from day 1.
- Cooperation with top-tier engineers and experts.
- Unlimited access to the Udemy learning platform from day 1.
- Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.
Data Scientist
Posted today
Job Description
- Apache PySpark is a primary/mandatory skill. If the candidate lacks strong proficiency in Python, they must have hands-on experience with deep learning frameworks such as TensorFlow or PyTorch.
- Advanced SQL skills for data manipulation and analysis are a must.
- Solid understanding of machine learning algorithms, model evaluation, and deployment is a must.
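In practice, "advanced SQL" for analysis typically means window functions and partitioned aggregates. A small self-contained sketch using Python's bundled SQLite driver (the table and values are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount REAL);
INSERT INTO sales VALUES ('N', 10), ('N', 30), ('S', 20);
""")

# Per-region totals and rankings without collapsing the rows
rows = conn.execute("""
SELECT region, amount,
       SUM(amount) OVER (PARTITION BY region)                    AS region_total,
       RANK()      OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
FROM sales
ORDER BY region, rnk
""").fetchall()
print(rows)
```

The same PARTITION BY / ORDER BY pattern carries over directly to Spark SQL and PySpark's `Window` API.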
Data Scientist
Posted today
Job Description
The Data Scientist supports the development and implementation of data models, focusing on Machine Learning, under the supervision of more experienced scientists, contributing to the team’s innovative projects.
Job Description:
- Assist in the development of Machine Learning models and algorithms, contributing to the design and implementation of data-driven solutions.
- Perform data preprocessing, cleaning, and analysis, preparing datasets for modeling and supporting higher-level data science initiatives.
- Learn from and contribute to projects involving Deep Learning and Generative AI, gaining hands-on experience under the guidance of senior data scientists.
- Engage in continuous professional development, enhancing skills in Python, Machine Learning, and related areas through training and practical experience.
- Collaborate with team members to ensure the effective implementation of data science solutions, participating in brainstorming sessions and project discussions.
- Support the documentation of methodologies and results, ensuring transparency and reproducibility of data science processes.
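The preprocessing and cleaning step described above can be sketched in a few lines of pandas (the dataset is hypothetical): drop duplicates, then impute missing values with per-column medians.

```python
import numpy as np
import pandas as pd

# Hypothetical raw dataset with a duplicate row and missing values
raw = pd.DataFrame({
    "age":    [25, np.nan, 25, 40],
    "income": [50_000, 62_000, 50_000, np.nan],
})

clean = (raw.drop_duplicates()                      # remove the repeated row
            .assign(age=lambda d: d.age.fillna(d.age.median()),
                    income=lambda d: d.income.fillna(d.income.median())))
print(clean)
```

Median imputation is only one reasonable default; in practice the choice (median, model-based, or dropping rows) depends on why the values are missing.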
Qualifications:
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field, with a strong interest in Machine Learning, Deep Learning, and AI.
- Experience in a data science role, with demonstrated practical work and strong Python programming skills.
- Exposure to Business Intelligence (BI) & Data Engineering concepts and tools.
- Familiarity with data platforms such as Dataiku is a bonus.
Skills:
- Solid understanding of Machine Learning principles and practical experience in Python programming.
- Familiarity with data science and machine learning libraries in Python (e.g., scikit-learn, pandas, NumPy).
- Eagerness to learn Deep Learning and Generative AI technologies, with a proactive approach to acquiring new knowledge and skills.
- Strong analytical and problem-solving abilities, capable of tackling data-related challenges and deriving meaningful insights.
- Basic industry domain knowledge, with a willingness to deepen expertise and apply data science principles to solve real-world problems.
- Effective communication skills, with the ability to work collaboratively in a team environment and contribute to discussions.
v4c.ai is an equal opportunity employer. We value diversity and are committed to creating an inclusive environment for all employees, regardless of race, color, religion, gender, sexual orientation, national origin, age, disability, or veteran status.
We believe in the power of diversity and strive to foster a culture where every team member feels valued and respected. We encourage applications from individuals of all backgrounds and experiences.
If you are passionate about diversity and innovation and thrive in a collaborative environment, we invite you to apply and join our team.