595 Data Scientists jobs in Mumbai
Data Scientists
Posted today
Job Description
Key Responsibilities
AI/ML Development & Research
• Design, develop, and deploy advanced machine learning and deep learning models for complex business problems
• Implement and optimize Large Language Models (LLMs) and Generative AI solutions
• Build agentic AI systems with autonomous decision-making capabilities
• Conduct research on emerging AI technologies and their practical applications
• Perform model evaluation, validation, and continuous improvement
Cloud Infrastructure & Full-Stack Development
• Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP
• Develop full-stack applications integrating AI models with modern web technologies
• Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
• Implement CI/CD pipelines for ML model deployment and monitoring
• Design and optimize cloud infrastructure for high-performance computing workloads
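For illustration only, the sketch below shows one common way to integrate a trained model with a web application: exposing it through a FastAPI prediction endpoint that can be containerized and deployed via a CI/CD pipeline. The model file, endpoint path, and feature schema are assumptions, not specifics of this role.

```python
# Hypothetical sketch: serving a pre-trained model behind a FastAPI endpoint.
# "model.pkl", the /predict path, and the feature schema are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI()
model = joblib.load("model.pkl")  # assumed pre-trained scikit-learn model


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features):
    # scikit-learn expects a 2-D array: one row per sample
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```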
Data Engineering & Database Management
• Design and implement data pipelines for large-scale data processing
• Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
• Optimize database performance for ML workloads and real-time applications
• Implement data governance and quality assurance frameworks
• Handle streaming data processing and real-time analytics
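For illustration only, a minimal sketch of the streaming side of this work using kafka-python; the topic name, broker address, and event fields are assumptions. Real pipelines would typically aggregate, window, or persist the events rather than print them.

```python
# Hypothetical sketch: consuming an event stream for simple real-time checks.
# The topic, broker address, and event fields are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # assumed topic
    bootstrap_servers="localhost:9092",        # assumed broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # flag slow events; a production job would aggregate or window instead
    if event.get("latency_ms", 0) > 500:
        print("slow event:", event)
```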
Leadership & Collaboration
• Mentor junior data scientists and guide technical decision-making
• Collaborate with cross-functional teams including product, engineering, and business stakeholders
• Present findings and recommendations to technical and non-technical audiences
• Lead proof-of-concept projects and innovation initiatives
Required Qualifications
Education & Experience
• Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or related field
• 5+ years of hands-on experience in data science and machine learning
• 3+ years of experience with deep learning frameworks and neural networks
• 2+ years of experience with cloud platforms and full-stack development
Technical Skills - Core AI/ML
• Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms
• Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
• Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering
• Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
• Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents
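For illustration only, a minimal sketch of prompt-driven text generation with the Hugging Face transformers pipeline; the model choice ("gpt2") and the prompt are assumptions, and production LLM work would add fine-tuning, evaluation, and guardrails on top of this.

```python
# Hypothetical sketch: prompt-driven text generation with the transformers pipeline.
# The model ("gpt2") and the prompt are illustrative assumptions only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the key risks of deploying an unmonitored ML model:"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```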
Technical Skills - Development & Infrastructure
• Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
• Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
• Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
• Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
• MLOps: MLflow, Kubeflow, Model versioning, A/B testing frameworks
• Big Data: Spark, Hadoop, Kafka, streaming data processing
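For illustration only, a minimal MLOps-style sketch tying a few of the tools above together: training a scikit-learn model and tracking its parameters, metrics, and artifact with MLflow so it can be versioned and later deployed. The dataset, experiment name, and hyperparameters are assumptions.

```python
# Hypothetical sketch: MLflow experiment tracking around a scikit-learn model.
# Dataset, experiment name, and hyperparameters are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # assumed experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    # log the model as a versioned artifact for later registration/deployment
    mlflow.sklearn.log_model(model, "model")
```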
Preferred Qualifications
• Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma)
• Knowledge of LangChain, LlamaIndex, or similar LLM frameworks
• Experience with model compression and edge deployment
• Familiarity with distributed computing and parallel processing
• Experience with computer vision and NLP applications
• Knowledge of federated learning and privacy-preserving ML
• Experience with quantum machine learning
• Expertise in MLOps and production ML system design
Key Competencies
Technical Excellence
• Strong mathematical foundation in statistics, linear algebra, and optimization
• Ability to implement algorithms from research papers
• Experience with model interpretability and explainable AI
• Knowledge of ethical AI and bias detection/mitigation
Problem-Solving & Innovation
• Strong analytical and critical thinking skills
• Ability to translate business requirements into technical solutions
• Creative approach to solving complex, ambiguous problems
• Experience with rapid prototyping and experimentation
Communication & Leadership
• Excellent written and verbal communication skills
• Ability to explain complex technical concepts to diverse audiences
• Strong project management and organizational skills
• Experience mentoring and leading technical teams
How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication only with authorized recruiters of TaskUs.
DEI: At TaskUs, we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.
We invite you to explore all TaskUs career opportunities and apply through the provided URL.
Data Scientists (Data Governance Rule, LIMS)
Posted today
Job Description
**Location:** Navi Mumbai
**Experience:** 5 years
**Requirements**
- MSc in data science, applied mathematics, computer science, data engineering, or software engineering.
- Advanced skills and experience in experimental data analysis: statistical method validation, design of experiments, unsupervised learning, and statistical modeling.
- Good knowledge of physics and chemistry, and a marked interest in working on scientific topics.
- Proven ability to work with and, ideally, develop data engineering systems: automated data flows (ETL), data warehousing. Prior experience with IT system administration would be a plus.
- A strong interest in working on multidisciplinary teams and projects, and a proven ability to communicate clearly in international teams.
- Experience working with LIMS (Laboratory Information Management System) software.
- Knowledge of Python is a plus.
- Experience with a visual analytics tool such as Spotfire is required.
Work Location: In person
Big Data Developer
Posted 1 day ago
Job Description
Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent.
You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.
Key Responsibilities:
- Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
- Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
- Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
- Review code and provide feedback to junior engineers to maintain high quality and scalable solutions.
- Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
- Lead by example in object-oriented development, particularly using Scala and Java.
- Translate complex requirements into clear, actionable technical tasks for the team.
- Contribute to the development of ETL processes for integrating data from various sources.
- Document technical approaches, best practices, and workflows for knowledge sharing within the team.
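For illustration only, a minimal ETL step in the spirit of the responsibilities above, written in PySpark for brevity (the role itself emphasizes Scala and Java); the input path, column names, and Hive table are assumptions.

```python
# Hypothetical sketch: a small PySpark ETL step that reads raw JSON from HDFS,
# cleans it, and writes a partitioned Hive table. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()
         .getOrCreate())

raw = spark.read.json("hdfs:///data/raw/orders/")        # assumed input path

cleaned = (
    raw.filter(F.col("order_id").isNotNull())            # drop rows without a key
       .withColumn("order_date", F.to_date("order_ts"))  # assumed timestamp column
       .dropDuplicates(["order_id"])
)

(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("analytics.orders"))                 # assumed Hive table
```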
Required Skills and Qualifications:
- 8+ years of professional experience in Big Data development and engineering.
- Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
- Solid object-oriented development experience with Scala and Java.
- Strong SQL skills with experience working with large data sets.
- Practical experience designing, installing, configuring, and supporting Big Data clusters.
- Deep understanding of ETL processes and data integration strategies.
- Proven experience mentoring or supporting junior engineers in a team setting.
- Strong problem-solving, troubleshooting, and analytical skills.
- Excellent communication and interpersonal skills.
Preferred Qualifications:
- Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
- Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
- Exposure to Agile or DevOps practices in Big Data project environments.
What We Offer:
- Opportunity to work on challenging, high-impact Big Data projects.
- Leadership role in shaping and mentoring the next generation of engineers.
- Supportive and collaborative team culture.
- Flexible working environment.
- Competitive compensation and professional growth opportunities.
Big Data Developer
Posted today
Job Description
Role Highlights:
Position: Big Data Engineer
Experience: 4+ years
Location: Remote (All India) or Hybrid (Hyderabad)
Notice Period: Immediate to 7 days (mandatory)
Job Overview:
Must-have skills: Big Data, Scala, AWS, and Python or Java
Big Data Developer
Posted today
Job Description
Experience: 5 to 9 years
Must-have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferred)
- Kafka/Kinesis/any streaming service
- Object-Oriented Programming
- Hive; ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
- Data modeling experience
Good-to-Have Skills:
- Git or a similar version control tool
- Knowledge of CI/CD and microservices
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming.
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience with Kotlin/Scala/Java, with excellent programming logic and technique.
- Strong grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
- Good experience with SQL.
- Manage a team of Associates and Senior Associates and ensure utilization is maintained across the project.
- Mentor new members as they onboard to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience is preferred.
- Experience analyzing, re-architecting, and re-platforming on-premises data warehouses to cloud data platforms (AWS preferred).
- Lead client calls to flag delays, blockers, and escalations, and to collate requirements.
- Manage project timelines, client expectations, and deadlines.
- Prior experience in project and team management roles.
- Facilitate meetings within the team on a regular basis.
- Understand business requirements, analyze different approaches, and plan deliverables and milestones for the project.
- Optimize, maintain, and support pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
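For illustration only, a minimal Spark Structured Streaming sketch in the spirit of the Spark Streaming and Kafka items above, written in PySpark for brevity (the role lists Kotlin/Scala/Java); the broker, topic, schema, and checkpoint path are assumptions.

```python
# Hypothetical sketch: Spark Structured Streaming reading JSON events from Kafka
# and maintaining a running aggregate. Broker, topic, schema, and paths are assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

schema = StructType([
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
         .option("subscribe", "events")                        # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

totals = events.groupBy("event_type").agg(F.sum("amount").alias("total_amount"))

query = (totals.writeStream
               .outputMode("complete")
               .format("console")                              # simple sink for the sketch
               .option("checkpointLocation", "/tmp/checkpoints/events")
               .start())
query.awaitTermination()
```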