147 Data Scientists jobs in Gurugram
Data Scientists
Posted today
Job Description
Key Responsibilities
AI/ML Development & Research
• Design, develop, and deploy advanced machine learning and deep learning models for complex business problems
• Implement and optimize Large Language Models (LLMs) and Generative AI solutions
• Build agentic AI systems with autonomous decision-making capabilities
• Conduct research on emerging AI technologies and their practical applications
• Perform model evaluation, validation, and continuous improvement
Cloud Infrastructure & Full-Stack Development
• Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP
• Develop full-stack applications integrating AI models with modern web technologies
• Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
• Implement CI/CD pipelines for ML model deployment and monitoring
• Design and optimize cloud infrastructure for high-performance computing workloads
Data Engineering & Database Management
• Design and implement data pipelines for large-scale data processing
• Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
• Optimize database performance for ML workloads and real-time applications
• Implement data governance and quality assurance frameworks
• Handle streaming data processing and real-time analytics
Leadership & Collaboration
• Mentor junior data scientists and guide technical decision-making
• Collaborate with cross-functional teams including product, engineering, and business stakeholders
• Present findings and recommendations to technical and non-technical audiences
• Lead proof-of-concept projects and innovation initiatives
Required Qualifications
Education & Experience
• Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or related field
• 5+ years of hands-on experience in data science and machine learning
• 3+ years of experience with deep learning frameworks and neural networks
• 2+ years of experience with cloud platforms and full-stack development
Technical Skills - Core AI/ML
• Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms
• Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
• Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering (a minimal prompting sketch follows this list)
• Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
• Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents
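The prompt-engineering item above is easiest to see in code. Below is a minimal, illustrative sketch using the Hugging Face `transformers` pipeline; the library and the small `gpt2` model are assumptions chosen so the example runs locally, not a prescribed stack.

```python
# Minimal prompt-driven text generation sketch (illustrative only).
# Assumes the `transformers` library is installed; "gpt2" is a small
# placeholder model chosen so the example runs on a laptop.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Summarize the key risks of deploying an unvalidated ML model "
    "to production in two sentences:"
)
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```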
Technical Skills - Development & Infrastructure
• Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
• Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
• Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
• Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
• MLOps: MLflow, Kubeflow, model versioning, A/B testing frameworks (a minimal MLflow tracking sketch follows this list)
• Big Data: Spark, Hadoop, Kafka, streaming data processing
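To make the MLOps item above concrete, here is a minimal sketch of experiment tracking and model versioning with MLflow; the run name, hyperparameters, and synthetic dataset are illustrative assumptions, not a prescribed workflow.

```python
# Hypothetical MLflow tracking sketch: train a small scikit-learn model,
# then log parameters, a metric, and the model artifact to a local run.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)                      # hyperparameters
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")       # versioned model artifact
```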
Preferred Qualifications
• Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma); a toy similarity-search sketch follows this list
• Knowledge of LangChain, LlamaIndex, or similar LLM frameworks
• Experience with model compression and edge deployment
• Familiarity with distributed computing and parallel processing
• Experience with computer vision and NLP applications
• Knowledge of federated learning and privacy-preserving ML
• Experience with quantum machine learning
• Expertise in MLOps and production ML system design
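As a toy illustration of the vector-database item at the top of this list, the sketch below ranks documents by cosine similarity over pre-computed embeddings using NumPy only; the 4-dimensional vectors are invented for the example, and a real system would obtain them from an embedding model and store them in Pinecone, Weaviate, or Chroma.

```python
# Toy vector-search sketch: cosine similarity over made-up embeddings.
import numpy as np

doc_embeddings = {
    "refund policy": np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping times": np.array([0.1, 0.8, 0.3, 0.0]),
    "warranty terms": np.array([0.7, 0.2, 0.1, 0.4]),
}
query = np.array([0.85, 0.15, 0.05, 0.3])  # embedding of the user question

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(doc_embeddings.items(),
                key=lambda kv: cosine(query, kv[1]), reverse=True)
print("Most similar document:", ranked[0][0])
```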
Key Competencies
Technical Excellence
• Strong mathematical foundation in statistics, linear algebra, and optimization
• Ability to implement algorithms from research papers
• Experience with model interpretability and explainable AI
• Knowledge of ethical AI and bias detection/mitigation
Problem-Solving & Innovation
• Strong analytical and critical thinking skills
• Ability to translate business requirements into technical solutions
• Creative approach to solving complex, ambiguous problems
• Experience with rapid prototyping and experimentation
Communication & Leadership
• Excellent written and verbal communication skills
• Ability to explain complex technical concepts to diverse audiences
• Strong project management and organizational skills
• Experience mentoring and leading technical teams
How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.
DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.
We invite you to explore all TaskUs career opportunities and apply through the provided URL.
Lead Consultant-Data Scientists with AI and Generative Model experience!
Posted today
Job Description
Ready to shape the future of work?
At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges.
If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Lead Consultant-Data Scientists with AI and Generative Model experience!
We are currently looking for a talented and experienced Data Scientist with a strong background in AI, specifically in building generative AI models using large language models, to join our team. This individual will play a crucial role in developing and implementing data-driven solutions, AI-powered applications, and generative models that will help us stay ahead of the competition and achieve our ambitious goals.
Responsibilities
• Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making.
• Design, develop, and implement advanced statistical models, machine learning algorithms, AI applications, and generative models using large language models such as GPT-3 and BERT, as well as frameworks like RAG and Knowledge Graphs (a minimal RAG-style prompt sketch follows this list).
• Communicate findings and insights to both technical and non-technical stakeholders through clear and concise presentations, reports, and visualizations.
• Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.
• Stay up-to-date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones.
• Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth.
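As a sketch of the RAG idea referenced in the responsibilities above, the snippet below assembles a retrieval-augmented prompt in plain Python; the knowledge base, keyword-overlap retriever, and the final LLM call are all simplifications invented for illustration.

```python
# Illustrative retrieval-augmented generation (RAG) prompt assembly.
# Keyword overlap stands in for a real retriever (embeddings + vector DB),
# and the call to an LLM is left as a comment.
knowledge_base = {
    "claims": "Claims must be filed within 30 days of the incident.",
    "premiums": "Premiums are recalculated annually based on claim history.",
    "coverage": "Standard coverage excludes pre-existing conditions.",
}

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(knowledge_base.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

question = "How are premiums recalculated each year?"
context = retrieve(question)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM such as GPT
```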
Qualifications we seek in you:
Minimum Qualifications
• Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
• Experience in data science, machine learning, AI applications, and generative AI modelling.
• Strong expertise in Python, R, or other programming languages commonly used in data science and AI, with experience in implementing large language models and generative AI frameworks.
• Proficient in statistical modelling, machine learning techniques, AI algorithms, and generative model development using large language models such as GPT-3, BERT, or similar frameworks like RAG, Knowledge Graphs etc.
• Experience working with large datasets and using various data storage and processing technologies such as SQL, NoSQL, Hadoop, and Spark.
• Strong analytical, problem-solving, and critical thinking skills, with the ability to draw insights from complex data and develop actionable recommendations.
• Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and explain complex concepts to non-technical stakeholders.
Preferred Qualifications/Skills
• Experience in deploying AI models, generative models, and applications in a production environment using cloud platforms such as AWS, Azure, or GCP.
• Knowledge of industry-specific data sources, challenges, and opportunities relevant to the insurance industry.
• Demonstrated experience in leading data science projects from inception to completion, including project management and team collaboration skills.
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let’s build tomorrow together.
Big Data Developer
Posted 2 days ago
Job Description
Experience: 5 to 9 years
Must have Skills:
- Kotlin/Scala/Java
- Spark
- SQL
- Spark Streaming
- Any cloud (AWS preferable)
- Kafka /Kinesis/Any streaming services
- Object-Oriented Programming
- Hive, ETL/ELT design experience
- CI/CD experience (ETL pipeline deployment)
- Data Modeling experience
Good to Have Skills:
- Git/similar version control tool
- Knowledge in CI/CD, Microservices
Role Objective:
The Big Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.
Roles & Responsibilities:
- Sound knowledge of Spark architecture, distributed computing, and Spark Streaming (a minimal PySpark streaming sketch follows this list).
- Proficient in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning.
- Good understanding of object-oriented concepts and hands-on experience with Kotlin/Scala/Java, with excellent programming logic and technique.
- Good grasp of functional programming and OOP concepts in Kotlin/Scala/Java.
- Good experience in SQL
- Managing a team of Associates and Senior Associates and ensuring utilization is maintained across the project.
- Able to mentor new members during onboarding to the project.
- Understand client requirements and be able to design, develop from scratch, and deliver.
- AWS cloud experience would be preferable.
- Experience in analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on cloud (AWS is preferred)
- Leading client calls to flag any delays, blockers, and escalations, and to collate requirements.
- Managing project timelines and client expectations, and meeting deadlines.
- Should have played project and team management roles.
- Facilitate meetings within the team on regular basis.
- Understand business requirement and analyze different approaches and plan deliverables and milestones for the project.
- Optimization, maintenance, and support of pipelines.
- Strong analytical and logical skills.
- Ability to comfortably tackle new challenges and learn.
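For the Spark Streaming point noted above, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and echoes messages to the console; the broker address and topic name are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Minimal PySpark Structured Streaming sketch: Kafka topic -> console.
# Broker address and topic name are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```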
Big Data QA
Posted today
Job Description
**About Us**
- We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics, and data visualization.
- Hoonartek is a leader in enterprise transformation and data engineering, and an acknowledged world-class Ab Initio delivery partner.
- Using centuries of cumulative experience, research, and leadership, we help our clients eliminate the complexities & risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions, and traditional data warehouses & marts.
- At Hoonartek, we work to ensure that our customers, partners, and employees all benefit from our unstinting commitment to delivery, quality, and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value, and integrity.
**How We Work?**
Define, Design and Deliver (D3) is our in-house delivery philosophy. It’s culled from agile and rapid methodologies and focused on ‘just enough design’. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success.
We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers, and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value.
**QA - Big Data**
**Technical Skills**:
- Proficient in c with hands-on experience.
- Strong understanding of Hadoop ecosystem and job scheduling tools like Airflow and Oozie.
- Skilled in writing and executing SQL queries for comprehensive data validation (a small validation sketch appears at the end of this posting).
- Familiarity with test automation frameworks (e.g., Robot Framework), with automation skills as an asset.
- Basic programming knowledge in Python is a plus.
- Experience with S3 buckets and cloud storage workflows is advantageous.
**Soft Skills**:
- Strong analytical and problem-solving skills with a high attention to detail.
- Excellent verbal and written communication abilities.
- Ability to collaborate effectively in a fast-paced Agile/Scrum environment.
- Adaptable and eager to learn new tools, technologies, and processes.
**Experience**:
- 2-4 years of experience in Big Data testing, focusing on both automated and manual testing for data validation and UI testing.
- Proven experience in testing Spark job performance, security, and integration across diverse systems.
- Hands-on experience with defect tracking tools such as JIRA or Bugzilla.
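The SQL-validation skill mentioned above boils down to checks like the reconciliation below. The sketch uses sqlite3 only so it is self-contained and runnable; in practice the same queries would run against Hive or the target warehouse, and the tables and rows here are invented for illustration.

```python
# Self-contained SQL data-validation sketch (sqlite3 used so it runs anywhere).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source (id INTEGER, amount REAL);
    CREATE TABLE target (id INTEGER, amount REAL);
    INSERT INTO source VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO target VALUES (1, 10.0), (2, 20.0);
""")

src_count = conn.execute("SELECT COUNT(*) FROM source").fetchone()[0]
tgt_count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
null_keys = conn.execute("SELECT COUNT(*) FROM target WHERE id IS NULL").fetchone()[0]

assert null_keys == 0, "target table contains NULL keys"
print(f"row-count check: source={src_count}, target={tgt_count}, "
      f"{'OK' if src_count == tgt_count else 'MISMATCH'}")
```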
DevOps - Big Data
Posted today
Job Description
**About Us**
- We empower enterprises globally through intelligent, creative, and insightful services for data integration, data analytics, and data visualization.
- Hoonartek is a leader in enterprise transformation and data engineering, and an acknowledged world-class Ab Initio delivery partner.
- Using centuries of cumulative experience, research, and leadership, we help our clients eliminate the complexities & risk of legacy modernization and safely deliver big data hubs, operational data integration, business intelligence, risk & compliance solutions, and traditional data warehouses & marts.
- At Hoonartek, we work to ensure that our customers, partners, and employees all benefit from our unstinting commitment to delivery, quality, and value. Hoonartek is increasingly the choice for customers seeking a trusted partner of vision, value, and integrity.
**How We Work?**
Define, Design and Deliver (D3) is our in-house delivery philosophy. It’s culled from agile and rapid methodologies and focused on ‘just enough design’. We embrace this philosophy in everything we do, leading to numerous client success stories and indeed to our own success.
We embrace change, empowering and trusting our people and building long and valuable relationships with our employees, our customers, and our partners. We work flexibly, even adopting traditional/waterfall methods where circumstances demand it. At Hoonartek, the focus is always on delivery and value.
**Responsibilities**
- Collaborate with cross-functional teams to design, implement, and maintain robust and scalable Big Data infrastructure.
- Implement automation scripts for provisioning, configuration, and orchestration of Big Data clusters using tools such as Ranger, Ansible, or others.
- Ensure high availability, performance, and security of Big Data platforms.
- Collaborate with data engineers and data scientists to optimize and streamline data processing workflows.
- Evaluate and adopt new tools and technologies to enhance the efficiency of DevOps processes.
- Provide support to development teams in areas such as environment setup, debugging, and performance tuning.
- 1. Bachelor's degree in Computer Science, Engineering, or a related field.
- 2. Proven experience (3-6 years) working as a DevOps Engineer with a focus on Big Data technologies.
- 3. Strong expertise in deploying and managing Big Data frameworks such as Hadoop, Spark, Kafka, Hue, Airflow, Trino, etc.
- 4. Experience with containerization and orchestration tools such as Docker and Kubernetes.
- 5. Proficiency in scripting languages (e.g., Python, Bash) for automation tasks (a small health-check sketch follows this list).
- 6. Hands-on experience with configuration management tools like Ansible.
- 7. Strong hands-on experience with version control systems (e.g., Git) and CI/CD tools (e.g., Jenkins, GitLab CI).
- 8. Understanding of security best practices for Big Data environments.
- 9. Excellent problem-solving and communication skills.
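As one small example of the Python automation skill listed above, the sketch below polls a couple of HTTP endpoints that Big Data services commonly expose and reports their status; the URLs are placeholders and would need to match whatever the actual cluster exposes.

```python
# Hypothetical health-check automation sketch. All URLs are placeholders.
import requests

SERVICES = {
    "spark-ui": "http://localhost:4040/api/v1/applications",  # placeholder
    "airflow": "http://localhost:8080/health",                # placeholder
}

def check(name: str, url: str) -> str:
    try:
        resp = requests.get(url, timeout=5)
        return f"{name}: HTTP {resp.status_code}"
    except requests.RequestException as exc:
        return f"{name}: unreachable ({exc.__class__.__name__})"

for service, endpoint in SERVICES.items():
    print(check(service, endpoint))
```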
Java Big Data Developer
Posted today
Job Description
Responsibilities include, but are not limited to:
- Develops and tests software, including ongoing refactoring of code, and drives continuous improvement in code structure and quality.
- Primary focus is writing code and API specs, conducting code reviews, and testing in ongoing sprints, or building proofs of concept and automation tools.
- Applies visualization and other techniques to fast-track concepts.
- Functions as a core member of an Agile team, driving user story analysis and elaboration, design and development of software applications, testing, and building automation tools.
- Works on a specific platform/product or as part of a dynamic resource pool assigned to projects based on demand and business priority.
- Identifies opportunities to adopt innovative technologies and build reusable components.
- Ensures timely and effective communication with the reporting manager.
Strong programming knowledge in Java; solid understanding of data structures, algorithms & design patterns is required
Strong SQL knowledge is required
Hands-on experience with or knowledge of Big Data technologies (at least MapReduce, Hive, and HBase)
Understanding and experience with UNIX / Shell / Python scripting
Database query optimization and indexing
Web services design and implementation using REST / SOAP
Primary: Java API and SQL
Secondary: Big Data
Big data Manual QA
Posted today
Job Description
Develop and execute test scripts to validate data pipelines, transformations, and integrations.
Formulate and maintain test strategies including smoke, performance, functional, and regression testing to ensure data processing and ETL jobs meet requirements.
Collaborate with development teams to assess changes in data workflows and update test cases to preserve data integrity.
Design and run tests for data validation, storage, and retrieval using Azure services like Data Lake, Synapse, and Data Factory, adhering to industry standards.
Continuously enhance automated tests as new features are developed, ensuring timely delivery per defined quality standards.
Participate in data reconciliation and verify Data Quality frameworks to maintain data accuracy, completeness, and consistency across the platform.
Share knowledge and best practices by collaborating with business analysts and technology teams to document testing processes and findings.
Communicate testing progress effectively with stakeholders, highlighting issues or blockers, and ensuring alignment with business objectives.
Maintain a comprehensive understanding of the Azure Data Lake platform's data landscape to ensure thorough testing coverage.
Skills & Experience:
3-6 years of QA experience with a strong focus on Big Data testing, particularly in Data Lake environments on Azure's cloud platform.
Proficient in Azure Data Factory, Azure Synapse Analytics and Databricks for big data processing and scaled data quality checks.
Proficiency in SQL, capable of writing and optimizing both simple and complex queries for data validation and testing purposes.
Proficient in PySpark, with experience in data manipulation and transformation, and a demonstrated ability to write and execute test scripts for data processing and validation (a minimal PySpark validation sketch follows this list).
Hands-on experience with Functional & system integration testing in big data environments, ensuring seamless data flow and accuracy across multiple systems.
Knowledge and ability to design and execute test cases in a behaviour-driven development environment.
Fluency in Agile methodologies, with active participation in Scrum ceremonies and a strong understanding of Agile principles.
Familiarity with tools like Jira, including experience with X-Ray or Jira Zephyr for defect management and test case management.
Proven experience working on high-traffic and large-scale software products, ensuring data quality, reliability, and performance under demanding conditions.
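For the PySpark test-script requirement noted above, a minimal validation script can look like the sketch below; the column names, rows, and quality rules are invented for illustration.

```python
# Minimal PySpark data-validation sketch: basic quality assertions over a
# small in-memory DataFrame. Columns and rules are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.createDataFrame(
    [(1, "paid"), (2, "pending"), (3, "paid")],
    ["order_id", "status"],
)

assert df.count() == 3, "unexpected row count"
assert df.filter(col("order_id").isNull()).count() == 0, "null keys found"
assert df.filter(~col("status").isin("paid", "pending")).count() == 0, \
    "unexpected status values"
print("all data-quality checks passed")
```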
Big Data Solution Architect
Posted today
Job Description
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are looking for Solution Architects for data-driven projects to join our Data Practice team in India. Together we design and drive lots of solutions that generate value from data, taking advantage of scalable platforms, cutting-edge technologies, and machine learning algorithms. We provide a solid architecture framework, educational programs, and a strong SA community to support our new Architects in a deep dive into the data domain.
Responsibilities
Requirements
We offer
Sr. Data Scientists - AI/ML - GEN AI - Work location: Across India | EXP: 4-12 years
Posted 11 days ago
Job Description
Data Scientists - AI/ML - GEN AI - Across India | EXP: 4-10 years
We are looking for data scientists with a total of around 4-10 years of experience and at least 4-10 years of relevant experience in data science, analytics, and AI/ML. Key skills: Python; data science; AI/ML; GEN AI.
Primary Skills:
- Excellent understanding and hands-on experience of data science and machine learning techniques & algorithms for supervised & unsupervised problems, NLP, computer vision, and GEN AI. Good applied statistics skills, such as distributions, statistical inference & testing, etc.
- Excellent understanding and hands-on experience in building deep learning models for text & image analytics (such as ANNs, CNNs, LSTMs, transfer learning, encoder-decoder architectures, etc.).
- Proficient in coding in common data science languages & tools such as R and Python.
- Experience with common data science toolkits such as NumPy, Pandas, Matplotlib, statsmodels, scikit-learn, SciPy, NLTK, spaCy, OpenCV, etc. (a short scikit-learn sketch follows this list).
- Experience with common data science frameworks such as TensorFlow, Keras, PyTorch, XGBoost, etc.
- Exposure to or knowledge of cloud platforms (Azure/AWS).
- Experience deploying models in production.
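As a compact illustration of the toolkits listed above, here is a scikit-learn text-classification sketch (TF-IDF features plus logistic regression); the labelled sentences are toy data invented for the example.

```python
# Toy supervised NLP sketch with scikit-learn: TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "great product, works perfectly", "terrible support, very slow",
    "love the new features", "completely broken after update",
    "fast delivery and easy setup", "refund took weeks, frustrating",
]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42, stratify=labels
)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```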
Principal Software Architect - Big Data
Posted today
Job Description
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward.
In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else.
When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.
As a Software Architect, you will be involved in leading complex software development teams in a hands-on manner. You will prototype code and participate in code reviews, facilitate breaking down user stories into technical tasks and help manage the process by which code is delivered. Your expertise will expand into Cloud technologies, DevOps and continuous delivery domains.
You will be an active learner, identifying new or better ways to deliver impact with people and technology. You will develop a growth mindset and relish opportunities to use familiar and unfamiliar technologies, closed source and open source software, and develop better approaches to solving business and technical challenges. You will have a strong understanding of key agile engineering practices to guide teams on improvement opportunities in their engineering practices. You will lead the adoption of technical standards and best practices to improve our organizational capability.
You will also provide ongoing coaching and mentoring to the technical leads and developers to grow high performing teams. You will be based in our Bengaluru or Gurugram office as part of our Growth, Marketing & Sales solutions team.
You will be aligned primarily with Periscope’s technology team. Periscope® By McKinsey enables better commercial decisions by uncovering actionable insights. The Periscope platform combines world leading intellectual property, prescriptive analytics, and cloud based tools to provide more than 25 solutions focused on insights and marketing, with expert support and training. It is a unique combination that drives revenue growth both now and in the future. Customer experience, performance, pricing, category, and sales optimization are powered by the Periscope platform.
Periscope has a presence in 26 locations across 16 countries with a team of 1000+ business and IT professionals and a network of 300+ experts.
To learn more about how Periscope’s solutions and experts are helping businesses continually drive better performance, visit
- Bachelor's degree in computer science or equivalent area; master's degree is a plus
- 11+ years of experience in software development
- 3+ years of experience in architecting SaaS/Web based customer facing products, leading engineering teams as a software/technical architect
- Hands-on experience in designing and building B2B or B2C products
- Hands-on experience working with Python or Java (Python preferred)
- Strong cloud infrastructure experience with Azure cloud / AWS / GCP; Azure preferred
- Hands-on experience with container technologies like Docker, Kubernetes with Helm charts
- Hands-on experience with relational databases like SQL Server, PostgreSQL and document stores like Elasticsearch or MongoDB
- Hands-on experience with Big Data processing technologies like Spark or Databricks
- Experience in engineering practices such as code refactoring, microservices, design and enterprise integration patterns, test and design driven development, continuous integration, building highly scalable applications, application and infrastructure security
- Experience in building event-driven systems and working with message queues/topics (a toy producer/consumer sketch follows this list)
- Knowledge of Agile software development process
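As a toy in-process analogue of the event-driven requirement above, the sketch below wires a producer thread to a consumer through a standard-library queue; a real system would publish to Kafka, RabbitMQ, or a cloud message bus instead.

```python
# Toy producer/consumer illustrating the event-driven pattern with only
# the standard library; not a substitute for a real message broker.
import queue
import threading

events: "queue.Queue" = queue.Queue()

def producer() -> None:
    for i in range(5):
        events.put({"order_id": i, "status": "created"})  # publish event
    events.put(None)  # sentinel: no more events

def consumer() -> None:
    while True:
        event = events.get()
        if event is None:
            break
        print(f"processing order {event['order_id']}")

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```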