273 Data Scientists jobs in Gurugram
Data Scientists
Posted today
Job Description
Key Responsibilities
AI/ML Development & Research
• Design, develop, and deploy advanced machine learning and deep learning models for complex business problems
• Implement and optimize Large Language Models (LLMs) and Generative AI solutions
• Build agentic AI systems with autonomous decision-making capabilities
• Conduct research on emerging AI technologies and their practical applications
• Perform model evaluation, validation, and continuous improvement (see the evaluation sketch below)
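By way of illustration, a minimal evaluation harness of the kind the last bullet describes, sketched with scikit-learn on a synthetic dataset (both the library choice and the data are assumptions, not this team's stack):

```python
# Hypothetical evaluation harness: illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Stand-in dataset; a real project would load business data instead.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

model = GradientBoostingClassifier(random_state=42)

# 5-fold cross-validated ROC-AUC gives a variance-aware estimate of quality.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```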
Cloud Infrastructure & Full-Stack Development
• Architect and implement scalable cloud-native ML/AI solutions on AWS, Azure, or GCP
• Develop full-stack applications integrating AI models with modern web technologies (a minimal serving sketch follows this list)
• Build and maintain ML pipelines using cloud services (SageMaker, ML Engine, etc.)
• Implement CI/CD pipelines for ML model deployment and monitoring
• Design and optimize cloud infrastructure for high-performance computing workloads
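A hedged sketch of what "integrating AI models with modern web technologies" can look like with FastAPI (listed later in this posting); the endpoint, payload schema, and scoring stub are illustrative placeholders, not this employer's API:

```python
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: List[float]  # flat feature vector; a real schema would name each field

@app.post("/predict")
def predict(features: Features) -> dict:
    # A deployed service would invoke a trained model here; this stub returns a mean.
    score = sum(features.values) / max(len(features.values), 1)
    return {"score": score}
```

Run locally with `uvicorn main:app --reload`, assuming the file is named main.py.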
Data Engineering & Database Management
• Design and implement data pipelines for large-scale data processing
• Work with both SQL and NoSQL databases (PostgreSQL, MongoDB, Cassandra, etc.)
• Optimize database performance for ML workloads and real-time applications
• Implement data governance and quality assurance frameworks (see the quality-check sketch after this list)
• Handle streaming data processing and real-time analytics
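As a small example of the quality-assurance idea, a pandas-based null-fraction gate; the threshold and column names are invented for illustration:

```python
# Toy data-quality gate with pandas; thresholds and columns are illustrative.
import pandas as pd

def quality_report(df: pd.DataFrame, max_null_frac: float = 0.05) -> dict:
    """Flag columns whose null fraction exceeds a tolerance."""
    null_frac = df.isna().mean()
    return {
        "rows": len(df),
        "failing_columns": null_frac[null_frac > max_null_frac].to_dict(),
    }

df = pd.DataFrame({"user_id": [1, 2, None], "amount": [10.0, None, 3.5]})
print(quality_report(df))
```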
Leadership & Collaboration
• Mentor junior data scientists and guide technical decision-making
• Collaborate with cross-functional teams including product, engineering, and business stakeholders
• Present findings and recommendations to technical and non-technical audiences
• Lead proof-of-concept projects and innovation initiatives
Required Qualifications
Education & Experience
• Master's or PhD in Computer Science, Data Science, Statistics, Mathematics, or related field
• 5+ years of hands-on experience in data science and machine learning
• 3+ years of experience with deep learning frameworks and neural networks
• 2+ years of experience with cloud platforms and full-stack development
Technical Skills - Core AI/ML
• Machine Learning: Scikit-learn, XGBoost, LightGBM, advanced ML algorithms
• Deep Learning: TensorFlow, PyTorch, Keras, CNN, RNN, LSTM, Transformers
• Large Language Models: GPT, BERT, T5, fine-tuning, prompt engineering
• Generative AI: Stable Diffusion, DALL-E, text-to-image, text generation
• Agentic AI: Multi-agent systems, reinforcement learning, autonomous agents
Technical Skills - Development & Infrastructure
• Programming: Python (expert), R, Java/Scala, JavaScript/TypeScript
• Cloud Platforms: AWS (SageMaker, EC2, S3, Lambda), Azure ML, or Google Cloud AI
• Databases: SQL (PostgreSQL, MySQL), NoSQL (MongoDB, Cassandra, DynamoDB)
• Full-Stack Development: React/Vue.js, Node.js, FastAPI, Flask, Docker, Kubernetes
• MLOps: MLflow, Kubeflow, model versioning, A/B testing frameworks (a minimal tracking sketch follows this list)
• Big Data: Spark, Hadoop, Kafka, streaming data processing
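To make the MLOps bullet concrete, a minimal MLflow experiment-tracking sketch around a scikit-learn baseline; the run name, parameter, and dataset are illustrative assumptions:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each run records params and metrics, giving versioned, comparable experiments.
with mlflow.start_run(run_name="baseline-logreg"):
    model = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
```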
Preferred Qualifications
• Experience with vector databases and embeddings (Pinecone, Weaviate, Chroma)
• Knowledge of LangChain, LlamaIndex, or similar LLM frameworks
• Experience with model compression and edge deployment
• Familiarity with distributed computing and parallel processing
• Experience with computer vision and NLP applications
• Knowledge of federated learning and privacy-preserving ML
• Experience with quantum machine learning
• Expertise in MLOps and production ML system design
Key Competencies
Technical Excellence
• Strong mathematical foundation in statistics, linear algebra, and optimization
• Ability to implement algorithms from research papers
• Experience with model interpretability and explainable AI
• Knowledge of ethical AI and bias detection/mitigation
Problem-Solving & Innovation
• Strong analytical and critical thinking skills
• Ability to translate business requirements into technical solutions
• Creative approach to solving complex, ambiguous problems
• Experience with rapid prototyping and experimentation
Communication & Leadership
• Excellent written and verbal communication skills
• Ability to explain complex technical concepts to diverse audiences
• Strong project management and organizational skills
• Experience mentoring and leading technical teams
How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.
DEI: At TaskUs, we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know.
We invite you to explore all TaskUs career opportunities and apply through the provided URL.
Solution Data Scientists
Posted 1 day ago
Job Description
About AiDASH
AiDASH is making critical infrastructure industries climate-resilient and sustainable with satellites and AI. Using our full-stack SaaS solutions, customers in electric, gas, water utilities, transportation, and construction are transforming asset inspection and maintenance - and complying with biodiversity net gain mandates and carbon capture goals. AiDASH exists to safeguard critical infrastructure and secure the future of humanAIty. Learn more at
We are a Series C climate tech startup backed by leading investors, including Shell Ventures, National Grid Partners, G2 Venture Partners, Duke Energy, Edison International, Lightrock, Marubeni, among others. We have been recognized by Forbes two years in a row as one of "America's Best Startup Employers." We are also proud to be one of the few climate software companies in Time Magazine's "America's Top GreenTech Companies 2024". Deloitte Technology Fast 500 recently ranked us at No. 12 among San Francisco Bay Area companies, and No. 59 overall in their selection of the top 500 for 2024.
Join us in Securing Tomorrow
How you'll make an impact:
- Extract, clean, and analyze large datasets from multiple sources using SQL and Python/R.
- Build and optimize data scraping pipelines to process large-scale unstructured data.
- Perform statistical analysis and data mining to identify trends, patterns, and anomalies.
- Develop automated workflows for data preparation and transformation.
- Conduct data quality checks and implement validation procedures.
- Build and validate ML models (classification, regression, clustering) using TensorFlow, Keras, and Pandas (a minimal Keras sketch follows this list).
- Apply feature engineering to enhance accuracy and interpretability.
- Execute experiments, apply cross-validation, and benchmark model performance.
- Collaborate on A/B testing frameworks to validate hypotheses.
- Work on predictive analytics for wildfire risk detection and storm damage assessment.
- Translate complex business requirements into data science solutions.
- Develop predictive models and analytical tools that directly support CRIS decision-making.
- Create interactive dashboards and automated reports for stakeholders.
- Document methodologies and maintain clean, production-ready code repositories.
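For illustration, a minimal Keras binary classifier of the kind described above, trained on synthetic stand-in data; real wildfire-risk features are not shown here:

```python
# Illustrative Keras classifier; data and label rule are synthetic placeholders.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")  # toy label rule

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```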
What we're looking for:
- 4–5 years of hands-on experience in data science, analytics, or quantitative research.
- Bachelor's degree in Data Science, Statistics, Computer Science, Mathematics, Engineering, or related field.
- Strong programming skills in Python for data analysis and machine learning.
- Proficiency in SQL and experience with relational databases.
- Experience with ML libraries (scikit-learn, pandas, NumPy, Keras, TensorFlow).
- Knowledge of statistical methods (clustering, regression, classification) and experimental design.
- Familiarity with data visualization tools (matplotlib, seaborn, ggplot2, Tableau, or similar).
- Exposure to cloud platforms (AWS, GCP, Azure) is a plus.
- Proven experience with end-to-end model development (from data exploration to deployment).
- Track record of delivering actionable insights that influenced business decisions.
- Strong analytical and problem-solving skills with attention to detail.
We are proud to be an equal-opportunity employer. We are committed to embracing diversity and inclusion in our hiring practices, and we promote a work environment where everyone, from any race, color, religion, sex, sexual orientation, gender identity, or national origin, can do their best work.
We are committed to providing an inclusive and accessible interview experience for all candidates. Please let us know if you require any accommodation during the interview process, and we will make every effort to meet your needs.
Lead Consultant-Data Scientists with AI and Generative Model experience!
Posted today
Job Description
Ready to shape the future of work?
At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges.
If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Lead Consultant-Data Scientists with AI and Generative Model experience!
We are currently looking for a talented and experienced Data Scientist with a strong background in AI, specifically in building generative AI models using large language models, to join our team. This individual will play a crucial role in developing and implementing data-driven solutions, AI-powered applications, and generative models that will help us stay ahead of the competition and achieve our ambitious goals.
Responsibilities
• Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making.
• Design, develop, and implement advanced statistical models, machine learning algorithms, AI applications, and generative models built on large language models such as GPT-3 and BERT, along with frameworks like RAG and Knowledge Graphs (a minimal retrieval sketch follows this list).
• Communicate findings and insights to both technical and non-technical stakeholders through clear and concise presentations, reports, and visualizations.
• Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed.
• Stay up-to-date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones.
• Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth.
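A minimal RAG-style retrieval skeleton for orientation; the embed() helper is a toy stand-in rather than a real embedding model, and the final LLM call is left as a placeholder:

```python
# Toy retrieval-augmented generation skeleton; embed() and the generation step
# are placeholders, not a real LLM API.
import numpy as np

DOCS = ["Policy A covers flood damage.", "Policy B excludes wildfire claims."]

def embed(text: str) -> np.ndarray:
    # Hashing "embedding" so the example runs without a model download.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def retrieve(query: str, k: int = 1) -> list:
    q = embed(query)
    sims = [float(q @ embed(d)) for d in DOCS]
    top = np.argsort(sims)[::-1][:k]
    return [DOCS[i] for i in top]

context = retrieve("Does any policy cover floods?")
prompt = f"Context: {context}\nQuestion: Does any policy cover floods?"
print(prompt)  # a generate(prompt) call to an LLM would follow here
```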
Qualifications we seek in you:
Minimum Qualifications
• Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field.
• Experience in data science, machine learning, AI applications, and generative AI modelling.
• Strong expertise in Python, R, or other programming languages commonly used in data science and AI, with experience in implementing large language models and generative AI frameworks.
• Proficient in statistical modelling, machine learning techniques, AI algorithms, and generative model development with large language models such as GPT-3 and BERT, as well as frameworks like RAG and Knowledge Graphs.
• Experience working with large datasets and using various data storage and processing technologies such as SQL, NoSQL, Hadoop, and Spark.
• Strong analytical, problem-solving, and critical thinking skills, with the ability to draw insights from complex data and develop actionable recommendations.
• Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and explain complex concepts to non-technical stakeholders.
Preferred Qualifications/Skills
• Experience in deploying AI models, generative models, and applications in a production environment using cloud platforms such as AWS, Azure, or GCP.
• Knowledge of industry-specific data sources, challenges, and opportunities relevant to the insurance sector.
• Demonstrated experience in leading data science projects from inception to completion, including project management and team collaboration skills.
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation
Make an impact – Drive change for global enterprises and solve business challenges that matter
Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities
Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let’s build tomorrow together.
Big Data Developer
Posted today
Job Description
Position: Big Data Engineer
Experience: 5+ Years
Location: Gurugram / Bangalore
Joining: Immediate Joiner
Budget: 16.5 LPA
Job Summary:
We're seeking an experienced Senior Big Data Engineer with 5+ years of experience in designing, developing, and implementing large-scale data systems using Redshift, AWS, Spark, and Scala. The ideal candidate will have expertise in building data pipelines, data warehousing, and data processing applications.
Key Responsibilities:
- Data Warehousing:
- Design, develop, and maintain large-scale data warehouses using Amazon Redshift
- Optimize Redshift cluster performance, scalability, and cost-effectiveness
- Data Pipelines:
- Build and maintain data pipelines using Apache Spark, Scala, and AWS services like S3, Glue, and Lambda (see the pipeline sketch after this list)
- Ensure data quality, integrity, and security across the data pipeline
- Data Processing:
- Develop and optimize data processing applications using Spark, Scala, and AWS services
- Work with data scientists and analysts to develop predictive models and perform advanced analytics
- AWS Services:
- Leverage AWS services like S3, Glue, Lambda, and IAM to build scalable and secure data systems
- Ensure data systems are highly available, scalable, and fault-tolerant
- Troubleshooting and Optimization:
- Troubleshoot and optimize data pipeline performance issues
- Ensure data systems are optimized for cost, performance, and scalability
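For orientation, a hedged PySpark sketch of one batch pipeline step; the bucket paths, columns, and aggregation are placeholders rather than this employer's actual design (the posting also names Scala, so production code may differ):

```python
# Sketch of a batch ETL step in PySpark; all names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Columnar output ready for a Redshift COPY or a Spectrum external table.
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```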
Requirements:
- Experience: 5+ years of experience in big data engineering or a related field
- Technical Skills:
- Proficiency in Amazon Redshift, Apache Spark, and Scala
- Experience with AWS services like S3, Glue, Lambda, and IAM
- Knowledge of data processing frameworks like Spark and data storage solutions like S3 and Redshift
- Data Architecture: Strong understanding of data architecture principles and design patterns
- Problem-Solving: Excellent problem-solving skills and attention to detail
Preferred Qualifications:
- Certifications: AWS Certified Big Data - Specialty or similar certifications
- Machine Learning: Familiarity with machine learning frameworks like Spark MLlib or TensorFlow
- Agile Methodology: Experience working in agile development environments
- Data Governance: Experience with data governance, data quality, and data security
What We Offer:
- Competitive salary and benefits package
- Opportunities for professional development and growth
- Collaborative and dynamic work environment
- Flexible work arrangements
Big Data Developer
Posted 1 day ago
Job Description
About The Opportunity
A staffing and HR services firm focused on placing talent into Enterprise IT, Analytics, and Big Data engineering projects across Banking, Retail, Telecom, and SaaS sectors. We partner with product and services organisations to deliver scalable data platforms, real-time analytics, and high-throughput ETL solutions.
We are hiring an on-site Big Data Engineer in India to design, build, and optimise data pipelines and platform components for mission-critical data applications.
Role & Responsibilities
- Design and implement end-to-end batch and streaming data pipelines using Spark and Hadoop ecosystem components.
- Develop ETL jobs and data transformation logic in Python/SQL, ensuring scalability and maintainability.
- Integrate and process real-time data streams using Kafka and related ingestion tools; ensure low-latency delivery to downstream systems (a streaming sketch follows this list).
- Author Hive queries and optimize performance for large-scale datasets; tune Spark jobs to reduce cost and improve throughput.
- Collaborate with data engineers, analysts, and stakeholders to translate business requirements into robust data models and pipelines.
- Implement monitoring, logging, and automated testing for data workflows; troubleshoot production issues and perform root-cause analysis.
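A minimal Spark Structured Streaming read from Kafka, for illustration; the broker address, topic, and console sink are assumptions (production would target HDFS/Hive or object storage):

```python
# Requires the spark-sql-kafka connector on the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

# Console sink is for local debugging only.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```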
Skills & Qualifications
Must-Have
- Apache Spark
- Hadoop HDFS
- Apache Hive
- Apache Kafka
- SQL
- Python
Preferred
- Scala
- Apache Airflow
- AWS EMR
Other Qualifications
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- Proven experience delivering on-site in enterprise Big Data environments; familiarity with production deployment practices and CI/CD for data workflows.
- Strong troubleshooting mindset with experience in performance tuning and data quality assurance.
Benefits & Culture Highlights
- On-site delivery model with high-impact projects for large enterprise clients and opportunities to work end-to-end on data platforms.
- Collaborative engineering culture, technical mentoring, and exposure to modern data stack components.
- Competitive compensation and opportunities for certification/training in cloud and Big Data technologies.
Big Data Engineer
Posted 1 day ago
Job Description
Company: Indian / Global Digital Organization
Key Skills:
PySpark, AWS, Python, Scala, ETL
Roles and Responsibilities:
- Develop and deploy ETL and data warehousing solutions using Python libraries and Linux bash scripts on AWS EC2, with data stored in Redshift.
- Collaborate with product and analytics teams to scope business needs, design metrics, and build reports/dashboards.
- Automate and optimize existing data sets and ETL pipelines for efficiency and reliability.
- Work with multi-terabyte data sets and write complex SQL queries to support analytics.
- Design and implement ETL solutions integrating multiple data sources using Pentaho.
- Utilize Linux/Unix scripting for data processing tasks.
- Leverage AWS services (Redshift, S3, EC2) for storage, processing, and pipeline automation (see the load sketch after this list).
- Follow software engineering best practices for coding standards, code reviews, source control, testing, and operations.
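One plausible shape for the Redshift load step, sketched with psycopg2 and a COPY statement; the client library, cluster endpoint, table, and IAM role are all assumptions rather than details from this posting:

```python
# Illustrative Redshift load; connection details and identifiers are placeholders.
import psycopg2

COPY_SQL = """
    COPY analytics.daily_revenue
    FROM 's3://example-bucket/curated/daily_revenue/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="etl_user", password="***",
)
# The connection context manager commits the transaction on success.
with conn, conn.cursor() as cur:
    cur.execute(COPY_SQL)
```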
Skills Required:
Must-Have:
- Hands-on experience with PySpark for big data processing
- Strong knowledge of AWS services (Redshift, S3, EC2)
- Proficiency in Python for data processing and automation
- Strong SQL skills for working with RDBMS and multi-terabyte data sets
Nice-to-Have:
- Experience with Scala for distributed data processing
- Knowledge of ETL tools such as Pentaho
- Familiarity with Linux/Unix scripting for data operations
- Exposure to data modeling, pipelines, and visualization
Education:
Bachelor's degree in Computer Science, Information Technology, or a related field
Big Data Tester
Posted today
Job Description
Job Title: Big Data Tester (SQL)
Experience: 3+ Years
Location: Gurugram / Bangalore
Employment Type: Full-Time
Job Summary:
We are seeking a highly skilled Big Data Tester with strong SQL expertise to join our team. The ideal candidate will be responsible for validating, testing, and ensuring the quality of large-scale data pipelines, data lakes, and big data platforms. This role requires expertise in SQL, Big Data testing frameworks, ETL processes, and hands-on experience with Hadoop ecosystem tools.
Key Responsibilities:
- Design, develop, and execute test strategies for Big Data applications and pipelines.
- Perform data validation, reconciliation, and quality checks on large datasets.
- Write and execute complex SQL queries for data validation and analysis.
- Validate data ingestion from multiple sources (structured/unstructured) into Hadoop/Big Data platforms.
- Conduct testing for ETL jobs, data transformations, and data loading processes.
- Work with Hadoop, Hive, Spark, Sqoop, HDFS, and related Big Data tools.
- Identify, document, and track defects; collaborate with developers and data engineers to resolve issues.
- Automate data validation and testing processes where possible (a reconciliation sketch follows this section).
- Ensure compliance with data governance, quality standards, and best practices.
- Work in an Agile environment with cross-functional teams.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or related field.
- 3+ years of experience in Data/ETL/Big Data testing.
- Strong SQL skills (complex queries, joins, aggregations, window functions).
- Hands-on experience with Big Data tools: Hadoop, Hive, Spark, HDFS, Sqoop, Impala (any relevant).
- Experience in testing ETL processes and Data Warehousing solutions.
- Familiarity with scripting languages (Python/Unix Shell) for test automation.
- Knowledge of defect management and test case management tools (e.g., JIRA, HP ALM).
- Strong analytical and problem-solving skills.
Good to Have:
- Exposure to Cloud platforms (AWS, Azure, GCP) with Big Data services.
- Experience with automation frameworks for Big Data testing.
- Understanding of CI/CD pipelines for data projects.
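As a sketch of the automated validation this role performs, a toy source-vs-target row-count reconciliation in PySpark; the table names are invented, and both tables are assumed to be registered in the metastore:

```python
# Toy reconciliation check; schema and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("recon-test").getOrCreate()

source_count = spark.sql("SELECT COUNT(*) AS c FROM staging.orders").first()["c"]
target_count = spark.sql("SELECT COUNT(*) AS c FROM warehouse.orders").first()["c"]

assert source_count == target_count, (
    f"Row-count mismatch: staging={source_count}, warehouse={target_count}"
)
print("Reconciliation passed:", source_count, "rows")
```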
Big Data Developer
Posted today
Job Description
Position: Big Data Engineer (Immediate Joiner)
Experience: 5+ Years
Location: Gurugram / Bangalore
Joining: Immediate Joiner
Budget LPA
Job Summary:
We're seeking an experienced Senior Big Data Engineer with 5+ years of experience in designing, developing, and implementing large-scale data systems using Redshift, AWS, Spark, and Scala. The ideal candidate will have expertise in building data pipelines, data warehousing, and data processing applications.
Key Responsibilities:
- Data Warehousing:
- Design, develop, and maintain large-scale data warehouses using Amazon Redshift
- Optimize Redshift cluster performance, scalability, and cost-effectiveness
- Data Pipelines:
- Build and maintain data pipelines using Apache Spark, Scala, and AWS services like S3, Glue, and Lambda
- Ensure data quality, integrity, and security across the data pipeline
- Data Processing:
- Develop and optimize data processing applications using Spark, Scala, and AWS services
- Work with data scientists and analysts to develop predictive models and perform advanced analytics
- AWS Services:
- Leverage AWS services like S3, Glue, Lambda, and IAM to build scalable and secure data systems
- Ensure data systems are highly available, scalable, and fault-tolerant
- Troubleshooting and Optimization:
- Troubleshoot and optimize data pipeline performance issues
- Ensure data systems are optimized for cost, performance, and scalability
Requirements:
- Experience: 5+ years of experience in big data engineering or a related field
- Technical Skills:
- Proficiency in Amazon Redshift, Apache Spark, and Scala
- Experience with AWS services like S3, Glue, Lambda, and IAM
- Knowledge of data processing frameworks like Spark and data storage solutions like S3 and Redshift
- Data Architecture: Strong understanding of data architecture principles and design patterns
- Problem-Solving: Excellent problem-solving skills and attention to detail
Preferred Qualifications:
- Certifications: AWS Certified Big Data - Specialty or similar certifications
- Machine Learning: Familiarity with machine learning frameworks like Spark MLlib or TensorFlow
- Agile Methodology: Experience working in agile development environments
- Data Governance: Experience with data governance, data quality, and data security
Big Data Engineer
Posted today
Job Description
Pay Range: LPA (INR)
Required Qualifications:
- 3–5 years of hands-on experience in Big Data engineering.
- Proficiency in Python (Java is a plus).
- Strong understanding of Google Cloud Platform (GCP) services and architecture; GCP certification is a plus.
- Solid problem-solving skills and a structured thought process.
- Experience with data migration and cloud-native development is highly desirable.
- Experience with building RESTful APIs.
- Thorough understanding of JSON and data structure fundamentals.
- Expertise in object-oriented analysis and design across a variety of platforms.
- Fundamental knowledge of distributed computing and experience with Big Data technologies like Spark and Hive.
- Experience with workflow scheduling tools; Airflow experience is desirable (a minimal DAG sketch follows this list).
- Demonstrated experience in Agile development, application design, software development, and testing.
- Hands-on expertise with application design, software development, and automated testing.
- Experience with distributed (multi-tiered) systems, algorithms, and relational databases.
- Aptitude for learning and applying programming concepts.
- Ability to communicate effectively with internal and external business partners.
Nice to Have:
- Experience working in hybrid teams and agile environments.
- Exposure to GenAI or AI/ML-based applications.
- Familiarity with vendor collaboration (e.g., Infosys, IBM, CDS).
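A minimal Airflow DAG sketch, since Airflow experience is called out as desirable; the DAG id, schedule, and task body are illustrative assumptions:

```python
# Minimal Airflow 2 DAG; identifiers and the task body are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load() -> None:
    print("pull from source, land in GCS, load to BigQuery")  # placeholder step

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```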
About the Role:
Insight Global is seeking a skilled and motivated Big Data Engineer to join our client's GenAI applications team based out of India. This role is part of a strategic initiative to migrate their data infrastructure from on-premise to Google Cloud Platform (GCP). You will work closely with a high-performing team of engineers and leaders to build scalable data solutions and support cloud-native application development.
Big Data Analytics
Posted today
Job Description
Job description
All about You: Qualifications & Experience:
• 5+ years of experience in analytics, data science, pricing strategy, customer success, or related roles, ideally in the payments, financial services, or technology sectors.
• Proven track record of developing and scaling data-driven tools and frameworks with measurable outcomes.
• Expertise in programming (Python, R, SQL) and experience building scalable analytics solutions.
• Proficiency in business intelligence tools (e.g., Tableau, Power BI) for creating dashboards and data visualizations.
• Experience integrating AI/ML models to drive predictive insights and automate workflows is a strong advantage.
Role and Responsibilities:
• Design and implement value enablement frameworks for Pricing, Pre-Sales Enablement, and Customer Success, aligning with Mastercard's growth strategies.
• Collaborate with global and regional stakeholders to ensure scalable solutions tailored to regional needs.
• Provide data-driven recommendations to optimize pricing, enhance pre-sales propositions, and ensure customer success.
• Develop project structures/frameworks and build/review presentations.
• Conduct data sanity and hygiene checks to ensure data integrity.
• Convert business problems into analytical problems for strategic development.
Technical Leadership:
• Develop and deploy advanced analytics tools like ROI calculators and value dashboards (a toy ROI sketch follows this list).
• Use Python, R, and SQL for data analysis, modeling, and tool development.
• Create dynamic dashboards and visualizations using business intelligence platforms.
• Integrate AI/ML models to enhance tool accuracy and efficiency.
• Drive process efficiency and scalability through automation and advanced analytics.
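To make the ROI-calculator idea concrete, a toy computation with invented figures; a production tool would pull these from real revenue and cost data:

```python
# Toy ROI calculator; the input figures are hypothetical.
def roi(gain: float, cost: float) -> float:
    """Return on investment as a fraction of cost."""
    return (gain - cost) / cost

annual_gain = 1_250_000.0   # hypothetical incremental revenue
program_cost = 800_000.0    # hypothetical program spend
print(f"ROI: {roi(annual_gain, program_cost):.1%}")  # -> ROI: 56.2%
```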
Value Enablement Initiatives:
• Build frameworks to measure and track customer value realization.
• Design tailored customer solutions and business cases with predictive models and real-time insights.
• Develop self-service analytics tools for actionable insights.
• Encourage participation in Sandbox challenges and hold regular brainstorming sessions for new ideas.
• Build agile and flexible solutions by understanding the core business and its context.
Revenue Optimization:
• Identify and implement revenue optimization opportunities through strategic analysis.
• Monitor performance metrics to align with revenue goals and identify improvement areas.
• Develop tools to track realized ROI and provide diagnostics for customer outcomes.
Collaboration & Team Enablement:
• Work closely with cross-functional teams to ensure seamless initiative execution.
• Foster a collaborative and innovative environment, encouraging knowledge sharing.
• Plan and lead working sessions with the team.
• Provide mentorship and feedback for personal and professional growth.
• Support training and enablement for internal teams on analytics tools and methodologies.
Skills:
• Strong technical acumen with the ability to design and implement advanced analytics and visualization solutions.
• Exceptional analytical and problem-solving skills, with a focus on deriving actionable insights from complex datasets.
• Excellent communication and stakeholder management skills, with the ability to translate technical insights into business impact.
• Deep understanding of pricing strategies, customer success enablement, and revenue optimization principles.
Education:
• Bachelor's degree in Data Science, Computer Science, Business Analytics, Economics, Finance, or a related field. MBA/advanced degrees or certifications in analytics, data science, or AI/ML are preferred.