8,291 Recommendation Systems jobs in India
Data Science & Machine Learning Lead
Posted today
Job Description
With excellent analytical and problem-solving skills, you should understand customers' business problems and translate them into a scope of work and technical specifications for Data Science projects. You will efficiently apply cutting-edge AI and Generative AI technologies to implement solutions for business problems. Good exposure to technology platforms for Data Science, AI, Gen AI, and cloud, with implementation experience, is expected, along with the ability to provide end-to-end technical solutions leveraging the latest AI and Gen AI tools and frameworks. This job requires the following:
- Designing, developing, and implementing end-to-end machine learning production pipelines (data exploration, sampling, training data generation, feature engineering, model building, and performance evaluation)
- Deep experience in predictive analytics and statistical modeling
- Deep experience successfully applying the following: Logistic Regression, Multivariate Regression, Support Vector Machines, Stochastic Processes, Decision Trees, Lifetime analysis, common clustering algorithms, Optimization, CNNs
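For illustration, the first algorithm on that list can be sketched in a few lines of plain Python. This is a toy single-feature logistic regression trained by stochastic gradient descent on made-up data, not a production implementation; all names and values here are invented:

```python
import math

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a one-feature logistic regression with plain stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # gradient of the log-loss with respect to w and b
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(w * x + b) >= 0.5 else 0

# toy, linearly separable data: label is 1 when x > 2
xs = [0.5, 1.0, 1.5, 2.5, 3.0, 3.5]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
print(predict(w, b, 1.0), predict(w, b, 3.2))  # 0 1
```

In practice this would be `sklearn.linear_model.LogisticRegression`; the point is only to show the loss gradient the role's candidates are expected to know.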
Essential Qualifications
B.Tech/BE in Computer Science/IT, MCA, or M.Sc. in Computer Science; relevant certifications are preferred
Technical Qualifications (Essential)
- At least 2 to 3 ML apps deployed in production
- At least 1 to 2 Gen AI apps in production.
- Experience in Computer vision is a must.
- Core ML experience is a must
- Hands-on programming experience
- Hands-on technical design experience
- Hands-on prompt engineering experience
- At least 3 Data Science, AI Projects designed, developed and delivered to production
- At least 1 Generative AI project designed, developed and delivered to production
Primary Skills
- Hands-on coding experience in Python, PyTorch, Spark/PySpark, SQL, TensorFlow, NLP frameworks, and similar tools/frameworks
- Good understanding of the business and domain of the applications
- Hands-on experience in design and development of Gen AI applications using open-source LLMs and cloud platforms
- Hands-on experience in design and development of API-based applications for AI and Data Science projects
- Expertise in GenAI concepts, RAG, and model fine-tuning techniques
- Understanding of the major AI model families (e.g., OpenAI, Llama, Mistral, models on Hugging Face), with implementation experience
- Understanding of DevOps pipelines for deployment
- Good understanding of the Data Engineering lifecycle: data pipelines, data warehouses, data lakes
Secondary Skills
- End-to-end data engineering project experience using Databricks and the Azure Data platform
- Knowledge of any configuration management tool is desirable
- Familiarity with containerization and container orchestration services such as Docker and Kubernetes
Functional & Technical Responsibilities
- Provide technical solutions to sales and pre-sales teams for proposals
- Develop and guide team members in implementing solutions for Data Science, AI, and Gen AI projects
- Expert knowledge of data modeling and understanding of different data structures
- Experience designing AI/ML solutions, either standalone or integrated with other applications
Data Science & Machine Learning Engineer
Posted today
Job Description
Total Experience: 4 years and above
Location: Bangalore/Chennai/Hyderabad
Notice Period: 15-30 days max
JOB DESCRIPTION
Join our fast‑growing team to build a unified platform for data analytics, machine learning, and generative AI. You'll integrate the AI/ML toolkit, real‑time streaming into a feature store, and dashboards—turning raw events into reliable features, insights, and user‑facing analytics at scale.
What you’ll do
- Design and build streaming data pipelines (exactly‑once or effectively‑once) from event sources into low‑latency feature serving and near-real-time (NRT) and OLAP queries.
- Develop an AI/ML toolkit: reusable libraries, SDKs, and CLIs for data ingestion, feature engineering, model training, evaluation, and deployment.
- Stand up and optimize a production feature store (schemas, SCD handling, point‑in‑time correctness, TTL/compaction, backfills).
- Expose features and analytics via well‑designed APIs/services; integrate with model serving and retrieval for ML/GenAI use cases.
- Build and operationalize Superset dashboards for monitoring data quality, pipeline health, feature drift, model performance, and business KPIs.
- Implement governance and reliability: data contracts, schema evolution, lineage, observability, alerting, and cost controls.
- Partner with UI/UX, data science, and backend teams to ship end‑to‑end workflows from data capture to real‑time inference and decisioning.
- Drive performance: benchmark and tune distributed DB (partitions, indexes, compression, merge settings), streaming frameworks, and query patterns.
- Automate with CI/CD, infrastructure‑as‑code, and reproducible environments for quick, safe releases.
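Point-in-time correctness, mentioned above for the feature store, means a training example at time t may only see feature values written at or before t. A minimal sketch of an as-of lookup, using a hypothetical in-memory store (entity ids, timestamps, and feature names below are invented) in place of a real feature store:

```python
import bisect

class PointInTimeStore:
    """Toy per-entity feature history with as-of lookups (stand-in for a feature store)."""
    def __init__(self):
        self._hist = {}  # entity_id -> (sorted timestamps, values)

    def write(self, entity_id, ts, value):
        times, vals = self._hist.setdefault(entity_id, ([], []))
        i = bisect.bisect_right(times, ts)  # keep history sorted by timestamp
        times.insert(i, ts)
        vals.insert(i, value)

    def as_of(self, entity_id, ts):
        """Latest value written at or before ts: no leakage from the future."""
        times, vals = self._hist.get(entity_id, ([], []))
        i = bisect.bisect_right(times, ts)
        return vals[i - 1] if i else None

store = PointInTimeStore()
store.write("user42", 100, {"clicks_7d": 3})
store.write("user42", 200, {"clicks_7d": 9})
print(store.as_of("user42", 150))  # {'clicks_7d': 3}
```

A query at t=150 returns the value written at t=100, never the later one at t=200; that is exactly the guarantee a point-in-time-correct training join needs.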
Tech you may use
Languages: Python, Java/Scala, SQL
Streaming/Compute: Kafka (or Pulsar), Spark, Flink, Beam
Storage/OLAP: ClickHouse (primary), object storage (S3/GCS), Parquet/Iceberg/Delta
Orchestration/Workflow: Airflow, dbt (for transformations), Makefiles/Poetry/pipenv
ML/MLOps: MLflow/Weights & Biases, KServe/Seldon, Feast/custom feature store patterns, vector stores (optional)
Dashboards/BI: Superset (plugins, theming), Grafana for ops
Platform: Kubernetes, Docker, Terraform, GitHub Actions/GitLab CI, Prometheus/OpenTelemetry
Cloud: AWS/GCP/Azure
What we’re looking for
- 4+ years building production data/ML or streaming systems with high TPS and large data volumes.
- Strong coding skills in Python and one of Java/Scala; solid SQL and data modeling.
- Hands‑on experience with Kafka (or similar), Spark/Flink, and OLAP stores—ideally ClickHouse.
- GenAI pipelines: retrieval‑augmented generation (RAG), embeddings, prompt/tooling workflows, model evaluation at scale.
- Proven experience designing feature pipelines with point‑in‑time correctness and backfills; understanding of online/offline consistency.
- Experience instrumenting Superset dashboards tied to ClickHouse for operational and product analytics.
- Fluency with CI/CD, containerization, Kubernetes, and infrastructure‑as‑code.
- Solid grasp of distributed systems and architecture fundamentals: partitioning, consistency, idempotency, retries, batching vs. streaming, and cost/perf trade‑offs.
- Excellent collaboration skills; ability to work cross‑functionally with DS/ML, product, and UI/UX.
- Ability to pass a CodeSignal prescreen coding test.
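The "effectively-once" and idempotency requirements above usually come down to deduplicating redelivered events. A minimal sketch, assuming at-least-once delivery and a hypothetical event shape with `id` and `amount` fields (a real system would persist the seen-id set, e.g. keyed state in Flink or a ClickHouse dedup table):

```python
class EffectivelyOnceConsumer:
    """Apply each event's side effect exactly once by deduplicating on event id."""
    def __init__(self):
        self.seen = set()   # processed event ids
        self.total = 0

    def handle(self, event):
        eid = event["id"]
        if eid in self.seen:    # redelivered duplicate: skip the side effect
            return False
        self.seen.add(eid)
        self.total += event["amount"]
        return True

c = EffectivelyOnceConsumer()
# at-least-once delivery: event "a" arrives twice
for e in [{"id": "a", "amount": 5}, {"id": "b", "amount": 7}, {"id": "a", "amount": 5}]:
    c.handle(e)
print(c.total)  # 12, not 17
```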
Grid Dynamics (Nasdaq:GDYN) is a digital-native technology services provider that accelerates growth and bolsters competitive advantage for Fortune 1000 companies. Grid Dynamics provides digital transformation consulting and implementation services in omnichannel customer experience, big data analytics, search, artificial intelligence, cloud migration, and application modernization. Grid Dynamics achieves high speed-to-market, quality, and efficiency by using technology accelerators, an agile delivery culture, and its pool of global engineering talent. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the US, UK, Netherlands, Mexico, India, Central and Eastern Europe.
To learn more about Grid Dynamics, please visit . Follow us on Facebook, Twitter, and LinkedIn.
--
Trainee Intern Data Science
Posted 12 days ago
Job Description
Company Overview – WhatJobs Ltd
WhatJobs is a global job search engine and career platform operating in over 50 countries. We leverage advanced technology and AI-driven tools to connect millions of job seekers with opportunities, helping businesses and individuals achieve their goals.
Position: Data Science Trainee/Intern
Location: Commercial Street
Duration: 3 Months
Type: Internship/Traineeship (with potential for full-time opportunities)
Role Overview
We are looking for enthusiastic Data Science trainees/interns eager to explore the world of data analytics, machine learning, and business insights. You will work on real-world datasets, apply statistical and computational techniques, and contribute to data-driven decision-making at WhatJobs.
Key Responsibilities
- Collect, clean, and analyze datasets to derive meaningful insights.
- Assist in building and evaluating machine learning models.
- Work with visualization tools to present analytical results.
- Support the team in developing data pipelines and automation scripts.
- Research new tools, techniques, and best practices in data science.
Requirements
- Basic knowledge of Python and data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn).
- Understanding of statistics, probability, and data analysis techniques.
- Familiarity with machine learning concepts.
- Knowledge of Google Data Studio and BigQuery for reporting and data management.
- Strong analytical skills and eagerness to learn.
- Good communication and teamwork abilities.
What We Offer
- Hands-on experience with real-world data science projects.
- Guidance and mentorship from experienced data professionals.
- Opportunity to work with a global technology platform.
- Certificate of completion and potential for full-time role.
Data Science
Posted today
Job Description
Data Science - Machine Learning - 7+ Years - Remote
We are looking for a professional Data Scientist with proficiency in Python and SQL or Azure.
Your Future Employer: You will be working with a prestigious organization known for its commitment to diversity, equality, and inclusion. It offers a dynamic work environment, opportunities for career growth, and a supportive team culture.
Location: Remote
Responsibilities
Superior analytical and problem-solving skills
High Proficiency in Python Coding along with good knowledge of SQL
Knowledge of using Python Libraries such as scikit-learn, scipy, pandas, numpy etc.
Proficient hands-on experience with NLP.
Deep-rooted knowledge of traditional Machine Learning algorithms, advanced modelling techniques (e.g., time-series forecasting and analysis), and text analytics techniques (NLTK, Gensim, LDA, etc.)
Must have hands-on experience building and deploying predictive models.
Requirements
Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas; candidates with BA/BS degrees in the same fields from top-tier academic institutions are also welcome to apply
7+ years of experience working as a Data Scientist
Strong and in-depth understanding of statistics, data analytics
Superior analytical and problem-solving skills
Outstanding written and verbal communication skills
What is in it for you:
A stimulating work environment with equal employment opportunity.
Work in a fast-paced environment in established brand.
Grow in a culture focused on training and mentoring.
Reach us: If this role aligns with your career, kindly write me an email along with your updated resume at for a confidential discussion on the role.
Disclaimer: Crescendo Global specializes in Senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Note: We receive a lot of applications daily, so it may not be possible to respond to each one individually. Please assume that your profile has not been shortlisted if you don't hear from us in a week. Thank you for your understanding.
Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at and report fraud immediately. Stay alert.
Profile Keywords: Data Science, Data Scientist, Python, SQL, Statistical Modelling, NLP, Machine Learning, Power BI, Stochastic Modelling, GLM/Regression
Data Science
Posted today
Job Description
Job description:
Silicon Interfaces is looking for Mumbai-based Data Science Engineers ( years) for its Artificial Intelligence and Machine Learning group in the Software Department: ( years) experienced as Team Members and ( years) experienced as Team Leads.
Outstation candidates without accommodation in Mumbai need not apply.
The ideal candidate will be responsible for developing high-quality Small Language Models (SML), Large Language Models (LLM), Shared Memory APIs, and Vertex DB for specific domains in Semiconductors. You will be required to work on Agents, Agentified AI Agents, and Roving Agents using standardized open models for deployment on industry and educational websites, and will also be responsible for designing and implementing testable and scalable code.
Roles and Responsibilities
- Develop quality software and models
- Analyze and maintain existing models
- Design highly scalable, testable code
- Discover and fix programming bugs
Desired Candidate Profile
- Bachelor's degree or equivalent experience in AI, AI/ML, Computer Science/Computer Engineering, or related field
- Development experience with programming languages
- SQL database or relational database skills
Skills Required: We are looking for intelligent, talented, hard-working Data Scientists who are willing to work across different technologies and languages: C#, Java, Python, and PHP.
Software Fundamentals
- Data Structures
- Software Design Methodologies (Waterfall/Agile)
Languages
- C#
- Python
AI/ML
- Data Analysis
- Colab
- TensorFlow
- Keras
- Hyperparameters
- Activation Functions
- Optimizers
- SML/LLM Models
- Agents
- Agentified AI
You don't have to know all of them; at least two of the skills, plus the ability and interest to adapt and learn, will do. It may seem like a lot to know, but the good news is you don't need to implement it all at the same time.
Silicon Interfaces' services have a global footprint: Software Services centers in North America, Europe, and Asia Pacific via VPN-based logins; in-person customer-site deployment in North America, Europe, and Asia Pacific (including India); and offshore projects from our state-of-the-art Software Development Centers based in Mumbai.
We are a small specialized IT Services company catering to online services as well as Customer Projects from USA. (Please check , and also , and ).
The job is based in Mumbai, India, and the company currently operates Work from Office (WfO).
If you would like to apply, please send an application email with your resume to
The email should have the subject "Data Science - AI ML Engineer positions at Silicon Interfaces", and the body should briefly explain your skills and experience.
Job Types: Full-time, Permanent, Fresher
Pay: ₹200,000.00 - ₹300,000.00 per year
Benefits:
- Health insurance
- Paid sick time
- Paid time off
- Provident Fund
Work Location: In person
Data Science
Posted today
Job Description
Job Role: Data Scientist (Sr. Consultant)
At Deloitte, we do not offer you just a job, but a career in the highly sought-after Risk Management field. We are the business leader in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Our clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security.
By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of risk domains. In the process, you will gain exposure to the risk-centric challenges faced by today's organizations across a range of industry sectors, become a subject matter expert in those areas, and develop into a well-rounded professional who not only has depth in a few risk domains but also breadth of exposure to a wide variety of them.
So, if you are someone who believes in disrupting through innovation and execution of ideas, Deloitte Risk and Financial Advisory is the place to be.
Work you'll do
The key job responsibilities will be to:
- Develop database schemas, tables and dictionaries
- Develop, implement and optimize stored procedures, functions and indexes
- Ensure the data quality and integrity in databases
- Create complex functions, scripts, stored procedures and triggers to support application development
- Fix any issues related to database performance and ensuring stability, reliability and security
- Design, create, and implement database systems based on the end user's requirements
- Prepare documentation for database applications
- Memory management for database systems
- Develop best practices for database design and development activities
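Several of the responsibilities above (indexes, query performance, troubleshooting) can be illustrated with a small sketch. SQLite stands in here for Microsoft SQL Server, and the table, column, and index names are made up; the point is checking the query plan before and after adding an index:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO trades (account, amount) VALUES (?, ?)",
    [("ACC%03d" % (i % 50), float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM trades WHERE account = 'ACC007'"

# Without an index, the WHERE filter forces a full table scan
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

conn.execute("CREATE INDEX idx_trades_account ON trades(account)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)  # the plan now searches via idx_trades_account
```

On SQL Server the equivalent workflow uses the estimated/actual execution plan instead of `EXPLAIN QUERY PLAN`, but the design question (which columns deserve an index, given the workload) is the same.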
The Team
Our Financial Technology practice develops and licenses a growing family of proprietary software products (see ) to assist financial institutions with a number of complex topics, such as accounting for credit deteriorated assets and the administration of investments in leveraged loans.
We are looking to add dedicated software engineers to our team. In addition to competitive compensation and benefits, we provide excellent opportunities for growth and learning and invest in our talent development.
Qualifications
Required:
- Bachelor's degree in computer science or related field
- At least 5 to 7 years of experience as a SQL developer, with strong understanding of Microsoft SQL Server database
- Strong experience with Python coding and libraries (Pandas, NumPy, PySpark etc.)
- Hands-on experience with machine learning algorithms and frameworks
- Understanding and implementation of AI and generative AI solutions
- Proficiency in data visualization & data analytics
- Knowledge of best practices when dealing with relational databases
- Capable of troubleshooting common database issues
- Familiar with tools that can aid with profiling server resource usage and optimizing it
- Knowledge in performance optimization techniques
- Excellent verbal and written communication
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code:
Data Science
Posted today
Job Description
Please note: this is a 3-month FULL-TIME INTERNSHIP
Role & responsibilities
- Train and fine-tune ML/LLM models
- Build scalable APIs with FastAPI
- Work on backend apps using Django
- Analyze datasets using pandas, NumPy, scikit-learn
- Collaborate with Full Stack Developers to deliver AI/ML features
- Eagerness to learn emerging AI/ML technologies and apply them to real-world problems
- Understand and support fine-tuning of pretrained models (e.g., GPT, LLaMA, Mistral, BERT) for specific use cases.
- Learn to implement RAG pipelines with vector databases (e.g., FAISS, Pinecone) for context-aware AI/ML solutions.
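The RAG retrieval step mentioned above can be sketched without any external services. A toy bag-of-words "embedding" and brute-force cosine search stand in for a real embedding model and a vector database such as FAISS or Pinecone; the documents and query below are invented:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "FastAPI builds high performance Python APIs",
    "Django is a batteries included web framework",
    "FAISS performs fast vector similarity search",
]
index = [(d, embed(d)) for d in docs]  # FAISS/Pinecone would hold these vectors

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda dv: cosine(q, dv[1]), reverse=True)
    return [d for d, _ in ranked[:k]]

# the retrieved context would be prepended to the LLM prompt ("context-aware" generation)
context = retrieve("vector similarity search")[0]
print(context)
```

Swapping `embed` for a real embedding model and the sorted scan for an approximate-nearest-neighbor index is what turns this sketch into a production RAG pipeline.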
Preferred candidate profile
- Python with strong OOP understanding
- Familiarity with AI frameworks like PyTorch, TensorFlow (Preferred)
- Exposure to ML tools (pandas, scikit-learn, NumPy)
- Strong understanding of Backend FastAPI / Django
- Interest in LLMs, APIs, and production-ready ML
- Good problem-solving, logical reasoning, and analytical skills
- Basic knowledge of Frontend ReactJS/Next.js and Dashboards
- Academic project experience or previous work experience in AI, ML, NLP, or data engineering
Data Science
Posted today
Job Description
Why Join Iris?
Are you ready to do the best work of your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that truly values your talent and ambitions?
Join Iris Software — one of the fastest-growing IT services companies — where you own and shape your success story.
About Us
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential.
With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services.
Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working with Us
At Iris, every role is more than a job — it's a launchpad for growth.
Our Employee Value Proposition,
"Build Your Future. Own Your Journey."
reflects our belief that people thrive when they have ownership of their career and the right opportunities to shape it.
We foster a culture where your potential is valued, your voice matters, and your work creates real impact. With cutting-edge projects, personalized career development, continuous learning and mentorship, we support you to grow and become your best — both personally and professionally.
Curious what it's like to work at Iris? Head to this video for an inside look at the people, the passion, and the possibilities. Watch it here.
Job Description
We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions.
Key Responsibilities
- Design, implement, and maintain data science pipelines from data ingestion to model deployment.
- Collaborate with data scientists to operationalize ML models and algorithms in production environments.
- Develop robust APIs and services for ML model inference and integration.
- Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools.
- Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring.
- Work with cloud infrastructure (AWS) for scalable ML deployment.
- Manage model versioning, feature engineering workflows, and experiment tracking.
- Optimize performance of models and pipelines for latency, cost, and throughput.
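Model versioning, one of the responsibilities above, can be illustrated with a minimal in-memory registry; real deployments would use MLflow or SageMaker Model Registry, and the parameter and metric names here are invented:

```python
import hashlib
import json

class ModelRegistry:
    """Minimal in-memory model registry: versioned params + metrics."""
    def __init__(self):
        self.versions = []

    def register(self, params, metrics):
        artifact = json.dumps(params, sort_keys=True).encode()
        self.versions.append({
            "version": len(self.versions) + 1,
            "hash": hashlib.sha256(artifact).hexdigest()[:12],  # content-addressed id
            "params": params,
            "metrics": metrics,
        })
        return self.versions[-1]["version"]

    def best(self, metric):
        """Pick the version with the highest value of a tracked metric."""
        return max(self.versions, key=lambda v: v["metrics"][metric])

reg = ModelRegistry()
reg.register({"max_depth": 3}, {"auc": 0.81})
reg.register({"max_depth": 6}, {"auc": 0.86})
print(reg.best("auc")["version"])  # 2
```

Hashing the serialized parameters gives each version a reproducible identity, which is the same idea MLflow uses to tie a deployed model back to the experiment run that produced it.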
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in a data science, ML engineering, or software engineering role.
- Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus.
- Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI.
- Solid understanding of data structures, algorithms, and software engineering best practices.
- Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery).
Preferred Qualifications
- Experience with containerization and orchestration (Docker, Kubernetes).
- Experience working in Agile or cross-functional teams.
- Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink).
Soft Skills
- Strong communication skills to bridge technical and business teams.
- Excellent problem-solving and analytical thinking.
- Self-motivated and capable of working independently or within a team.
- Passion for data and a curiosity-driven mindset.
Mandatory Competencies
Data Science and Machine Learning - Data Science and Machine Learning - AI/ML
Data Science and Machine Learning - Data Science and Machine Learning - Python
Database - Database Programming - SQL
Cloud - AWS - Tensorflow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
Data Science and Machine Learning - Data Science and Machine Learning - Pytorch
Data Science and Machine Learning - Data Science and Machine Learning - AWS Sagemaker
Tech - Data Structure and Algorithms
Programming Language - Java - Core Java (java 8+)
Programming Language - Scala - Scala
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
Agile - Agile - Extreme Programming
Middleware - Message Oriented Middleware - Messaging (JMS, ActiveMQ, RabbitMQ, Kafka, SQS, ASB, etc.)
Beh - Communication and collaboration
Perks And Benefits For Irisians
Iris provides world-class benefits for a personalized employee experience. These benefits are designed to support financial, health and well-being needs of Irisians for a holistic professional and personal growth. Click here to view the benefits.
Data Science
Posted today
Job Description
We are looking for a Data Science & Artificial Intelligence trainer: well educated, with a minimum of 2 years of training experience. It is a freelancing job with flexible timing.
Job Type: Full-time
Pay: ₹20,000.00 per month
Work Location: In person
Expected Start Date: 09/09/2025
Data Science
Posted today
Job Description
We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting, insights, and recommendations platform designed to measure and optimize online marketing campaigns. You will need a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning, and will focus on building scalable data pipelines, developing sophisticated ML models, and deploying solutions in production.
The ideal candidate should be comfortable working across data engineering, ML model lifecycle, and cloud-native technologies.
Job Description:
Key Responsibilities:
- Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
- Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect.
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
- Implement data modeling and feature engineering for ML use cases.
- Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis.
- Optimize models for scalability, efficiency, and interpretability.
- MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.
- Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
- Implement cost-efficient, serverless data solutions where applicable.
- Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
- Translate complex model insights into actionable business recommendations.
- Present findings and performance metrics to both technical and non-technical stakeholders.
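The ELT steps described above can be sketched as plain composable functions; in practice Airflow or Cloud Composer would schedule each step as a DAG task and the load target would be BigQuery. The campaign rows and field names below are invented:

```python
def extract():
    # in production: pull raw campaign events from an ads API or cloud storage
    return [{"campaign": "a", "clicks": 10, "cost": 5.0},
            {"campaign": "b", "clicks": 0, "cost": 2.0}]

def transform(rows):
    # feature engineering: cost-per-click, guarding the zero-click case
    return [{**r, "cpc": r["cost"] / r["clicks"] if r["clicks"] else None}
            for r in rows]

def load(rows, sink):
    # in production: write to BigQuery; here, append to an in-memory "warehouse"
    sink.extend(rows)
    return len(rows)

def run_pipeline():
    sink = []
    load(transform(extract()), sink)
    return sink

warehouse = run_pipeline()
print(warehouse[0]["cpc"])  # 0.5
```

Keeping each step a pure function of its inputs is what makes the pipeline easy to backfill and to retry idempotently once an orchestrator owns the scheduling.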
Qualifications & Skills:
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.
Must-Have Skills:
- Experience: 5-10 years with the mentioned skillset & relevant hands-on experience
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.
Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).
Location:
Bengaluru
Brand:
Merkle
Time Type:
Full time
Contract Type:
Permanent