14,189 AI jobs in India
Trainee Intern Data Science
Posted 10 days ago
Job Description
Company Overview – WhatJobs Ltd
WhatJobs is a global job search engine and career platform operating in over 50 countries. We leverage advanced technology and AI-driven tools to connect millions of job seekers with opportunities, helping businesses and individuals achieve their goals.
Position: Data Science Trainee/Intern
Location: Commercial Street
Duration: 3 Months
Type: Internship/Traineeship (with potential for full-time opportunities)
Role Overview
We are looking for enthusiastic Data Science trainees/interns eager to explore the world of data analytics, machine learning, and business insights. You will work on real-world datasets, apply statistical and computational techniques, and contribute to data-driven decision-making at WhatJobs.
Key Responsibilities
- Collect, clean, and analyze datasets to derive meaningful insights.
- Assist in building and evaluating machine learning models (see the illustrative sketch after this list).
- Work with visualization tools to present analytical results.
- Support the team in developing data pipelines and automation scripts.
- Research new tools, techniques, and best practices in data science.
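For illustration only, a minimal sketch (assuming Python with pandas and scikit-learn, as listed in the requirements below) of the kind of task described above: cleaning a dataset and evaluating a simple model. The file name and column names are hypothetical.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Hypothetical dataset with a binary "hired" label.
    df = pd.read_csv("applications.csv")
    df = df.dropna(subset=["experience_years", "skill_score", "hired"])  # basic cleaning

    X = df[["experience_years", "skill_score"]]
    y = df["hired"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = LogisticRegression().fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))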
Requirements
- Basic knowledge of Python and data science libraries (Pandas, NumPy, Matplotlib, Scikit-learn).
- Understanding of statistics, probability, and data analysis techniques.
- Familiarity with machine learning concepts.
- Knowledge of Google Data Studio and BigQuery for reporting and data management.
- Strong analytical skills and eagerness to learn.
- Good communication and teamwork abilities.
What We Offer
- Hands-on experience with real-world data science projects.
- Guidance and mentorship from experienced data professionals.
- Opportunity to work with a global technology platform.
- Certificate of completion and potential for full-time role.
Data Science
Posted today
Job Description
Company Description
FACE Prep is one of India's largest placement-focused skill development companies, specializing in job preparation for the tech sector. Since its inception in 2008, FACE Prep has helped millions of students kickstart their careers. The company offers a variety of programs, including masterclasses, self-paced courses, and workshops to help students acquire the skills needed for top-paying jobs in tech. FACE Prep's alumni work at leading tech companies like Google, Microsoft, Meta, Adobe, PayPal, and many more.
Role Description
This is a full-time, on-site role for an AI & ML Mentor (Faculty) located in Bengaluru. The AI & ML Mentor will be responsible for delivering quality education through workshops, bootcamps, and mentoring sessions. Day-to-day tasks include preparing instructional materials, guiding students through complex AI and ML concepts, and providing individual mentorship. The role involves staying updated with the latest advancements in AI and ML to ensure the curriculum remains current and effective.
Qualifications
- In-depth knowledge and expertise in Artificial Intelligence (AI) and Machine Learning (ML)
- Experience in developing and delivering instructional content, including workshops and bootcamps
- Proficiency in Python and related libraries such as TensorFlow, PyTorch, and scikit-learn
- Excellent communication and presentation skills
- Ability to mentor and guide students at different stages of learning
- Experience in the education sector or similar role is a plus
- Bachelor's or Master's degree in Computer Science, Data Science, or related field
Data Science
Posted today
Job Description
Job Role: Data Scientist (Sr. Consultant)
At Deloitte, we do not offer you just a job, but a career in the highly sought-after Risk Management field. We are the business leader in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Our clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security.
By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of risk domains. In the process, you will gain exposure to the risk-centric challenges faced by organizations across a range of industry sectors, become a subject matter expert in those areas, and develop into a well-rounded professional who has not only depth in a few risk domains but also breadth of exposure to a wide variety of risk domains.
So, if you are someone who believes in disrupting through innovation and execution of ideas, Deloitte Risk and Financial Advisory is the place to be.
Work you'll do
The key job responsibilities will be to:
- Develop database schemas, tables and dictionaries
- Develop, implement and optimize stored procedures, functions and indexes
- Ensure the data quality and integrity in databases
- Create complex functions, scripts, stored procedures and triggers to support application development
- Fix any issues related to database performance and ensuring stability, reliability and security
- Design, create, and implement database systems based on the end user's requirements
- Prepare documentation for database applications
- Memory management for database systems
- Develop best practices for database design and development activities
The Team
Our Financial Technology practice develops and licenses a growing family of proprietary software products (see ) to assist financial institutions with a number of complex topics, such as accounting for credit deteriorated assets and the administration of investments in leveraged loans.
We are looking to add dedicated software engineers to our team. In addition to competitive compensation and benefits, we provide excellent opportunities for growth and learning and invest in our talent development.
Qualifications
Required:
- Bachelor's degree in computer science or related field
- At least 5 to 7 years of experience as a SQL developer, with a strong understanding of Microsoft SQL Server databases
- Strong experience with Python coding and libraries (Pandas, NumPy, PySpark etc.)
- Hands-on experience with machine learning algorithms and frameworks
- Understanding and implementation of AI and generative AI solutions
- Proficiency in data visualization & data analytics
- Knowledge of best practices when dealing with relational databases
- Capable of troubleshooting common database issues
- Familiarity with tools that help profile and optimize server resource usage
- Knowledge in performance optimization techniques
- Excellent verbal and written communication
Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.
Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code:
Data Science
Posted today
Job Description
Please note: this is a 3-month FULL-TIME INTERNSHIP
Role & responsibilities
- Train and fine-tune ML/LLM models
- Build scalable APIs with FastAPI
- Work on backend apps using Django
- Analyze datasets using pandas, NumPy, scikit-learn
- Collaborate with Full Stack Developers to deliver AI/ML features
- Eagerness to learn emerging AI/ML technologies and apply them to real-world problems
- Understand and support the fine-tuning of pretrained models (e.g., GPT, LLaMA, Mistral, BERT) for specific use cases.
- Learn to implement RAG pipelines with vector databases (e.g., FAISS, Pinecone) for context-aware AI/ML solutions (a minimal retrieval sketch follows this list).
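As referenced above, a minimal sketch of the retrieval step of a RAG pipeline using FAISS. Random vectors stand in for a real embedding model (e.g., a sentence-transformer); the corpus and dimensions are illustrative.

    import numpy as np
    import faiss

    # Toy corpus; in a real pipeline these would be chunked documents.
    docs = [
        "FastAPI serves ML models behind HTTP endpoints.",
        "Django is a batteries-included web framework.",
        "FAISS performs fast nearest-neighbour search over vectors.",
    ]

    dim = 64
    rng = np.random.default_rng(0)
    # Stand-in embeddings; replace with a real embedding model in practice.
    doc_vecs = rng.normal(size=(len(docs), dim)).astype("float32")

    index = faiss.IndexFlatL2(dim)   # exact L2 index
    index.add(doc_vecs)              # index the corpus

    query_vec = rng.normal(size=(1, dim)).astype("float32")
    distances, ids = index.search(query_vec, 2)   # retrieve top-2 chunks
    context = [docs[i] for i in ids[0]]
    print(context)  # this context would be prepended to the LLM prompt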
Preferred candidate profile
- Python with strong OOP understanding
- Familiarity with AI frameworks like PyTorch, TensorFlow (Preferred)
- Exposure to ML tools (pandas, scikit-learn, NumPy)
- Strong understanding of backend frameworks (FastAPI / Django)
- Interest in LLMs, APIs, and production-ready ML
- Good problem-solving, logical reasoning, and analytical skills
- Basic knowledge of frontend (ReactJS/Next.js) and dashboards
- Academic project experience or previous work experience in AI, ML, NLP, or data engineering
Data Science
Posted today
Job Description
Why Join Iris?
Are you ready to do the best work of your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to grow in an award-winning culture that truly values your talent and ambitions?
Join Iris Software — one of the fastest-growing IT services companies — where you own and shape your success story.
About Us
At Iris Software, our vision is to be our client's most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential.
With over 4,300 associates across India, U.S.A, and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services.
Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.
Working with Us
At Iris, every role is more than a job — it's a launchpad for growth.
Our Employee Value Proposition, "Build Your Future. Own Your Journey.", reflects our belief that people thrive when they have ownership of their career and the right opportunities to shape it.
We foster a culture where your potential is valued, your voice matters, and your work creates real impact. With cutting-edge projects, personalized career development, continuous learning and mentorship, we support you to grow and become your best — both personally and professionally.
Curious what it's like to work at Iris? Head to this video for an inside look at the people, the passion, and the possibilities. Watch it here.
Job Description
We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions.
Key Responsibilities
- Design, implement, and maintain data science pipelines from data ingestion to model deployment.
- Collaborate with data scientists to operationalize ML models and algorithms in production environments.
- Develop robust APIs and services for ML model inference and integration (a minimal sketch follows this list).
- Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools.
- Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring.
- Work with cloud infrastructure (AWS) for scalable ML deployment.
- Manage model versioning, feature engineering workflows, and experiment tracking.
- Optimize performance of models and pipelines for latency, cost, and throughput.
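As a concrete illustration of the inference-service responsibility above, a minimal sketch of a FastAPI prediction endpoint. The model path and feature layout are hypothetical assumptions, not part of the role description.

    from fastapi import FastAPI
    from pydantic import BaseModel
    import joblib

    app = FastAPI()
    model = joblib.load("model.pkl")  # hypothetical pre-trained scikit-learn model

    class Features(BaseModel):
        values: list[float]  # flat feature vector

    @app.post("/predict")
    def predict(features: Features):
        # scikit-learn expects a 2-D array, so wrap the single sample.
        prediction = model.predict([features.values])[0]
        return {"prediction": float(prediction)}

    # Run with: uvicorn main:app --reload  (assuming this file is saved as main.py)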
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in a data science, ML engineering, or software engineering role.
- Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus.
- Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch.
- Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI.
- Solid understanding of data structures, algorithms, and software engineering best practices.
- Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery).
Preferred Qualifications
- Experience with containerization and orchestration (Docker, Kubernetes).
- Experience working in Agile or cross-functional teams.
- Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink).
Soft Skills
- Strong communication skills to bridge technical and business teams.
- Excellent problem-solving and analytical thinking.
- Self-motivated and capable of working independently or within a team.
- Passion for data and a curiosity-driven mindset.
Mandatory Competencies
Data Science and Machine Learning - Data Science and Machine Learning - AI/ML
Data Science and Machine Learning - Data Science and Machine Learning - Python
Database - Database Programming - SQL
Cloud - AWS - Tensorflow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
Data Science and Machine Learning - Data Science and Machine Learning - Pytorch
Data Science and Machine Learning - Data Science and Machine Learning - AWS Sagemaker
Tech - Data Structure and Algorithms
Programming Language - Java - Core Java (java 8+)
Programming Language - Scala - Scala
DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
Agile - Agile - Extreme Programming
Middleware - Message Oriented Middleware - Messaging (JMS, ActiveMQ, RabbitMQ, Kafka, SQS, ASB etc)
Beh - Communication and collaboration
Perks And Benefits For Irisians
Iris provides world-class benefits for a personalized employee experience. These benefits are designed to support financial, health and well-being needs of Irisians for a holistic professional and personal growth. Click here to view the benefits.
Data Science
Posted today
Job Description
Data Science - Machine Learning - 7+ Years - Remote
We are looking for a professional Data Scientist with proficiency in Python and SQL or Azure.
Your Future Employer: You will be working with a prestigious organization known for its commitment to diversity, equality, and inclusion. They offer a dynamic work environment, opportunities for career growth, and a supportive team culture.
Location: Remote
Responsibilities
- Superior analytical and problem-solving skills
- High proficiency in Python coding along with good knowledge of SQL
- Knowledge of Python libraries such as scikit-learn, SciPy, pandas, and NumPy
- Proficient hands-on experience with NLP (see the illustrative sketch after this list)
- Deep-rooted knowledge of traditional machine learning algorithms, advanced modelling techniques (e.g., time-series forecasting and analysis), and text analytics techniques (NLTK, Gensim, LDA, etc.)
- Hands-on experience building and deploying predictive models
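As referenced above, a minimal sketch of building and evaluating a simple predictive NLP model with scikit-learn (TF-IDF features plus logistic regression). The texts and labels are made up for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled data; a real project would load a proper corpus.
    texts = [
        "great product, works well",
        "terrible support, very slow",
        "excellent value for money",
        "broke after one day",
    ]
    labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["slow and disappointing"]))  # likely [0] on this toy data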
Requirements
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas; candidates with BA/BS degrees in the same fields from top-tier academic institutions are also welcome to apply
- 7+ years of experience working as a Data Scientist
- Strong and in-depth understanding of statistics and data analytics
- Superior analytical and problem-solving skills
- Outstanding written and verbal communication skills
What is in it for you:
- A stimulating work environment with equal employment opportunity.
- Work in a fast-paced environment with an established brand.
- Growth in a culture focused on training and mentoring.
Reach us: If this role is aligned with your career, kindly write me an email along with your updated resume at for a confidential discussion on the role.
Disclaimer: Crescendo Global specializes in Senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Note: We receive a lot of applications daily, so it may not be possible to respond to each one individually. Please assume that your profile has not been shortlisted if you don't hear from us in a week. Thank you for your understanding.
Scammers can misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at and report fraud immediately. Stay alert.
Profile Keywords: Data Science, Data Scientist, Python, SQL, Statistical Modelling, NLP, Machine Learning, Power BI, Stochastic Modelling, GLM / Regression
Data Science
Posted today
Job Description
Job description:
Silicon Interfaces is looking for Mumbai-based Data Science Engineers for its Artificial Intelligence and Machine Learning group in the Software Department, with openings for experienced Team Members and experienced Team Leads.
Outstation candidates without accommodation in Mumbai need not apply.
The ideal candidate will be responsible for developing high-quality Small Language Models (SML), Large Language Models (LLM), Shared Memory API, and Vertex DB for specific domains in Semiconductors. You will be required to work on Agents, Agentified AI Agents, and Roving Agents using standardized open models for deployment on industry and educational websites. You will also be responsible for designing and implementing testable and scalable code.
Roles and Responsibilities
- Develop quality software and models
- Analyze and maintain existing models
- Design highly scalable, testable code
- Discover and fix programming bugs
Desired Candidate Profile
- Bachelor's degree or equivalent experience in AI, AI/ML, Computer Science/Computer Engineering, or related field
- Development experience with programming languages
- SQL database or relational database skills
Skills Required: We are looking for intelligent, talented, hard-working Data Scientists who are willing to work across different technologies and languages: C#, Java, Python, and PHP.
Software Fundamentals
- Data Structures
- Software Design Methodologies (Waterfall/Agile)
Languages
- C#
- Python
AI/ML
- Data Analysis
- Colab
- TensorFlow
- Keras
- Hyperparameters
- Activation Functions
- Optimizers
- SML/LLM Models
- Agents
- Agentified AI
You don't have to know all of them; at least two of these skills, plus the ability and interest to adapt and learn, is enough. It may seem like a lot to know, but the good news is that you don't need to use them all at the same time.
Silicon Interfaces' services have a global footprint: Software Services centers in North America, Europe, and Asia Pacific via VPN-based logins, in-person customer-site deployment (North America, Europe, and Asia Pacific, including India), and offshore projects from our state-of-the-art Software Development Centers based in Mumbai.
We are a small, specialized IT services company catering to online services as well as customer projects from the USA. (Please check , and also , and ).
The job is based in Mumbai, India, and the company is currently working from office (WfO).
If you would like to apply, please send an application email with your resume to
The email subject should be: Data Science - AI ML Engineer positions at Silicon Interfaces
The body of the email should briefly describe your skills and experience.
Job Types: Full-time, Permanent, Fresher
Pay: ₹200,000.00 to ₹300,000.00 per year
Benefits:
- Health insurance
- Paid sick time
- Paid time off
- Provident Fund
Work Location: In person
Data Science
Posted today
Job Description
We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns.
We are seeking a highly skilled Data Scientist / ML Engineer with a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning to develop and deploy sophisticated models. The role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production to support a cutting-edge reporting, insights, and recommendations platform for measuring and optimizing online marketing campaigns.
The ideal candidate should be comfortable working across data engineering, ML model lifecycle, and cloud-native technologies.
Job Description:
Key Responsibilities:
- Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
- Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect (a minimal Airflow DAG sketch follows this list).
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
- Implement data modeling and feature engineering for ML use cases.
- Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis.
- Optimize models for scalability, efficiency, and interpretability.
- MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.
- Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
- Implement cost-efficient, serverless data solutions where applicable.
- Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
- Translate complex model insights into actionable business recommendations.
- Present findings and performance metrics to both technical and non-technical stakeholders.
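As referenced in the pipeline bullet above, a minimal sketch of an ELT orchestration DAG, assuming Apache Airflow 2.4+; the DAG name and task bodies are hypothetical placeholders.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw campaign data from the source API")

    def transform():
        print("clean and aggregate campaign metrics")

    def load():
        print("write results to the warehouse, e.g. BigQuery")

    with DAG(
        dag_id="marketing_elt",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> transform_task >> load_task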
Qualifications & Skills:
Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.
Must-Have Skills:
- Experience: 5-10 years with the mentioned skillset & relevant hands-on experience
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.
Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).
Location:
Bengaluru
Brand:
Merkle
Time Type:
Full time
Contract Type:
Permanent
Data Science
Posted today
Job Description
Data Science + Gen AI with the mandatory skills below.
Must Have: Agent Framework, RAG Framework, Chunking Strategies (a minimal sketch follows below), LLMs, AI on Cloud Services, Open-Source Frameworks like LangChain, LlamaIndex, Vector Databases, Token Management, Knowledge Graph, Vision
Experience Range: 4 to 6 years
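As referenced in the skills list above, a minimal sketch of one chunking strategy: fixed-size character windows with overlap, written in plain Python. The sizes are arbitrary illustrative defaults.

    def chunk_text(text: str, chunk_size: int = 500, overlap: int = 100) -> list[str]:
        # Split text into fixed-size windows with overlap so that
        # retrieved chunks keep some surrounding context.
        step = chunk_size - overlap
        return [text[start:start + chunk_size] for start in range(0, len(text), step)]

    document = "lorem ipsum " * 500   # placeholder for a long document
    print(len(chunk_text(document)))  # number of chunks produced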
Requirements
Major Duties & Responsibilities
- Work with business stakeholders and cross-functional SMEs to deeply understand business context and key business questions
- Create Proofs of Concept (POCs) / Minimum Viable Products (MVPs), then guide them through to production deployment and operationalization of projects
- Influence machine learning strategy for Digital programs and projects
- Make solution recommendations that appropriately balance speed to market and analytical soundness
- Explore design options to assess efficiency and impact, develop approaches to improve robustness and rigor
- Develop analytical / modelling solutions using a variety of commercial and open-source tools (e.g., Python, R, TensorFlow)
- Formulate model-based solutions by combining machine learning algorithms with other techniques such as simulations.
- Design, adapt, and visualize solutions based on evolving requirements and communicate them through presentations, scenarios, and stories.
- Create algorithms to extract information from large, multiparametric data sets.
- Deploy algorithms to production to identify actionable insights from large databases.
- Compare results from various methodologies and recommend optimal techniques.
- Develop and embed automated processes for predictive model validation, deployment, and implementation
- Work on multiple pillars of AI including cognitive engineering, conversational bots, and data science
- Ensure that solutions exhibit high levels of performance, security, scalability, maintainability, repeatability, appropriate reusability, and reliability upon deployment
- Lead discussions at peer review and use interpersonal skills to positively influence decision making
- Provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts; make impactful contributions to internal discussions on emerging practices
- Facilitate cross-geography sharing of new ideas, learnings, and best practices
Required Qualifications
- Bachelor of Science or Bachelor of Engineering at a minimum.
- 4-6 years of work experience as a Data Scientist
- A combination of business focus, strong analytical and problem-solving skills, and programming knowledge to be able to quickly cycle hypotheses through the discovery phase of a project
- Advanced skills with statistical/programming software (e.g., R, Python) and data querying languages (e.g., SQL, Hadoop/Hive, Scala)
- Good hands-on skills in both feature engineering and hyperparameter optimization
- Experience producing high-quality code, tests, and documentation
- Experience with Microsoft Azure or AWS data management tools such as Azure Data Factory, Data Lake, Azure ML, Synapse, and Databricks
- Understanding of descriptive and exploratory statistics, predictive modelling, evaluation metrics, decision trees, machine learning algorithms, optimization & forecasting techniques, and/or deep learning methodologies
- Proficiency in statistical concepts and ML algorithms
- Good knowledge of Agile principles and processes
- Ability to lead, manage, build, and deliver customer business results through data scientists or a professional services team
- Ability to share ideas in a compelling manner and to clearly summarize and communicate data analysis assumptions and results
- Self-motivated and a proactive problem solver who can work independently and in teams
Work with one of the Big 4 firms in India.
Data Science
Posted today
Job Description
About The Business -
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components.
Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled state-of-the-art Semiconductor Foundry. This facility will produce chips for applications such as power management IC, display drivers, microcontrollers (MCU) and high-performance computing logic, addressing the growing demand in markets such as automotive, computing and data storage, wireless communications and artificial intelligence.
Tata Electronics is a subsidiary of the Tata group. The Tata Group operates in more than 100 countries across six continents, with the mission 'To improve the quality of life of the communities we serve globally, through long term stakeholder value creation based on leadership with Trust.'
Job Responsibilities -
- Develop and implement advanced machine learning models to solve complex business problems, translating analytical findings into tangible business solutions.
- Develop computer vision algorithms and deploy them to production environments, ensuring seamless integration with existing systems and infrastructure (a minimal model sketch follows this list).
- Understand business requirements by working closely with domain experts to define problem statements and success metrics that align with organizational goals.
- Collaborate with data engineers to design and implement efficient data pipelines that support model development, training, and deployment.
- Analyse large datasets to extract meaningful insights, identify patterns, and drive data-informed decision making across the organization.
- Work with the parent company's (PSMC) AI team to understand use cases, align strategies, and leverage existing resources and expertise.
- Optimize and fine-tune machine learning models for scalability, ensuring they can handle increased data volumes and maintain performance under varying conditions.
- Monitor model performance post-production deployment, implementing systems to track key metrics and ensure adherence to established performance standards.
- Ensure data quality and integrity by implementing rigorous validation processes and addressing data gaps or inconsistencies before model development.
- Communicate technical findings to non-technical stakeholders through clear visualizations, reports, and presentations that translate complex analysis into actionable insights.
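Purely as an illustration of the computer-vision work referenced above, a minimal self-contained PyTorch model; the architecture, input size, and class count are hypothetical, and random tensors stand in for real image data.

    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        # Toy CNN for illustration; production vision models would be deeper.
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TinyCNN()
    dummy = torch.randn(4, 1, 64, 64)   # batch of 4 single-channel 64x64 images
    print(model(dummy).shape)           # torch.Size([4, 2])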
Essential Attributes -
- Willingness to work in the semiconductor domain and build cutting-edge AI/ML systems.
- Proficiency in computer vision and Python programming.
- Ability to understand and debug code.
- Understanding of Linux OS.
- Experience working with large datasets and improving data pipelines.
- Deep understanding of the mathematics behind CNNs, RNNs, and DNNs.
- Experience in computer vision model deployments.
- Excellent teamwork skills to work closely with Manufacturing, Quality, and IT teams.
- Ability to communicate technical issues and solutions effectively to cross-functional teams.
- Enthusiasm for learning new AI/ML technologies and tools.
Qualifications -
BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics.
Experience -
2 - 5 years