12,336 AI Models jobs in India

GPU Kernel Developer - AI Models

Bengaluru, Karnataka Advanced Micro Devices, Inc

Posted today

Job Description

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

GPU Kernel Developer - AI Models

THE ROLE:
AMD is looking for a GPU kernel development engineer who is talented in developing high-performance kernels for state-of-the-art and upcoming GPU hardware. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:
Experienced in GPU kernel development and optimization for AI/HPC applications. Strong technical and analytical skills in GPU computing and hardware architecture, with a deep understanding of HIP/CUDA/OpenCL/Triton development. Able to work as part of a team, deliver to project scope, and communicate with technical and non-technical audiences.

KEY RESPONSIBILITIES:
- Develop high-performance GPU kernels for key AI operators on AMD GPUs.
- Optimize GPU code using a structured, disciplined methodology: profile to identify gaps, run roofline analysis on hardware, identify a key set of optimizations, establish the expected uplift and line of sight, then prototype and develop the optimizations.
- Support mission-critical workloads in NLP/LLM, recommendation, vision and audio.
- Collaborate with system-level performance architects, GPU hardware specialists, power/clock tuning teams, performance validation teams, and performance marketing teams to analyze and optimize AI training and inference.
- Work with open-source framework maintainers to understand their requirements and have your code changes integrated upstream.
- Debug, maintain and optimize GPU kernels; understand and drive AI operator performance (GEMM, attention, distributed scale-up/scale-out communication, etc.).
- Apply your knowledge of software engineering best practices.

PREFERRED EXPERIENCE:
- Knowledge of GPU computing (HIP, CUDA, OpenCL, Triton).
- Knowledge of and experience in optimizing GPU kernels.
- Expertise in using profiling and debugging tools.
- Core understanding of GPU hardware.
- Excellent C/C++/Python programming and software design skills, including debugging, performance analysis, and test design.

ACADEMIC CREDENTIALS:
Master's or PhD, or equivalent experience, in Computer Science, Computer Engineering, or a related field.

#LI-PK1

Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
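As an orientation aid for the kernel languages named above (HIP, CUDA, OpenCL, Triton), here is a minimal, generic Triton vector-add kernel. It is an editorial sketch only, not AMD code, and the block size of 1024 is an arbitrary assumption.

```python
# Minimal Triton vector-add kernel (illustrative sketch only).
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                       # one program instance per block of elements
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                       # guard the tail of the array
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)   # block size chosen arbitrarily for this sketch
    return out
```

Production operator work of the kind described in the responsibilities (GEMM, attention, collective communication) layers tiling, memory-hierarchy management, and roofline-guided tuning on top of this basic launch-and-mask pattern.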

Applied Research Scientist - AI Models

Bengaluru, Karnataka Advanced Micro Devices, Inc

Posted today

Job Description

WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover that the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:
The AI Models team is looking for exceptional machine learning scientists and engineers to explore and innovate on training and inference techniques for large language models (LLMs), large multimodal models (LMMs), image/video generation and other foundation models. You will be part of a world-class research and development team focusing on efficient and scalable pre-training, instruction tuning, alignment and optimization. As an early member of the team, you can help us shape the direction and strategy to fulfill this important charter.

THE PERSON:
This role is for you if you are passionate about reading the latest literature, coming up with novel ideas, and implementing them in high-quality code to push the boundaries of scale and performance. The ideal candidate will have both theoretical expertise and hands-on experience developing LLMs, LMMs, and/or diffusion models. We are looking for someone familiar with hyperparameter tuning methods, data preprocessing and encoding techniques, and distributed training approaches for large models.

KEY RESPONSIBILITIES:
- Pre-train and post-train models on large GPU clusters while optimizing for various trade-offs.
- Improve upon the state of the art in generative AI model architectures, data and training techniques.
- Accelerate training and inference speed across AMD accelerators.
- Build agentic frameworks to solve various kinds of problems.
- Publish your research at top-tier conferences and workshops and/or through technical blogs.
- Engage with academia and open-source ML communities.
- Drive continuous improvement of the infrastructure and development ecosystem.

PREFERRED EXPERIENCE:
- Strong development and debugging skills in Python.
- Experience with deep learning frameworks (such as PyTorch or TensorFlow) and distributed training tools (such as DeepSpeed or PyTorch Distributed).
- Experience with fine-tuning methods (such as RLHF and DPO) as well as parameter-efficient techniques (such as LoRA and DoRA).
- Solid understanding of various types of transformers and state space models.
- Strong publication record in top-tier conferences, workshops or journals.
- Solid communication and problem-solving skills.
- Passion for learning new developments in this domain and innovating on top of them.

ACADEMIC CREDENTIALS:
An advanced degree (Master's or PhD) in machine learning, computer science, artificial intelligence, or a related field is expected. Exceptional Bachelor's degree candidates may also be considered.

#LI-NS2

Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.
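Since the preferred experience above names parameter-efficient techniques such as LoRA, here is a minimal sketch of attaching LoRA adapters to a causal language model with the Hugging Face peft library. It is an editorial illustration, not AMD's workflow; the base checkpoint and hyperparameters are placeholder assumptions.

```python
# Illustrative LoRA fine-tuning setup (sketch only; model name and
# hyperparameters are placeholder assumptions).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "facebook/opt-350m"  # placeholder base model used purely for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # only the adapter weights are trainable
```

The adapted model can then be trained with a standard loop or trainer (optionally under DeepSpeed or PyTorch Distributed, both mentioned above) while the frozen base weights are shared across workers.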

Principal Product Manager - Technical - CI/CD; Microservices, AI models, Cloud

Pune, Maharashtra Mastercard

Posted 18 days ago

Job Description

**Our Purpose**
_Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we're helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential._
**Title and Summary**
Principal Product Manager - Technical - CI/CD; Microservices, AI models, Cloud
Job Summary
Mastercard is seeking a strategic and technically adept Principal Product Manager to join the AIDPE group within Services. This role is pivotal in driving technology transformation initiatives by integrating cross-functional expertise across Architecture, Engineering, Technical Program Management, and Finance. The ideal candidate will define vision, establish OKRs, and develop actionable roadmaps that align with Mastercard's mission to power an inclusive, digital economy.
___
Key Responsibilities
- Lead strategic technology transformation business cases in collaboration with Architecture, Engineering, TPMs, and Finance.
- Facilitate OKR definition workshops and manage the full OKR lifecycle using Aha!
- Conduct innovation sessions aligned to OKRs; apply prioritization frameworks and develop solution whitepapers.
- Create and manage Aha! Initiatives and EPICs to deliver iteratively against OKRs.
- Collaborate with Engineering PMTs for Epic elaboration, feature breakdown, and roadmap refinement.
- Run Epic refinement sessions and support PI slotting and project planning.
- Monitor delivery progress, manage risks, and ensure value realization through demos, UAT, and feedback loops.
- Build iterative solution delivery roadmaps and align scope and schedules with Mastercard Technology.
- Develop strategic business cases and ROM estimates for transformation initiatives.
___
Required Qualifications
- Bachelor's degree in Information Technology, Computer Science, Management Information Systems, or equivalent experience.
- Proven ability to lead in a matrixed environment with autonomy.
- Strong problem-solving skills using both quantitative and qualitative methods.
- Experience in agile delivery methodologies (Scrum, Kanban) and CI/CD.
- Proficiency in cloud technologies (IaaS, PaaS, serverless), microservices, NoSQL databases, and distributed systems.
- Familiarity with AI/ML technologies.
- Ability to use data and metrics to support assumptions and business cases.
- Knowledge of the financial services industry, especially retail banking and payments.
___
Preferred Qualifications
- Advanced degree in a technical or business discipline.
- Experience with Aha! and product scorecard frameworks.
- Strong communication and stakeholder management skills.
- Experience in strategic planning and OKR lifecycle management.
**Corporate Security Responsibility**
All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must:
+ Abide by Mastercard's security policies and practices;
+ Ensure the confidentiality and integrity of the information being accessed;
+ Report any suspected information security violation or breach, and
+ Complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Staff Engineer/Tech Lead - AI/ML [ Natural Language Processing, Transformers, Gen AI, LLM, Neural...

Bangalore, Karnataka Nutanix

Posted 18 days ago

Job Description

**Hungry, Humble, Honest, with Heart.**
**The Opportunity**
We are reimagining observability at Nutanix with **Panacea.ai**, our next-gen AI-driven log and metrics analyzer. In version 1.0, we leveraged regex-based filters to surface anomalies. Now, we're building **Panacea.ai**, powered by **AI/ML, ModernBERT, and LLMs**, to deliver intelligent, context-rich anomaly detection, automated root cause analysis (Auto-RCA), and continuous learning from user feedback.

As a **Staff Engineer (MTS-6)**, you will **own the architecture and AI/ML systems that power both log and metrics analysis**, enabling automated diagnostics and reducing triage time for QA failures, regression runs, and customer issues. You'll also help define and drive the central AI charter at Nutanix, building reusable components, model infrastructure, and scalable ML services.
**About the Team**
The **Panacea** team has a passionate set of engineers across our India and US offices. We move fast, collaborate closely, and care deeply about quality and ownership. Our mission is to deliver **AI/ML-powered developer productivity tools** that solve real engineering and support pain points at scale.
Why Join Us
+ Build **AI-first observability tools** that redefine how engineers triage and troubleshoot.
+ Own systems that reduce hours of manual work in **engineering and SRE workflows**.
+ Collaborate with a **tight-knit team of high-ownership engineers** who are passionate about impact and innovation.
+ Hybrid work model that supports flexibility and deep focus.
+ Help shape the **central AI charter** at Nutanix and influence future AI products across the company.
**Your Role**
+ **AI-Powered Observability Platform**: Own the vision, architecture, and delivery of Panacea's ML-based log and metrics analyzer that reduces triage time and improves engineering efficiency.
+ **AI/ML-powered Log Analyzer Tool**: Use deep learning (e.g., **ModernBERT**) to represent log messages, detect anomalies, group patterns, and surface actionable insights to users (a rough, generic sketch of this idea follows after this list).
+ **Metrics Anomaly Detection Engine**: Build robust ML models to detect anomalies in time-series metrics like **CPU, memory, disk I/O, network traffic, service health**, and more, automatically identifying performance degradation or system regressions across distributed environments.
+ **Auto-RCA Engine**: Combine log and metrics signals with graph-based correlation and LLM-powered summarization to automatically diagnose the root cause of system failures.
+ **Feedback Loop & Continuous Learning**: Build infrastructure for incorporating user feedback to continuously retrain and improve anomaly detection systems.
+ **LLM Integration**: Integrate LLMs for user queries, problem summarization, anomaly explanation, and contextual recommendations.
+ **Central AI Charter**: Contribute to Nutanix's foundational AI platform by defining shared tooling, datasets, governance, and reusable ML components across products.
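As referenced in the Log Analyzer bullet above, here is a minimal, generic sketch of representing log lines with a ModernBERT-style encoder and ranking outliers by distance from the batch centroid. It is an editorial illustration, not Nutanix's pipeline; the checkpoint name, the mean-pooling choice, and the sample logs are assumptions.

```python
# Hedged sketch: embed log lines with a BERT-style encoder and rank outliers.
# Checkpoint name is an assumed public ModernBERT release (needs a recent
# transformers version); swap in any sentence encoder with the same interface.
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "answerdotai/ModernBERT-base"   # assumption, not a Nutanix artifact
tok = AutoTokenizer.from_pretrained(MODEL)
enc = AutoModel.from_pretrained(MODEL)

logs = [
    "INFO  scheduler: task 42 completed in 1.2s",
    "INFO  scheduler: task 43 completed in 1.1s",
    "ERROR disk: I/O timeout on /dev/sdb after 30000ms",
]

with torch.no_grad():
    batch = tok(logs, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state              # [batch, seq, dim]
    mask = batch["attention_mask"].unsqueeze(-1)
    emb = (hidden * mask).sum(1) / mask.sum(1)            # mean-pooled line embeddings

centroid = emb.mean(0, keepdim=True)
dist = 1 - torch.nn.functional.cosine_similarity(emb, centroid)
for line, d in zip(logs, dist):
    print(f"{d.item():.3f}  {line}")                      # larger distance = more unusual line
```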
Responsibilities
+ Architect and scale ML pipelines for **real-time and batch-based anomaly detection** in both logs and time-series metrics.
+ Build and fine-tune **ModernBERT** and other transformer-based models for log understanding, anomaly classification, and summarization.
+ Develop unsupervised and semi-supervised ML models for **detecting anomalies in system metrics** (CPU, memory, network throughput, latency, etc.).
+ Implement correlation models to connect anomalies across logs and metrics to form a cohesive RCA narrative.
+ Own the entire ML lifecycle: data ingestion, feature extraction, model training, evaluation, deployment, and monitoring.
+ Build explainable AI systems that increase adoption and trust within engineering, QA, and support teams.
+ Collaborate with cross-functional stakeholders (SRE, QA, Dev) to deeply understand pain points and translate them into intelligent tooling.
+ Drive technical excellence through code and design reviews, mentoring, and setting engineering best practices.
**What You Will Bring**
+ **Educational Background**: B.Tech/M.Tech in Computer Science, Machine Learning, AI, or related fields.
+ **Experience**: 12+ years of engineering experience, including designing, developing, and deploying AI/ML systems at scale.
+ **ML Expertise**:
+ Strong in time-series anomaly detection, statistical modeling, and supervised/unsupervised learning.
+ Experience building ML models for **metrics data** (CPU, memory, IOPS, network, etc.) using models like Isolation Forest, Prophet, LSTM, or deep autoencoders (see the sketch after this list).
+ Expertise in NLP using **ModernBERT, BERT, or similar models** for log classification, clustering, and summarization.
+ Experience with LLMs for downstream tasks like summarization, root cause reasoning, or intelligent Q&A.
+ **Engineering Skills**: Strong Python background, hands-on with ML libraries (PyTorch, TensorFlow, Scikit-learn), time-series frameworks, and MLOps tools. Familiar with data pipelines and serving models.
+ **Observability Knowledge**: Hands-on with logs, metrics, traces, and popular monitoring tools (e.g., Prometheus, Grafana, ELK).
+ **Leadership**: Ability to independently drive projects from requirements to delivery, mentor junior engineers, and deliver business impact.
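As referenced in the metrics-data bullet above, here is a minimal, generic Isolation Forest sketch for flagging anomalous host-metric samples with scikit-learn. The synthetic data, feature choice, and contamination rate are illustrative assumptions rather than the team's actual setup.

```python
# Hedged sketch: unsupervised anomaly detection on host metrics with an
# Isolation Forest. Data and contamination rate are synthetic assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: cpu_util (%), mem_util (%), disk_iops
normal = np.column_stack([
    rng.normal(35, 5, 1000),
    rng.normal(60, 8, 1000),
    rng.normal(400, 50, 1000),
])
spikes = np.array([[97.0, 95.0, 2500.0],    # injected incident-like samples
                   [99.0, 30.0, 3000.0]])
X = np.vstack([normal, spikes])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = clf.predict(X)             # -1 = anomaly, 1 = normal
scores = clf.score_samples(X)       # lower score = more anomalous
print("flagged rows:", np.where(labels == -1)[0][:10])
```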
**Work Arrangement**
Hybrid: This role operates in a hybrid capacity, blending the benefits of remote work with the advantages of in-person collaboration. For most roles, that will mean coming into an office a minimum of 2-3 days per week; however, certain roles and/or teams may require more frequent in-office presence. Additional team-specific guidance and norms will be provided by your manager.
We're an Equal Opportunity Employer Nutanix is an Equal Employment Opportunity and (in the U.S.) an Affirmative Action employer. Qualified applicants are considered for employment opportunities without regard to race, color, religion, sex, sexual orientation, gender identity or expression, national origin, age, marital status, protected veteran status, disability status or any other category protected by applicable law. We hire and promote individuals solely on the basis of qualifications for the job to be filled. We strive to foster an inclusive working environment that enables all our Nutants to be themselves and to do great work in a safe and welcoming environment, free of unlawful discrimination, intimidation or harassment. As part of this commitment, we will ensure that persons with disabilities are provided reasonable accommodations. If you need a reasonable accommodation, please let us know by contacting

    Principal AI Engineer - Generative Models

    560001 Bangalore, Karnataka ₹140000 Annually WhatJobs

    Posted 4 days ago

    Job Description

    full-time
    Our client is seeking a highly experienced and visionary Principal AI Engineer to lead the development of cutting-edge generative models in Bengaluru, Karnataka, IN. This senior role will be pivotal in shaping the future of AI innovation, focusing on creating advanced models capable of generating novel content, data, and solutions. The Principal AI Engineer will be responsible for architecting, developing, and deploying sophisticated generative AI systems, including large language models (LLMs), diffusion models, and GANs. You will translate complex business challenges into AI-driven solutions, leading the research and implementation of state-of-the-art techniques. Key responsibilities include designing robust ML pipelines, optimizing model performance, and ensuring scalability and efficiency of AI systems.

    The ideal candidate will possess a Master's or PhD in Computer Science, AI, Machine Learning, or a closely related field, with a strong portfolio of contributions to generative AI research and development. Extensive experience with Python and ML frameworks such as TensorFlow, PyTorch, and JAX is required. A deep understanding of mathematical principles underpinning AI, including linear algebra, calculus, and probability, is essential. You must have proven experience in building and training large-scale generative models and a solid grasp of MLOps practices. Excellent problem-solving, analytical, and communication skills are crucial for collaborating with cross-functional teams and mentoring junior engineers.

    This role demands a proactive, innovative mindset and the ability to stay abreast of the rapidly evolving AI landscape. We are looking for a technical leader who can drive significant advancements in generative AI, delivering impactful products and solutions for our client. This is an exceptional opportunity to work at the forefront of AI technology and influence its future direction.

    Lead AI Researcher - Generative Models

    390007 Vadodara, Gujarat ₹1800000 Annually WhatJobs

    Posted 4 days ago

    Job Description

    full-time
    Our client is at the forefront of innovation in AI and Emerging Technologies, seeking a brilliant Lead AI Researcher specializing in Generative Models. This is a unique opportunity to shape the future of AI by pushing the boundaries of what's possible. You will lead a team of talented researchers and engineers in developing groundbreaking generative AI solutions. Your responsibilities will include defining research roadmaps, designing and implementing novel algorithms, and publishing findings in top-tier academic conferences and journals. This role requires a deep understanding of machine learning, deep learning, natural language processing, and computer vision, with a particular emphasis on transformer architectures, diffusion models, and GANs.

    Key Responsibilities:
    • Lead research initiatives in generative AI, focusing on areas like text-to-image, text-to-video, content generation, and data augmentation.
    • Develop and implement state-of-the-art generative models and algorithms.
    • Mentor and guide a team of AI researchers and engineers, fostering a culture of innovation and scientific rigor.
    • Collaborate with product teams to translate research breakthroughs into practical applications.
    • Stay abreast of the latest advancements in AI and machine learning through continuous learning and participation in the research community.
    • Design and conduct experiments to evaluate model performance and identify areas for improvement.
    • Publish research findings in leading AI conferences (e.g., NeurIPS, ICML, ICLR) and journals.
    • Contribute to the intellectual property portfolio through patents and publications.
    • Present research findings to both technical and non-technical audiences.
    • Contribute to the strategic direction of the company's AI research efforts.

    Qualifications:
    • Ph.D. or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field with a strong research focus.
    • 5+ years of research experience in AI, with a proven track record of publications in top venues.
    • Deep expertise in generative models (e.g., GANs, VAEs, Diffusion Models, Transformers).
    • Proficiency in programming languages such as Python and deep learning frameworks (e.g., TensorFlow, PyTorch).
    • Experience with large-scale dataset manipulation and distributed training.
    • Strong analytical, problem-solving, and critical thinking skills.
    • Excellent leadership and team management abilities.
    • Outstanding communication and presentation skills.
    • Ability to work effectively in a hybrid work environment, balancing remote and in-office collaboration.
    • Demonstrated ability to lead complex research projects from conception to completion.

    This role is based in Vadodara, Gujarat, and offers a competitive salary, comprehensive benefits, and the opportunity to work on cutting-edge AI challenges. Our client is committed to fostering an inclusive and collaborative research environment where groundbreaking ideas can flourish. Join us and help define the next generation of artificial intelligence.

    Principal AI Engineer - Generative Models

    201301 Noida, Uttar Pradesh ₹3000000 Annually WhatJobs

    Posted 4 days ago

    Job Description

    full-time
    Our client is at the cutting edge of artificial intelligence innovation and is actively seeking a highly accomplished Principal AI Engineer specializing in Generative Models. This fully remote position is ideal for a seasoned professional eager to spearhead the development of next-generation AI applications. You will lead research and development efforts in areas such as large language models (LLMs), diffusion models, and other generative AI techniques. Your primary responsibility will be to design, build, and deploy sophisticated generative AI systems that solve complex challenges and create novel user experiences. Key duties include conducting advanced research into generative algorithms, developing scalable AI architectures, and implementing robust training pipelines for massive datasets. You will collaborate closely with a world-class team of researchers and engineers, translating scientific breakthroughs into practical, production-ready solutions. A strong publication record in top-tier AI conferences or journals is highly valued.

    The ideal candidate will possess deep expertise in machine learning frameworks (e.g., TensorFlow, PyTorch), advanced programming skills (Python), and a thorough understanding of deep learning, NLP, and computer vision. Experience with distributed training, MLOps, and cloud platforms (AWS, GCP, Azure) is essential. You will mentor junior engineers, contribute to technical strategy, and stay at the forefront of AI advancements. This role demands exceptional problem-solving abilities, a creative mindset, and the capacity to lead complex, long-term projects with minimal supervision. The ability to clearly articulate technical concepts and influence technical direction is crucial.

    This role is exclusively remote, providing the ultimate flexibility, with the understanding that contributions will be made to projects based out of regions including Noida, Uttar Pradesh, IN.

    Principal AI Researcher - Generative Models

    122001 Gurgaon, Haryana ₹3000000 Annually WhatJobs

    Posted 4 days ago

    Job Description

    full-time
    Our client is at the forefront of AI innovation, pushing the boundaries of machine learning and artificial intelligence. We are seeking a brilliant and highly accomplished Principal AI Researcher specializing in Generative Models to join our esteemed research team. This role, based in **Gurugram, Haryana, IN**, offers a unique opportunity to conduct groundbreaking research, develop novel algorithms, and contribute to the next generation of AI-powered products and services. You will lead ambitious research projects, collaborate with world-class scientists, and publish your findings in top-tier academic conferences and journals. The ideal candidate possesses a deep theoretical understanding of AI and machine learning, combined with practical experience in implementing and optimizing advanced generative models.

    Responsibilities:
    • Lead cutting-edge research in generative AI, focusing on areas such as large language models (LLMs), diffusion models, GANs, and VAEs.
    • Develop novel algorithms and methodologies to advance the state-of-the-art in generative AI.
    • Design, implement, and evaluate AI models for a wide range of applications, including natural language generation, image synthesis, and data augmentation.
    • Collaborate with engineering teams to translate research prototypes into production-ready systems.
    • Stay abreast of the latest research trends and breakthroughs in AI and machine learning globally.
    • Publish research findings in leading AI conferences (e.g., NeurIPS, ICML, ICLR) and journals.
    • Mentor junior researchers and contribute to the growth of the research team.
    • Identify and explore new research opportunities and potential applications for AI.
    • Develop and maintain strong relationships with the academic and research communities.
    • Contribute to the company's intellectual property portfolio through patents and publications.
    • Ensure ethical considerations and responsible AI development practices are integrated into research.
    • Design and conduct experiments to rigorously test and validate AI models.
    • Contribute to the strategic direction of AI research within the organization.
    • Present research findings to technical and non-technical audiences.
    • Analyze complex datasets to train and fine-tune generative models.

    Qualifications:
    • Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
    • Minimum of 8 years of post-doctoral or industry research experience in AI/ML, with a significant focus on generative models.
    • Demonstrated track record of high-impact publications in top-tier AI conferences and journals.
    • Deep theoretical understanding and practical experience with various generative models (LLMs, GANs, VAEs, Diffusion Models).
    • Proficiency in programming languages such as Python, and deep learning frameworks (e.g., TensorFlow, PyTorch).
    • Experience with large-scale data processing and distributed training environments.
    • Strong analytical and problem-solving skills.
    • Excellent communication, presentation, and collaboration abilities.
    • Ability to lead research projects and mentor junior team members.
    • Experience working in a fast-paced, innovative research environment.
    • Knowledge of ethical AI principles and practices.
    • While the role is based in **Gurugram, Haryana, IN**, a remote option is available, indicating a strong preference for remote collaboration and contribution.
    This position offers a highly competitive salary, significant research funding, and the opportunity to shape the future of AI. Join a dynamic team of innovators and make a lasting impact on the field.

    Lead AI Engineer - Generative Models

    520001 Krishna, Andhra Pradesh ₹1400000 Annually WhatJobs

    Posted 4 days ago

    Job Description

    full-time
    Our client, a pioneering firm at the forefront of AI innovation, is seeking an exceptional Lead AI Engineer specializing in Generative Models to spearhead groundbreaking projects in Vijayawada, Andhra Pradesh, IN. This is a high-impact role for a visionary leader who is passionate about pushing the boundaries of artificial intelligence, particularly in the realm of content generation, synthesis, and manipulation. You will lead a team of talented AI engineers and researchers, guiding the development and deployment of state-of-the-art generative AI systems. Responsibilities include defining the technical vision for generative AI initiatives, architecting scalable solutions, and ensuring the successful implementation of complex models such as GANs, VAEs, Transformers, and Diffusion Models. You will work on diverse applications, potentially spanning creative content generation, synthetic data creation, and advanced simulation.

    The ideal candidate will have a proven track record of successfully leading AI projects from concept to production, coupled with deep expertise in deep learning, model optimization, and MLOps. You must possess strong leadership qualities, excellent problem-solving abilities, and a knack for mentoring and inspiring a technical team. A deep understanding of the underlying mathematical principles and algorithmic advancements in generative modeling is essential. You will collaborate closely with product managers, designers, and other stakeholders to translate cutting-edge research into tangible, value-generating products.

    This role demands excellent communication skills, the ability to articulate complex technical strategies, and a passion for innovation. Prior experience in leading AI engineering teams and a significant contribution to the field through publications or open-source projects will be highly valued. If you are a seasoned AI leader eager to shape the future of generative AI, this is an unparalleled opportunity.

    Key Responsibilities:
    • Lead the design, development, and implementation of advanced generative AI models and systems.
    • Architect scalable and robust AI solutions for content generation, synthesis, and manipulation.
    • Manage and mentor a team of AI engineers and researchers, fostering a culture of innovation.
    • Oversee the end-to-end lifecycle of generative AI projects, from research and development to deployment and monitoring.
    • Collaborate with cross-functional teams to define project requirements and translate them into technical specifications.
    • Drive research into new generative model architectures and techniques.
    • Ensure the performance, scalability, and reliability of deployed AI systems.
    • Contribute to the company's intellectual property through patents and publications.
    • Stay abreast of the latest advancements in generative AI and relevant research fields.
    • Present technical strategies and project updates to senior leadership and stakeholders.
    Qualifications:
    • Master's or Ph.D. in Computer Science, Artificial Intelligence, or a related field with a focus on deep learning and generative models.
    • 8+ years of experience in AI/ML engineering, with a significant portion dedicated to generative models (GANs, VAEs, Transformers, Diffusion Models).
    • Proven experience in leading and managing AI engineering teams.
    • Deep understanding of deep learning theory, algorithms, and frameworks (e.g., PyTorch, TensorFlow).
    • Experience with MLOps best practices and tools for model deployment and management.
    • Strong programming skills in Python.
    • Excellent analytical, problem-solving, and leadership capabilities.
    • Demonstrated ability to publish research or contribute to the AI community.