2680 AI Specialists jobs in Bengaluru
Generative AI/ML Development
Posted 8 days ago
Job Description
**Location:** Bangalore
**Full/Part-time:** Full Time
**Build a career with confidence:**
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.
**About the Role:**
We are seeking seasoned Generative AI/ML and Python development professionals with expertise in deploying applications to production environments.
**Job Description:**
As an AI/ML Engineer, you will experiment with, design, develop, and deploy applications powered by traditional Machine Learning models as well as Large Language Models (LLMs).
**Roles and Responsibilities:**
- Implement end-to-end AI/ML pipelines covering data collection, exploration, cleaning, and validation; feature engineering; and model training, versioning, evaluation, deployment, and monitoring, using cloud provider services such as Azure ML as well as open-source tools (see the pipeline sketch after this list).
- Design, develop, deploy, and evaluate applications using LLMs (such as GPT, Claude, and Gemini).
- Analyze large-scale structured (IoT and non-IoT) and unstructured data to identify trends and patterns and guide business decisions.
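For candidates less familiar with what such a pipeline looks like in code, the minimal sketch below shows one way to train, evaluate, and version a model with scikit-learn before handing the artifact to a deployment service such as Azure ML; the dataset, model choice, and file name are illustrative only.

```python
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# A public dataset stands in for collected and cleaned business data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feature scaling and model training bundled into one reproducible pipeline object.
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("model", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Evaluate before deployment.
print(classification_report(y_test, pipeline.predict(X_test)))

# Persist a versioned artifact that a serving platform (e.g. Azure ML) could register and deploy.
joblib.dump(pipeline, "model_pipeline_v1.joblib")
```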
**Minimum Requirements:**
- 5 to 8 years of professional experience in software development, including at least 5 years of developing and deploying Machine Learning models in production, with a recent focus on Generative AI/LLMs.
- Strong programming skills in Python, with hands-on experience in libraries/frameworks such as pandas, NumPy, scikit-learn, XGBoost, PyTorch, and TensorFlow.
- Expertise in software development life cycle (unit/integration testing, code review, version control, build process).
- Exposure to RESTful API design and development using frameworks such as Flask and FastAPI.
- Familiarity with techniques such as Prompt Engineering, Function/Tool Calling, Retrieval-Augmented Generation (RAG), Agents, and LLM-based reasoning and evaluation using the OpenAI, Anthropic, or Google GenAI Python SDKs (good to have; a tool-calling sketch follows this list).
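As a concrete reference point for Function/Tool Calling, the sketch below uses the OpenAI Python SDK to expose a hypothetical `get_equipment_status` tool to the model; the model name and tool schema are assumptions for illustration, not part of this role's stack.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A hypothetical tool the model may ask the application to call; parameters follow JSON Schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_equipment_status",
        "description": "Look up the latest telemetry for a device.",
        "parameters": {
            "type": "object",
            "properties": {"device_id": {"type": "string"}},
            "required": ["device_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; use whichever model is available
    messages=[{"role": "user", "content": "Is device AC-1042 reporting any faults?"}],
    tools=tools,
)

# If the model chose to call the tool, its arguments arrive here for the application to execute.
print(response.choices[0].message.tool_calls)
```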
**Benefits:**
We are committed to offering competitive benefits programs for all our employees and enhancing our programs when necessary.
+ Make yourself a priority with flexible schedules and parental leave.
+ Drive forward your career through professional development opportunities.
+ Achieve your personal goals with our Employee Assistance Programme.
**Our commitment to you:**
Our greatest assets are the expertise, creativity, and passion of our employees. We strive to provide a great place to work that attracts, develops, and retains the best talent, promotes employee engagement, fosters teamwork, and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine of growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback, and always challenging ourselves to do better.
**_This is The Carrier Way._**
**_Join us and make a difference._**
**_Apply Now!_**
**Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age, or any other federally protected class.**
**Job Applicant's Privacy Notice:**
Click on this link to read the Job Applicant's Privacy Notice.
Artificial Intelligence Engineer (GCP Vertex AI)
Posted 7 days ago
Job Description
Job Summary:
We are seeking a hands-on AI Engineer to design, build, and deploy intelligent AI agents using GCP Vertex AI, LangChain, and modern UI tools like Streamlit. The ideal candidate will bring together skills in large language models (LLMs), agent orchestration, MLOps, and user-friendly interface development to create powerful and accessible AI solutions.
Key Responsibilities:
- Design and implement LLM-based agents using LangChain, integrated with GCP Vertex AI services.
- Build interactive UIs using Streamlit or similar frameworks to showcase and test AI agent capabilities (see the sketch after this list).
- Develop end-to-end ML pipelines for training, evaluation, and deployment using tools like Vertex Pipelines, Kubeflow, or Airflow.
- Integrate with APIs, vector databases, and knowledge sources to enable RAG (Retrieval-Augmented Generation) workflows.
- Deploy scalable, secure AI services using CI/CD pipelines, infrastructure-as-code, and version-controlled model registries.
- Monitor model performance, manage experiments, and optimize agent behavior in production environments.
- Work cross-functionally with product, design, and engineering teams to deliver intuitive, high-impact AI-powered applications.
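To make the first two responsibilities concrete, here is a minimal sketch of a Streamlit front end that calls a Vertex AI chat model through LangChain; the `langchain-google-vertexai` package, the `ChatVertexAI` constructor arguments, and the model identifier are assumptions that may differ across library versions.

```python
# Run with: streamlit run app.py   (assumes streamlit and langchain-google-vertexai are installed)
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-1.5-pro", temperature=0)  # model id is illustrative
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant for internal engineering questions."),
    ("human", "{question}"),
])
chain = prompt | llm  # LCEL: the rendered prompt feeds the chat model

st.title("Vertex AI agent demo")
question = st.text_input("Ask a question")
if question:
    # invoke() returns an AIMessage; .content holds the generated text
    st.write(chain.invoke({"question": question}).content)
```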
Required Qualifications:
- 3–6 years of relevant hands-on experience in AI/ML engineering, including recent work with LLMs and LangChain.
- Proficiency with GCP Vertex AI tools such as Pipelines, Model Registry, Training, and Endpoints.
- Strong Python programming skills, with experience in FastAPI, Flask, or similar web frameworks.
- Demonstrated experience building interactive dashboards or tools using Streamlit, Gradio, or Dash.
- Knowledge of MLOps workflows, including tools like MLflow, Weights & Biases, or Vertex AI Experiments.
- Experience working with vector stores (e.g., FAISS, Pinecone, Weaviate) in agent pipelines (a retrieval sketch follows this list).
- Familiarity with retrieval-based QA, embeddings, and prompt engineering techniques.
- Experience with LangGraph or similar agent orchestration frameworks.
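The retrieval sketch below illustrates the vector-store and RAG-related qualifications using FAISS through LangChain; the embedding and chat model names and the tiny in-memory corpus are placeholders, and package APIs may differ by version.

```python
# Assumes langchain-community, faiss-cpu, and langchain-google-vertexai are installed.
from langchain_community.vectorstores import FAISS
from langchain_google_vertexai import ChatVertexAI, VertexAIEmbeddings

# Stand-in corpus; in practice, documents come from your APIs and knowledge sources.
docs = [
    "Vertex AI Pipelines orchestrate training and deployment steps.",
    "The Model Registry stores versioned models for controlled rollout.",
]

embeddings = VertexAIEmbeddings(model_name="text-embedding-004")  # embedding model is illustrative
store = FAISS.from_texts(docs, embeddings)
retriever = store.as_retriever(search_kwargs={"k": 2})

question = "Where are versioned models kept?"
context = "\n".join(d.page_content for d in retriever.invoke(question))

llm = ChatVertexAI(model_name="gemini-1.5-pro")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```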
Preferred Qualifications:
- Familiarity with cloud-native deployment and DevOps tools (Terraform, Docker, GCP Cloud Build).
- Background in UX/UI design thinking or rapid prototyping for AI-driven applications.
- Experience integrating LLMs with external APIs or private knowledge sources.
Senior AI Research & Development Engineer
Posted today
Job Description
TL;DR: This role blends deep, publishable AI research with hands-on system development, ensuring you can conduct research, build, validate, and ship groundbreaking AI capabilities end-to-end.
You're a bridge between science and software. You’ll innovate on LLMs, prompt systems, evaluation frameworks, and algorithm pipelines—and bring them to production-grade systems that power real users. Expect to author experimental prototypes rich enough for academic-quality write-ups, then productize those same ideas into robust, reliable software.
Research Prototyping
- Tackle experiments in LLM training, LLM agent architectures, fine-tuning, RL, and new evaluation methodologies.
- Build code-first prototypes to test hypotheses and iterate rapidly—refactor them into production systems when experiments succeed.
Model & Architecture Innovation
- Develop and benchmark improvements in LLM architecture, fine‑tuning workflows, and generation control.
- Optimize for performance, resource efficiency, and interpretability.
Production Engineering
- Own infrastructure: from data pipelines and experiment sandbox to model serving, batch/online inference, logging, and CI/CD.
- Treat experiments as software: automated tests, clear versioning, reproducibility, and documentation (a test sketch follows this list).
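As one way to read "experiments as software", the sketch below shows pytest-style regression tests that check a toy metric is reproducible under a fixed seed and does not fall below a quality gate; the metric and threshold are hypothetical.

```python
# Hypothetical regression tests treating an experiment as software.
import numpy as np


def toy_metric(seed: int) -> float:
    """Stand-in for a real evaluation metric computed from a seeded run."""
    rng = np.random.default_rng(seed)
    return float((rng.random(100) > 0.5).mean())


def test_metric_is_reproducible():
    # The same seed must produce the same score.
    assert toy_metric(seed=0) == toy_metric(seed=0)


def test_metric_meets_threshold():
    # Illustrative quality gate for CI.
    assert toy_metric(seed=0) >= 0.4
```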
Scientific Experiment Tooling
- Create frameworks to automate scientific cycles: data generation, metric computation, and model-comparison dashboards (see the sketch after this list).
- Enable internal teams to run reproducible experiments at scale.
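As a rough illustration of such tooling, the sketch below runs a seeded experiment, computes a toy metric, and writes a versioned JSON record that a comparison dashboard could consume; the helper name and results layout are hypothetical.

```python
# Minimal reproducible-experiment helper: fixed seeds, a toy metric, JSON records on disk.
import json
import random
import time
from pathlib import Path

import numpy as np


def run_experiment(name: str, seed: int, metric_fn) -> dict:
    random.seed(seed)
    np.random.seed(seed)  # fix seeds so the run is reproducible
    record = {
        "experiment": name,
        "seed": seed,
        "score": metric_fn(),
        "timestamp": time.time(),
    }
    out = Path("results") / f"{name}_seed{seed}.json"
    out.parent.mkdir(exist_ok=True)
    out.write_text(json.dumps(record, indent=2))
    return record


# Toy metric: fraction of random draws above 0.5; a real metric_fn would evaluate a model.
print(run_experiment("baseline", seed=0,
                     metric_fn=lambda: float((np.random.rand(100) > 0.5).mean())))
```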
Knowledge Sharing & Publication
- Write up interesting findings—internal memos, tech blogs, or scholarship-worthy results.
- Participate in code and science critique sessions with peers.
Master's or Ph.D. in CS, ML, or a related field (or equivalent experience)
- Deep familiarity with LLMs, fine-tuning, and reinforcement learning.
- Expert in Python + frameworks like PyTorch or JAX; experience with Hugging Face or deployed LLM APIs
- Strong software engineering intuition: modular design, CI/CD, containerization, automated testing
- Demonstrated track record in both research and production settings (e.g., prototypes published internally or externally)
- Bonus: familiarity with Kubernetes, JAX compiler stacks (e.g. Flax, Alpa), Hugging Face Accelerate, or MLOps toolchains
Research Innovation
Prototypes demonstrating clear scientific advances (e.g. better evaluation metrics or prompt robustness). Papers submitted or draft-ready internal research docs.
Engineering Quality & Velocity
Experimental pipelines and prototypes are production-ready within sprint cycles; clean codebases, smooth deployments, few bugs.
System Impact & Use
R&D code drives real product features—high user adoption, improved reliability or performance.