1450 AI Specialists jobs in Hyderabad
Artificial Intelligence (AI)
Posted today
Job Description
• Strong background in mathematical, numerical, and scientific computing using Python.
• Knowledge of Artificial Intelligence/Machine Learning.
• Experience working with the Scrum software development methodology.
• Strong experience implementing web services, web clients, and JSON-based protocols (a minimal illustrative sketch follows this list).
• Experience with Python Metaprogramming.
• Strong analytical and problem-solving skills.
• Design, develop and debug enterprise-grade software products and systems.
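For context on the web-services and JSON skills listed above, a minimal Python sketch using only the standard library; the endpoint URL and payload fields are hypothetical placeholders rather than anything specified by the posting.

import json
import urllib.request

def fetch_json(url: str, timeout: float = 10.0) -> dict:
    # GET a URL and decode its JSON body.
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

def post_json(url: str, payload: dict, timeout: float = 10.0) -> dict:
    # POST a JSON payload and decode the JSON response.
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Hypothetical service; replace with a real endpoint.
    print(post_json("https://api.example.com/v1/predict", {"text": "hello"}))

A production client would add authentication, retries, and error handling on top of this.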
AI (Artificial Intelligence) Developer/Lead
Posted 11 days ago
Job Description
We are seeking a highly skilled and innovative AI Lead/Developer with proven experience in designing, developing, and deploying AI Agents and Conversational AI Chatbots using Azure Cloud Services. You will play a critical role in transforming enterprise workflows through intelligent automation, integrating with modern AI platforms, and delivering scalable solutions that drive business outcomes.
Requirements
- Research, design, and develop intelligent AI agents, AI/GenAI applications, and chatbots using Azure OpenAI, Azure AI Foundry, Semantic Kernel, vector databases, Azure AI Agent Service, Azure AI Model Inference, Azure AI Search, Azure Bot Services, Cognitive Services, Azure Machine Learning, etc.
- Lead architecture and implementation of AI workflows, including prompt engineering, RAG (Retrieval-Augmented Generation), and multi-turn conversational flows.
- Build and fine-tune LLM-based applications using Azure OpenAI (GPT models) for various enterprise use cases (customer support, internal tools, etc.).
- Integrate AI agents with backend services, APIs, databases, and third-party platforms via Azure Logic Apps, Azure Functions, and REST APIs.
- Design secure and scalable cloud architecture using Azure App Services, Azure Kubernetes Service (AKS), Azure API Management, etc.
- Collaborate with product managers, UX designers, and business stakeholders to define AI use cases and user interaction strategies.
- Conduct performance tuning, A/B testing, and continuous improvement of AI agent responses and model accuracy.
- Provide technical leadership, mentoring junior developers and contributing to architectural decisions.
- Stay up to date with advancements in Generative AI, LLM orchestration, and the Azure AI ecosystem.
Required Skills & Experience:
- 4+ years of experience in AI/ML or software development, with at least 2 years focused on Azure AI and chatbot development.
- Strong knowledge of Azure OpenAI Service, Azure Bot Framework, and Azure Cognitive Services (LUIS, QnA Maker, Speech).
- Experience with Python, Node.js, or C# for bot development.
- Familiarity with LangChain, Semantic Kernel, or other agent orchestration frameworks (preferred).
- Hands-on experience deploying AI solutions using Azure ML, Azure DevOps, and containerization (Docker/Kubernetes).
- Deep understanding of natural language processing (NLP), LLMs, and prompt engineering techniques.
- Experience with RAG pipelines, vector databases (e.g., Azure Cognitive Search or Pinecone), and knowledge grounding (see the illustrative sketch after this list).
- Proven experience integrating chatbots with enterprise platforms (MS Teams, Slack, Web, CRM, etc.).
- Strong problem-solving skills, an analytical mindset, and a passion for emerging AI technologies.
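To make the RAG and prompt-engineering items above concrete, a minimal Python sketch of a grounded chat call against Azure OpenAI (openai >= 1.0 SDK). The endpoint, API version, deployment name, and the retrieve() stub are assumptions standing in for a real Azure AI Search or vector-database retrieval step, not details from this posting.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed; use a version your resource supports
)

def retrieve(query: str) -> list[str]:
    # Stand-in for a real retrieval step (Azure AI Search, vector DB, etc.).
    return ["Refund requests are processed within 5 business days."]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: your Azure deployment name goes here
        messages=messages,
    )
    return resp.choices[0].message.content

print(answer("How long do refunds take?"))

A real pipeline would chunk and embed documents, retrieve the top-k passages per query, and evaluate grounded answers against a held-out test set.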
Preferred Qualifications:
- Microsoft Certified: Azure AI Engineer Associate or equivalent.
- Familiarity with Ethical AI, responsible AI design principles, and data governance.
- Prior experience in building multilingual and voice-enabled agents.
- Experience with CI/CD for AI pipelines using Azure DevOps or GitHub Actions.
Benefits
- Attractive salary packages with performance-based incentives.
- Opportunities for professional certifications (e.g., AWS, Kubernetes, Terraform).
- Access to training programs, workshops, and learning resources.
- Comprehensive health insurance coverage for employees and their families.
- Wellness programs and mental health support.
- Hands-on experience with large-scale, innovative cloud solutions.
- Opportunities to work with modern tools and technologies.
- Inclusive, supportive, and team-oriented environment.
- Opportunities to collaborate with global clients and cross-functional teams.
- Regular performance reviews with rewards for outstanding contributions.
- Employee appreciation events and programs.
Artificial Intelligence Engineer (GCP Vertex AI)
Posted 3 days ago
Job Description
Job Summary:
We are seeking a hands-on AI Engineer to design, build, and deploy intelligent AI agents using GCP Vertex AI, LangChain, and modern UI tools like Streamlit. The ideal candidate will bring together skills in large language models (LLMs), agent orchestration, MLOps, and user-friendly interface development to create powerful and accessible AI solutions.
Key Responsibilities:
- Design and implement LLM-based agents using LangChain, integrated with GCP Vertex AI services (see the sketch after this list).
- Build interactive UIs using Streamlit or similar frameworks to showcase and test AI agent capabilities.
- Develop end-to-end ML pipelines for training, evaluation, and deployment using tools like Vertex Pipelines, Kubeflow, or Airflow.
- Integrate with APIs, vector databases, and knowledge sources to enable RAG (Retrieval-Augmented Generation) workflows.
- Deploy scalable, secure AI services using CI/CD pipelines, infrastructure-as-code, and version-controlled model registries.
- Monitor model performance, manage experiments, and optimize agent behavior in production environments.
- Work cross-functionally with product, design, and engineering teams to deliver intuitive, high-impact AI-powered applications.
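As a concrete illustration of the Streamlit, LangChain, and Vertex AI stack named above, a minimal chat-UI sketch; the model id, parameter names, and wiring are assumptions, so check them against the installed versions of streamlit and langchain-google-vertexai.

import streamlit as st
from langchain_google_vertexai import ChatVertexAI

st.title("Vertex AI agent demo")

@st.cache_resource
def get_llm():
    # Cache the model client across Streamlit reruns; the model id is a placeholder.
    return ChatVertexAI(model_name="gemini-1.5-pro", temperature=0.2)

if "history" not in st.session_state:
    st.session_state.history = []  # list of (role, text) tuples

for role, text in st.session_state.history:
    with st.chat_message(role):
        st.write(text)

if prompt := st.chat_input("Ask the agent something"):
    st.session_state.history.append(("user", prompt))
    with st.chat_message("user"):
        st.write(prompt)
    reply = get_llm().invoke(prompt).content
    st.session_state.history.append(("assistant", reply))
    with st.chat_message("assistant"):
        st.write(reply)

Run with streamlit run app.py; this gives a throwaway surface for exercising agent behaviour before wiring in retrieval, tools, or deployed Vertex AI endpoints.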
Required Qualifications:
- 3–6 years of relevant, hands-on experience in AI/ML engineering, including recent work with LLMs and LangChain.
- Proficiency with GCP Vertex AI tools such as Pipelines, Model Registry, Training, and Endpoints.
- Strong Python programming skills, with experience in FastAPI, Flask, or similar web frameworks.
- Demonstrated experience building interactive dashboards or tools using Streamlit, Gradio, or Dash.
- Knowledge of MLOps workflows, including tools like MLflow, Weights & Biases, or Vertex AI Experiments.
- Experience working with vector stores (e.g., FAISS, Pinecone, Weaviate) in agent pipelines.
- Familiarity with retrieval-based QA, embeddings, and prompt engineering techniques.
- Experience with LangGraph or similar agent orchestration frameworks.
Preferred Qualifications:
- Familiarity with cloud-native deployment and DevOps tools (Terraform, Docker, GCP Cloud Build).
- Background in UX/UI design thinking or rapid prototyping for AI-driven applications.
- Experience integrating LLMs with external APIs or private knowledge sources.
Senior AI Research & Development Engineer
Posted today
Job Description
TL;DR: This role blends deep, publishable AI research with hands-on system development, ensuring you can research, build, validate, and ship groundbreaking AI capabilities end-to-end.
You're a bridge between science and software. You’ll innovate on LLMs, prompt systems, evaluation frameworks, and algorithm pipelines—and bring them to production-grade systems that power real users. Expect to author experimental prototypes rich enough for academic-quality write-ups, then productize those same ideas into robust, reliable software.
Research Prototyping
- Tackle experiments in LLM training, LLM agent architectures, fine-tuning, RL, and new evaluation methodologies.
- Build code-first prototypes to test hypotheses and iterate rapidly—refactor them into production systems when experiments succeed.
Model & Architecture Innovation
- Develop and benchmark improvements in LLM architecture, fine‑tuning workflows, and generation control.
- Optimize for performance, resource efficiency, and interpretability.
Production Engineering
- Own infrastructure: from data pipelines and experiment sandboxes to model serving, batch/online inference, logging, and CI/CD.
- Treat experiments as software: automated tests, clear versioning, reproducibility, and documentation.
Scientific Experiment Tooling
- Create frameworks to automate scientific cycles: data generation, metric computation, model comparison dashboards.
- Enable internal teams to run reproducible experiments at scale.
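As a toy illustration of the automated experiment cycle described above (seeded data generation, metric computation, and a comparison report), a standard-library-only Python sketch; the "models", metric, and file names are illustrative stand-ins, not a real framework.

import json
import random
import statistics

def generate_data(seed: int, n: int = 200) -> list[tuple[float, int]]:
    # Fixed seed -> reproducible synthetic dataset.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.random()
        data.append((x, int(x > 0.5)))  # label from a simple threshold rule
    return data

def model_a(x: float) -> int:   # baseline "model"
    return int(x > 0.5)

def model_b(x: float) -> int:   # candidate "model" with a shifted threshold
    return int(x > 0.6)

def accuracy(model, data) -> float:
    return statistics.mean(model(x) == y for x, y in data)

def run(seed: int = 0) -> dict:
    data = generate_data(seed)
    report = {
        "seed": seed,
        "accuracy": {"model_a": accuracy(model_a, data),
                     "model_b": accuracy(model_b, data)},
    }
    with open(f"report_seed{seed}.json", "w") as f:
        json.dump(report, f, indent=2)  # artifact for comparison dashboards
    return report

if __name__ == "__main__":
    print(run(seed=0))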
Knowledge Sharing & Publication
- Write up interesting findings—internal memos, tech blogs, or scholarship-worthy results.
- Participate in code and science critique sessions with peers.
- Master's or Ph.D. in CS, ML, or a related field (or equivalent experience).
- Deep familiarity with LLMs, fine-tuning, and reinforcement learning.
- Expert in Python and frameworks like PyTorch or JAX; experience with Hugging Face or deployed LLM APIs (a generic training-step sketch follows this list).
- Strong software engineering intuition: modular design, CI/CD, containerization, automated testing.
- Demonstrated track record in both research and production settings (e.g., prototypes published internally or externally).
- Bonus: familiarity with Kubernetes, JAX compiler stacks (e.g., Flax, Alpa), Hugging Face Accelerate, or MLOps toolchains.
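For the PyTorch portion of the qualifications above, a generic supervised training-step sketch on a toy model and random tensors; it shows only the loop structure and is not an LLM fine-tuning recipe.

import torch
from torch import nn

torch.manual_seed(0)  # reproducibility

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(batch_x: torch.Tensor, batch_y: torch.Tensor) -> float:
    model.train()
    optimizer.zero_grad()
    logits = model(batch_x)          # forward pass
    loss = loss_fn(logits, batch_y)  # supervised objective
    loss.backward()                  # backpropagation
    optimizer.step()                 # parameter update
    return loss.item()

for step in range(3):
    x = torch.randn(8, 16)           # toy inputs
    y = torch.randint(0, 4, (8,))    # toy class labels
    print(f"step {step}: loss={train_step(x, y):.4f}")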
Research Innovation
Prototypes demonstrating clear scientific advances (e.g. better evaluation metrics or prompt robustness). Papers submitted or draft-ready internal research docs.
Engineering Quality & Velocity
Experimental pipelines and prototypes are production-ready within sprint cycles; clean codebases, smooth deployments, few bugs.
System Impact & Use
R&D code drives real product features—high user adoption, improved reliability or performance.