14,644 AI & Emerging Technologies jobs in India
Senior Staff Machine Learning Engineer

Posted today
Job Description
Serving thousands of enterprise customers around the world including 45% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location.
Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.
Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.
We're looking for an experienced Sr. Staff Machine Learning Engineer to join our Digital Experience team. This role is critical in shaping the next generation of Digital Experience with world-class tools that surface insights, identify root causes, and enable agentic AI functionality. Reporting to the Senior Manager, Machine Learning Engineering, you'll be responsible for:
+ Framing high-impact use cases, designing agent workflows (tool use, planning, memory, context), and building the frameworks that underpin all the products (a minimal agent-loop sketch follows this list)
+ Evaluating and integrating advances in LLMs/SLMs, retrieval, fine-tuning, and inference optimization to deliver reliable, cost-efficient production features
+ Designing, implementing, and operating microservices that are observable, resilient, and performant (APIs, data pipelines, orchestration, caching)
+ Working with Product, UX, and customers to turn ambiguous problems into measurable wins: clear SLAs, telemetry, and feedback loops
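As a purely illustrative sketch (not Zscaler's implementation), the "tool use, planning, memory" loop in the first bullet above can be reduced to a single plan-act-observe step in plain Python; the tool registry and the `call_llm` stub are hypothetical stand-ins for real telemetry services and a hosted model.

```python
import json

# Hypothetical tool registry; a real deployment would call telemetry / root-cause services.
TOOLS = {
    "get_latency_metrics": lambda device_id: {"device": device_id, "p95_ms": 412},
    "lookup_root_cause": lambda symptom: {"symptom": symptom, "likely_cause": "wifi congestion"},
}

def call_llm(prompt: str) -> str:
    """Stub for an LLM/SLM call; a real system would hit a hosted or fine-tuned model."""
    # Pretend the model decided to inspect latency for a device mentioned in the prompt.
    return json.dumps({"tool": "get_latency_metrics", "args": {"device_id": "laptop-42"}})

def agent_step(user_query: str, memory: list) -> dict:
    """One plan -> act -> observe iteration: ask the model for a tool call, run it, remember it."""
    decision = json.loads(call_llm(f"History: {memory}\nUser: {user_query}\nPick a tool."))
    observation = TOOLS[decision["tool"]](**decision["args"])
    memory.append({"tool": decision["tool"], "observation": observation})
    return observation

if __name__ == "__main__":
    memory: list = []
    print(agent_step("Why is video quality poor on laptop-42?", memory))
    print(memory)
```

A production agent would add planning across multiple steps, guardrails, and evaluation; the point here is only the shape of the tool-use loop.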
**What We're Looking for (Minimum Qualifications)**
+ BS in Computer Science (or related) with 7+ years, or MS/PhD in Computer Science (or related) with 6+ years of experience in solving real world problems leveraging AI/ML and distributed systems
+ Exceptional problem-solving skills driven by first-principles thinking, applying expertise in programming, data structures, algorithms, and machine learning
+ Proven experience in the full ML model lifecycle: building, deployment, monitoring, and optimization
+ Hands-on with modern GenAI stacks (e.g., LangChain/LangGraph, CrewAI, vector stores, RAG, prompts/memory, evaluators)
+ Proven experience designing and operating distributed microservices (Kubernetes/Docker, CI/CD, observability; AWS/GCP/Azure) and writing production-grade code in Python, Go, or Java
**What Will Make You Stand Out (Preferred Qualifications)**
+ Experience fine-tuning and serving proprietary SLMs/LLMs at scale (latency, cost, safety, evals)
+ Prior delivery of agentic systems in production, including context engineering and memory management strategies
+ Track record building high-throughput, fault-tolerant systems with clear SLOs
#LI-AN4
#LI-Hybrid
At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.
Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
+ Various health plans
+ Time off plans for vacation and sick time
+ Parental leave options
+ Retirement options
+ Education reimbursement
+ In-office perks, and more!
Learn more about Zscaler's Future of Work strategy, hybrid working model, and benefits here.
By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.
Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information via the Know Your Rights: Workplace Discrimination is Illegal link.
Pay Transparency
Zscaler complies with all applicable federal, state, and local pay transparency rules.
Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
Data and AI consultant
Posted today
Job Description
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
**The Role**
Are you ready to embark on an exhilarating journey as a Data Consultant? Join Kyndryl and become a driving force behind the transformative power of data! We're seeking an exceptionally talented individual to accelerate the competitive performance of our customers worldwide, establishing us as their unrivaled business and technology consulting partner.
As an Enterprise Consultant for Data and AI solutions, you are expected to create a vision of the future by working with leadership to identify opportunities and translate them into functional and non-functional requirements. You should be an expert in data science and the model lifecycle, with deep expertise in managing data solutions. As an Architect in our Data and AI team, you will provide best-fit architectural solutions for one or more projects leveraging your architectural skills; assist in defining scope and sizing of work; and anchor proof-of-concept developments. You will provide solution architecture for the business problem, platform integration with third-party services, and the design and development of complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative solutions, and participate in pre-sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles spanning thought leadership, mentorship, systems analysis, architecture, data management models, configuration, testing, debugging, and documentation. You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work across multiple industry domains.
**Responsibilities:**
+ Handling RFP / RFI / government tender technical solutions, detailed scope preparation, effort estimation, and response drafting
+ Excellent presentation skills are essential
+ Understand client needs, translate them into business solutions that can be implemented
+ Responsible for architecture, design and development of scalable data engineering / AI solutions and standards for various business problems using cloud native services, or third party services on hyperscalers
+ Take ownership of technical solutions from design and architecture perspective, ensure the right direction and propose resolution to potential Data science/Model related problems
+ Delivering and presenting proofs of concept of key technology components to project stakeholders
+ Working within an Agile delivery / DevOps methodology to deliver proof of concept and production implementation in iterative sprints.
+ Design and develop model utilization benchmarks, metrics, and monitoring to measure and improve models; detect model drift and alert using the Prometheus/Grafana stack or a cloud-native monitoring stack (a drift-detection sketch follows this list)
+ Research, design, implement and validate cutting-edge deployment methods across hybrid cloud scenarios
+ Develop and maintain documentation of model flows, integrations, pipelines, etc.
+ Evaluate and create PoVs around the performance aspects of DSML platforms and tools in the market against customer requirements
+ Assist in driving improvements to the Data Engineering stack, with a focus on the digital experience for the user, as well as model performance & security to meet the needs of the business and customers, now & in the future
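To make the drift-detection responsibility above concrete, here is a minimal sketch that assumes the `scipy` and `prometheus_client` packages; the metric and feature names are invented, and a real deployment would wire the exposed gauge into Grafana alerting rather than print to stdout.

```python
# Illustrative only: one way to surface model drift as a Prometheus metric, assuming the
# scipy and prometheus_client packages; metric and feature names here are invented.
import time
import numpy as np
from scipy.stats import ks_2samp
from prometheus_client import Gauge, start_http_server

drift_pvalue = Gauge("feature_drift_ks_pvalue",
                     "KS-test p-value of live traffic vs. training baseline", ["feature"])

def report_drift(baseline: np.ndarray, live: np.ndarray, feature: str) -> float:
    """Compare live data against the training baseline and expose the p-value for alerting."""
    _, p_value = ks_2samp(baseline, live)
    drift_pvalue.labels(feature=feature).set(p_value)  # a Grafana rule could alert on p < 0.01
    return p_value

if __name__ == "__main__":
    start_http_server(9100)                      # scrape target for Prometheus
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, 5000)
    for _ in range(3):                           # a real exporter would loop indefinitely
        live = rng.normal(0.3, 1.0, 500)         # shifted distribution to simulate drift
        print("p-value:", report_drift(baseline, live, "session_latency"))
        time.sleep(1)
```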
**Required Experience:**
Data Engineer with 15+ years of experience and the following skills:
+ Must have 5+ years of experience with data modernization solutions, including on-premise or cloud AI and DWH solutions built on native and third-party services on AWS, Azure, GCP, Databricks, etc.
+ Experience handling RFP / RFI / tender responses and proposal preparation
+ Must have experience designing and architecting data lake/warehouse projects using PaaS and SaaS offerings such as Snowflake, Databricks, Redshift, Synapse, BigQuery, etc., or on-premise data warehouse/data lake implementations
+ Must have good knowledge of designing ETL/ELT data pipelines and DS modules, implementing complex stored procedures, and standard DWH and ETL concepts
+ Experience in Data Migration from on-premise RDBMS to cloud data warehouses
+ Good understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
+ Hands-on experience in Python and PySpark programming for data integration projects (a PySpark sketch follows this list)
+ Provide resolution to a wide range of complex data pipeline problems, both proactively and as issues surface
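A minimal PySpark sketch of the on-premise RDBMS to cloud lake/warehouse pattern described above; the JDBC connection details, table names, and S3 path are placeholders, not a reference architecture.

```python
# Extract-transform-load sketch: on-prem Postgres -> curated Parquet for warehouse ingestion.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-migration").getOrCreate()

# Extract: read a source table over JDBC from the on-prem database (placeholder credentials).
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://onprem-host:5432/sales")
          .option("dbtable", "public.orders")
          .option("user", "etl_user").option("password", "***")
          .load())

# Transform: light cleansing and a dimensional-style derived column.
curated = (orders
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0))

# Load: land as partitioned Parquet for the warehouse (Snowflake/Redshift/BigQuery) to ingest.
curated.write.mode("overwrite").partitionBy("order_date").parquet("s3a://datalake/curated/orders/")
```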
**Who You Are**
**Preferred Skills:**
+ Understanding of cloud network, security, data security and data access controls and design aspects
+ AI and data solutions on hyperscalers such as Databricks, MS Fabric, Copilot, AWS Redshift, GCP BigQuery, GCP Gemini, etc.
+ A background in agentic AI and GenAI technologies will be an added advantage
+ Hands-on experience planning and executing PoC / MVP / client projects involving data modernization and AI use-case development
**Required Skills:**
- Bachelor's degree in Computer Science, Information Security, or a related field
- Skilled in planning, organization, analytics, and problem-solving
- Excellent communication and interpersonal skills to work collaboratively with clients and team members
- Comfortable working with statistics
**Being You**
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
**What You Can Expect**
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.
**Get Referred!**
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Kyndryl is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, pregnancy, disability, age, veteran status, or other characteristics. Kyndryl is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Data Scientist-Artificial Intelligence
Posted today
Job Description
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
**Your role and responsibilities**
Role Overview
We are looking for a Senior/Lead ML Data Scientist with strong expertise in the Databricks ML ecosystem and proven experience in Generative AI and LLM fine-tuning. This role will drive end-to-end ML/AI initiatives - from presales solution shaping, customer workshops, and PoCs to large-scale delivery, deployment, and adoption. The candidate will define AI/ML strategy, ensure successful execution, and mentor teams while driving responsible and business-aligned AI delivery.
---
Key Responsibilities
ML & AI Solutioning
* Lead the design and development of machine learning models (classification, regression, clustering, NLP, CV).
* Implement ML workflows in Databricks using MLflow, Feature Store, AutoML, and Databricks notebooks.
* Optimize and scale training using distributed ML frameworks (Spark MLlib, Horovod, Databricks Runtime for ML).
Presales & Client Engagement
* Partner with sales and consulting teams to support presales activities, including solution design, RFP responses, and client presentations.
* Conduct workshops, PoCs, and live demos showcasing Databricks ML and GenAI capabilities.
* Translate complex ML/AI solutions into business value for CXOs and client stakeholders.
* Create thought leadership material (whitepapers, PoVs, reference architectures) to drive market presence.
Delivery & Execution
* Own the end-to-end execution of ML/GenAI projects - from requirements gathering to production deployment.
* Ensure scalable, secure, and cost-optimized delivery on Databricks and cloud ML platforms.
* Collaborate with cross-functional teams (data engineering, application engineering, cloud infra) to deliver high-quality outcomes.
* Establish success metrics, monitor delivery performance, and ensure client satisfaction.
GenAI / LLM Workloads
* Fine-tune and optimize LLMs (OpenAI, Llama, Falcon, MPT, HuggingFace Transformers) for domain-specific use cases.
* Implement Retrieval Augmented Generation (RAG) pipelines for enterprise search, chatbots, and knowledge assistants.
* Evaluate, deploy, and monitor custom fine-tuned models within Databricks Model Serving or cloud ML platforms.
* Collaborate with engineering teams to integrate GenAI capabilities into business applications.
MLOps & Governance
* Establish MLOps best practices with Databricks MLflow (experiment tracking, model registry, deployment pipelines); a minimal tracking sketch follows these responsibilities.
* Implement automated CI/CD for ML pipelines with GitHub Actions, Azure DevOps, or Jenkins.
* Define and enforce Responsible AI practices: fairness, explainability (SHAP, LIME), bias detection, compliance.
Leadership & Collaboration
* Mentor and guide junior data scientists and engineers.
* Partner with business leaders to identify AI opportunities and define strategy.
* Advocate for data-driven decision making across the organization.
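As a minimal sketch of the MLflow experiment-tracking and registry workflow referenced under "MLOps & Governance" above (assuming the `mlflow` and `scikit-learn` packages; the experiment and model names are illustrative, not a prescribed standard):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-baseline")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=42)
    model.fit(X_train, y_train)

    # Track parameters and metrics so runs are comparable in the experiment UI.
    mlflow.log_param("n_estimators", 200)
    mlflow.log_param("max_depth", 8)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))

    # Registering the model makes it visible in the Model Registry for staged promotion.
    mlflow.sklearn.log_model(model, artifact_path="model", registered_model_name="churn-baseline")
```

The registry step is what connects experimentation to the deployment pipelines mentioned above, since promotion between stages can then be gated by CI/CD.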
**Required technical and professional expertise**
Mandatory Skills
* Strong experience in Databricks ML ecosystem:
* MLflow (tracking, registry, deployment).
* Feature Store for feature management.
* AutoML for model experimentation.
* Databricks notebooks & pipelines.
* Proven expertise in LLM fine-tuning, prompt engineering, embeddings, and RAG pipelines.
* Strong foundation in ML & DL frameworks (Scikit-learn, TensorFlow, PyTorch).
* Hands-on with Python, Spark, SQL for data science workflows.
* Proficiency with cloud ML platforms (Azure ML, AWS SageMaker, GCP Vertex AI).
* Experience with large-scale model training, optimization, and deployment.
* Strong customer-facing presales experience and delivery ownership in AI/ML projects.
**Preferred technical and professional experience**
Good to Have
* Familiarity with Databricks MosaicML for efficient LLM fine-tuning.
* Hands-on with vector databases (Pinecone, Weaviate, Milvus, FAISS) for RAG (a retrieval sketch follows this list).
* Exposure to streaming ML inference (Kafka, Event Hub, Kinesis).
* Certifications: Databricks ML Specialist, Databricks Generative AI Associate, Azure AI Engineer, AWS ML Specialty.
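A small retrieval sketch using FAISS, one of the vector stores listed above; the `embed` function is a random-vector stub standing in for a real embedding model, so the code illustrates the indexing/search flow rather than retrieval quality.

```python
import numpy as np
import faiss

DIM = 384

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding: a real pipeline would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.random((len(texts), DIM), dtype=np.float32)

documents = [
    "Reset a forgotten VPN password from the self-service portal.",
    "Databricks Model Serving exposes registered models behind a REST endpoint.",
    "Expense reports must be filed within 30 days of travel.",
]

index = faiss.IndexFlatL2(DIM)          # exact L2 search; fine for small corpora
index.add(embed(documents))             # index the document embeddings

query_vec = embed(["How do I serve a registered model?"])
_, neighbors = index.search(query_vec, 2)
context = [documents[i] for i in neighbors[0]]
print(context)  # retrieved passages would be stuffed into the LLM prompt
```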
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Data Scientist-Artificial Intelligence
Posted today
Job Description
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
**Your role and responsibilities**
* Work with the broader team to build, analyze, and improve AI solutions.
* You will also work with our software developers on consuming different enterprise applications.
**Required technical and professional expertise**
* Candidates should have 5-7 years of experience, sound knowledge of Python, and familiarity with ML-related services.
* Proficient in Python with a focus on data analytics packages.
* Strategy: Analyse large, complex data sets and provide actionable insights to inform business decisions.
* Strategy: Design and implement data models that help identify patterns and trends. Collaboration: Work with data engineers to optimize and maintain data pipelines.
* Perform quantitative analyses that translate data into actionable insights and support analytical, data-driven decision-making. Identify and recommend process improvements to enhance the efficiency of the data platform. Develop and maintain data models, algorithms, and statistical models (a small analysis sketch follows this list).
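For illustration only, the kind of exploratory aggregation described above can be sketched with pandas; the dataset and column names are invented.

```python
import pandas as pd

orders = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "West", "West"],
    "channel": ["web", "store", "web", "store", "web", "store"],
    "revenue": [120.0, 80.0, 200.0, 90.0, 150.0, 60.0],
})

# Aggregate to surface a pattern (web outperforms store in every region in this toy data).
summary = (orders
           .groupby(["region", "channel"], as_index=False)["revenue"]
           .sum()
           .sort_values("revenue", ascending=False))
print(summary)
```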
**Preferred technical and professional experience**
* Experience with conversation analytics. Experience with cloud technologies
* Experience with data exploration tools such as Tableau
IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Cloud Engineer, Artificial Intelligence
Posted today
Job Description
**Minimum qualifications:**
+ Bachelor's degree in Computer Science or equivalent practical experience.
+ 3 years of experience building machine learning solutions and working with technical customers.
+ Experience designing cloud enterprise solutions and supporting customer projects to completion.
+ Experience writing software in Python, Scala, R, or similar.
+ Ability to travel up to 30% of the time.
**Preferred qualifications:**
+ Experience working with recommendation engines, data pipelines, or distributed machine learning, and with data analytics, data visualization techniques, software, and deep learning frameworks.
+ Experience in software development, professional services, solution engineering, technical consulting, architecting and rolling out new technology and solution initiatives.
+ Experience with data structures, algorithms, and software design.
+ Experience with core data science techniques.
+ Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
+ Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google's global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners.
As a Cloud Engineer, you will design and implement machine learning solutions for customer use cases, leveraging core Google products including TensorFlow, DataFlow, and Vertex AI. You will work with customers to identify opportunities to apply machine learning in their business, and travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering to build and constantly drive excellence in our products.
In this role, you will be the Google Engineer working with Google's largest and most ambitious Cloud customers. Together with the team you will support customer implementation of Google Cloud products through: architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring, and much more.
Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
**Responsibilities:**
+ Deliver effective big data and machine learning solutions and solve complex technical customer issues.
+ Act as a trusted technical advisor to Google's customers.
+ Identify new product features and feature gaps, provide guidance on existing product tests, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
+ Deliver best practices recommendations, tutorials, blog articles, and technical presentations adapting to different levels of key business and technical stakeholders.
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
SaaS DevOps Engineer - DevOps, Artificial Intelligence, Helm, Docker, Exp - 8 -12 Yrs
Posted today
Job Description
Location: Bangalore, India
+ Area of Interest: Engineer - Software
+ Job Type: Professional
+ Technology Interest: Software Development
+ Job Id:
**Meet the Team**
We are an innovation team on a mission to transform how enterprises harness AI. Operating with the agility of a startup and the focus of an incubator, we're building a tight-knit group of AI and infrastructure experts driven by bold ideas and a shared goal: to rethink systems from the ground up and deliver breakthrough solutions that redefine what's possible - faster, leaner, and smarter.
We thrive in a fast-paced, experimentation-rich environment where new technologies aren't just welcome - they're expected. Here, you'll work side-by-side with seasoned engineers, architects, and thinkers to craft the kind of iconic products that can reshape industries and unlock entirely new models of operation for the enterprise.
If you're energized by the challenge of solving hard problems, love working at the edge of what's possible, and want to help shape the future of AI infrastructure - we'd love to meet you.
**Impact**
Cisco is seeking a highly skilled **SaaS DevOps Engineer** to design and manage the operational infrastructure of SaaS applications. This role focuses on ensuring smooth deployments, maintaining uptime, managing costs, and delivering exceptional customer support. The ideal candidate will have strong expertise in CI/CD pipelines, packaging tools, and high-availability (HA) deployments, as well as a customer-first attitude for supporting SaaS operations.
As a SaaS DevOps Engineer at Cisco, your role will have a critical impact on:
+ Maintaining the reliability and availability of SaaS platforms to meet customer expectations.
+ Streamlining CI/CD pipelines to improve release velocity and minimize downtime.
+ Enabling seamless multi-region deployments for high-availability applications.
+ Supporting customers with tools and processes to troubleshoot and resolve issues quickly.
+ Reducing operational costs through telemetry insights and efficient system designs.
+ Using AI agents to build CI/CD pipelines that improve efficiency.
+ Your contributions will ensure Cisco's SaaS offerings remain robust, scalable, and responsive to customer needs, enabling our customers to succeed with confidence.
**Key Responsibilities:**
+ Build and maintain CI/CD pipelines using industry-standard tools to enable smooth, automated deployments.
+ Apply Git tools for managing code repositories and ensuring efficient collaboration among teams.
+ Plan and manage releases, including evaluating their impact on Continuous Deployment (CD).
+ Implement and manage feature flagging systems to enable controlled rollouts and minimize production risks (a minimal rollout sketch follows this list).
+ Design and deploy HA systems across multi-region environments to ensure minimal downtime and maximum uptime.
+ Use Helm, Docker, and other packaging tools to streamline deployment workflows and containerization.
+ Develop processes for live system upgrades and rollbacks to minimize customer impact during updates.
+ Monitor running systems using telemetry data, ensuring optimal performance and cost-effective operations.
+ Create and manage support bundle tools for efficient issue diagnosis and troubleshooting in customer environments.
+ Plan and manage the release of SaaS solutions to on-premise environments for customers with hybrid or private cloud needs.
+ Manage the use of AI agents in CI/CD pipelines, ensuring they perform effectively and provide value to customers.
+ Collaborate with customer support teams to resolve operational and infrastructure-related issues.
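The feature-flagging responsibility above can be illustrated with a bare-bones percentage rollout in plain Python; this is a generic sketch using stable hash bucketing, not a Cisco design or any specific vendor's SDK.

```python
import hashlib

FLAGS = {"new-billing-ui": 10}  # feature -> % of tenants enabled (illustrative flag name)

def is_enabled(feature: str, tenant_id: str) -> bool:
    """Deterministically bucket a tenant into 0-99 and compare against the rollout percentage."""
    rollout = FLAGS.get(feature, 0)
    digest = hashlib.sha256(f"{feature}:{tenant_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout

if __name__ == "__main__":
    sample = [f"tenant-{i}" for i in range(1000)]
    enabled = sum(is_enabled("new-billing-ui", t) for t in sample)
    print(f"{enabled / 10:.1f}% of sample tenants see the new billing UI")  # roughly 10%
```

Because the bucketing is stable, a given tenant keeps the same decision as the percentage ramps up, which is what makes controlled rollouts and clean rollbacks possible.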
**Minimum Qualifications:**
+ Proficiency with Python and other scripting languages
+ Proficiency with CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar platforms
+ Strong experience with Git tools and workflows for effective version control and collaboration.
+ Expertise in packaging and deployment tools, including Helm and Docker.
+ Knowledge of feature flagging systems and their impact on production environments.
+ Proven track record of designing and maintaining high-availability (HA) systems across multi-region environments.
+ Experience with live upgrades and rollback strategies for SaaS applications.
+ Strong understanding of metrics and telemetry for monitoring system performance and managing costs.
+ Familiarity with the use of agents in CI/CD pipelines and their role in enhancing customer support.
+ Exceptional problem-solving skills and a customer-first mindset for troubleshooting and issue resolution.
+ Bachelor's degree or higher and 8-12 years of relevant engineering work experience.
**Preferred Qualifications:**
+ Familiarity with cost management strategies for SaaS applications, including telemetry-based insights.
+ Experience with planning and managing on-premise releases of SaaS solutions.
**#WeAreCisco**
#WeAreCisco where every individual brings their unique skills and perspectives together to pursue our purpose of powering an inclusive future for all.
Our passion is connection-we celebrate our employees' diverse set of backgrounds and focus on unlocking potential. Cisconians often experience one company, many careers where learning and development are encouraged and supported at every stage. Our technology, tools, and culture pioneered hybrid work trends, allowing all to not only give their best, but be their best.
We understand our outstanding opportunity to bring communities together and at the heart of that is our people. One-third of Cisconians collaborate in our 30 employee resource organizations, called Inclusive Communities, to connect, foster belonging, learn to be informed allies, and make a difference. Dedicated paid time off to volunteer-80 hours each year-allows us to give back to causes we are passionate about, and nearly 86% do!
Our purpose, driven by our people, is what makes us the worldwide leader in technology that powers the internet. Helping our customers reimagine their applications, secure their enterprise, transform their infrastructure, and meet their sustainability goals is what we do best. We ensure that every step we take is a step towards a more inclusive future for all. Take your next step and be you, with us!
**Message to applicants applying to work in the U.S. and/or Canada:**
When available, the salary range posted for this position reflects the projected hiring range for new hire, full-time salaries in U.S. and/or Canada locations, not including equity or benefits. For non-sales roles the hiring ranges reflect base salary only; employees are also eligible to receive annual bonuses. Hiring ranges for sales positions include base and incentive compensation target. Individual pay is determined by the candidate's hiring location and additional factors, including but not limited to skillset, experience, and relevant education, certifications, or training. Applicants may not be eligible for the full salary range based on their U.S. or Canada hiring location. The recruiter can share more details about compensation for the role in your location during the hiring process.
U.S. employees have access to quality medical, dental and vision insurance, a 401(k) plan with a Cisco matching contribution, short and long-term disability coverage, basic life insurance and numerous wellbeing offerings.
Employees receive up to twelve paid holidays per calendar year, which includes one floating holiday (for non-exempt employees), plus a day off for their birthday. Non-Exempt new hires accrue up to 16 days of vacation time off each year, at a rate of 4.92 hours per pay period. Exempt new hires participate in Cisco's flexible Vacation Time Off policy, which does not place a defined limit on how much vacation time eligible employees may use, but is subject to availability and some business limitations. All new hires are eligible for Sick Time Off subject to Cisco's Sick Time Off Policy and will have eighty (80) hours of sick time off provided on their hire date and on January 1st of each year thereafter. Up to 80 hours of unused sick time will be carried forward from one calendar year to the next such that the maximum number of sick time hours an employee may have available is 160 hours. Employees in Illinois have a unique time off program designed specifically with local requirements in mind. All employees also have access to paid time away to deal with critical or emergency issues. We offer additional paid time to volunteer and give back to the community.
Employees on sales plans earn performance-based incentive pay on top of their base salary, which is split between quota and non-quota components. For quota-based incentive pay, Cisco typically pays as follows:
.75% of incentive target for each 1% of revenue attainment up to 50% of quota;
1.5% of incentive target for each 1% of attainment between 50% and 75%;
1% of incentive target for each 1% of attainment between 75% and 100%; and once performance exceeds 100% attainment, incentive rates are at or above 1% for each 1% of attainment with no cap on incentive compensation.
For non-quota-based sales performance elements such as strategic sales objectives, Cisco may pay up to 125% of target. Cisco sales plans do not have a minimum threshold of performance for sales incentive compensation to be paid.
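Reading the tiered rates above literally, the math works out so that 100% quota attainment earns 100% of the incentive target; the short sketch below is only an illustration of that arithmetic, not an official calculator.

```python
# Tiers as stated above: 0.75%/point up to 50% of quota, 1.5%/point from 50-75%,
# and 1%/point from 75-100% of quota attainment.
def incentive_pct(attainment: float) -> float:
    """Return earned incentive as a percentage of target, for attainment up to 100%."""
    pct = 0.0
    pct += min(attainment, 50) * 0.75
    pct += max(min(attainment, 75) - 50, 0) * 1.5
    pct += max(min(attainment, 100) - 75, 0) * 1.0
    return pct

for a in (50, 80, 100):
    print(f"{a}% attainment -> {incentive_pct(a):.1f}% of incentive target")
# 50% -> 37.5%, 80% -> 80.0%, 100% -> 100.0% (full target at full quota)
```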
Cisco is an Affirmative Action and Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, national origin, genetic information, age, disability, veteran status, or any other legally protected basis.
Cisco will consider for employment, on a case by case basis, qualified applicants with arrest and conviction records.
AI Software Developer
Posted today
Job Description
**Req number:** R6259
**Employment type:** Full time
**Worksite flexibility:** Hybrid
**Who we are**
CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right-whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise.
**Job Summary**
If you thrive in environments where "AI" means building robust, maintainable systems (not just notebooks), and you've shipped AI features in production using AWS/Azure, this role is for you.
**Job Description**
We are looking for an **AI Software Developer** to design, build, and maintain cloud-native backend services, infrastructure, and APIs that power AI features. This position will be **full-time** and **hybrid.**
**What You'll Do**
+ Software Engineering & Cloud Infrastructure (Primary Focus): Design, build, and optimize cloud-native backend services (Python/Node.js) for AI applications on AWS or Azure (e.g., serverless, containers, managed services).
+ Develop infrastructure as code (IaC) using Terraform, CloudFormation, or ARM templates to automate cloud deployments.
+ Implement CI/CD pipelines for AI model deployment, application updates, and automated testing (e.g., GitHub Actions, Azure DevOps).
+ Build scalable APIs/microservices (FastAPI, gRPC) to serve AI features (e.g., LLM inference, agent workflows) with security, latency, and cost efficiency (a minimal service sketch follows this list).
+ Ensure reliability and observability via monitoring (Prometheus, CloudWatch), logging, and alerting for AI systems.
+ AI Integration & Productionization (Secondary Focus): Integrate generative AI and agentic systems (e.g., LangChain, CrewAI, AutoGen) into full-stack applications, productionizing workflows rather than just prototyping.
+ Design RAG pipelines with vector databases (e.g., Azure Cognitive Search, AWS OpenSearch) and optimize for latency/cost.
+ Fine-tune LLMs (using LoRA, PEFT) or leverage cloud AI services (e.g., AWS Bedrock, Azure OpenAI) for custom use cases.
+ Build data pipelines for AI training/inference (ingestion, preprocessing, synthetic data) with cloud tools (e.g., AWS Glue, Azure Data Factory).
+ Collaborate with ML engineers to deploy models via TorchServe, Triton, or cloud-managed services (e.g., SageMaker Endpoints, Azure ML Endpoints).
+ Collaboration & Ownership: Work cross-functionally with product, frontend, and data teams to translate business needs into scalable AI solutions.
+ Champion software best practices: testing (unit/integration), code reviews, documentation, and modular design.
+ Mentor junior engineers on cloud engineering and AI system design.
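As a sketch of the "scalable APIs/microservices to serve AI features" item above (assuming FastAPI and Pydantic; the `generate` stub stands in for a Bedrock/Azure OpenAI/self-hosted model client, and the route names are illustrative):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ai-inference-svc")

class Prompt(BaseModel):
    text: str
    max_tokens: int = 256

def generate(prompt: str, max_tokens: int) -> str:
    """Placeholder for the real model client (Bedrock, Azure OpenAI, vLLM, ...)."""
    return f"echo({max_tokens}): {prompt[:80]}"

@app.get("/healthz")
def healthz() -> dict:
    # Liveness probe for Kubernetes / load balancers.
    return {"status": "ok"}

@app.post("/v1/generate")
def generate_endpoint(req: Prompt) -> dict:
    # In production this is where auth, rate limiting, and tracing hooks would sit.
    return {"completion": generate(req.text, req.max_tokens)}

# Run locally with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```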
**What You'll Need**
Required:
+ 3-4 years of professional software development experience with strong fundamentals:
+ Proficiency in Python (required) and modern frameworks (FastAPI, Flask, Django).
+ Experience building cloud-native backend systems (AWS or Azure) with services like:
+ Compute (EC2, Lambda, Azure Functions, VMs)
+ Storage (S3, Blob Storage)
+ Databases (RDS, Cosmos DB, DynamoDB)
+ API gateways (API Gateway, Azure API Management)
+ Hands-on experience with containerization (Docker) and orchestration (Kubernetes).
+ Proven track record in CI/CD pipelines, infrastructure-as-code (Terraform/CloudFormation), and monitoring tools.
+ 1-2 years of hands-on experience in AI application development, specifically:
+ Building generative AI or agentic workflows (e.g., using LangChain, CrewAI, AutoGen).
+ Implementing RAG pipelines or fine-tuning LLMs in production (e.g., via AWS Bedrock, Azure OpenAI, or open-source models).
+ Experience with cloud AI services (SageMaker, Azure ML) or deploying open-source models on cloud infrastructure.
+ Strong software engineering discipline:
+ Writing testable, maintainable code with unit/integration tests.
+ Experience with Git workflows, agile development, and collaborative code reviews.
+ Understanding of system design (scalability, security, cost optimization).
+ Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
Preferred:
+ Experience with full-stack development (frontend frameworks like React/Vue for AI-powered UIs).
+ Knowledge of serverless architectures (AWS Lambda/Azure Functions) for AI workloads.
+ Familiarity with MLOps tools (MLflow, Kubeflow) or cloud-native MLOps (SageMaker Pipelines, Azure ML Pipelines).
+ Prior work on cost-optimized AI systems (e.g., model quantization, autoscaling, spot instances).
+ Contributions to open-source AI/ML projects or cloud infrastructure tooling.
**Physical Demands**
+ Ability to safely and successfully perform the essential job functions
+ Sedentary work that involves sitting or remaining stationary most of the time with occasional need to move around the office to attend meetings, etc.
+ Ability to conduct repetitive tasks on a computer, utilizing a mouse, keyboard, and monitor
**Reasonable accommodation statement**
If you require a reasonable accommodation in completing this application, interviewing, completing any pre-employment testing, or otherwise participating in the employment selection process, please direct your inquiries to or (888) 824 - 8111.
Business Intel Engineer I, Global Operations - Artificial Intelligence
Posted today
Job Description
Want to join the Earth's most customer centric company? Do you like to dive deep to understand problems? Are you someone who likes to challenge Status Quo? Do you strive to excel at goals assigned to you? If yes, we have opportunities for you. Global Operations - Artificial Intelligence (GO-AI) at Amazon is looking to hire candidates who can excel in a fast-paced dynamic environment.
Are you somebody that likes to use and analyze big data to drive business decisions? Do you enjoy converting data into insights that will be used to enhance customer decisions worldwide for business leaders? Do you want to be part of the data team which measures the pulse of innovative machine vision-based projects? If your answer is yes, join our team. GO-AI is looking for a motivated individual with strong skills and experience in resource utilization planning, process optimization and execution of scalable and robust operational mechanisms, to join the GO-AI Ops DnA team. In this position you will be responsible for supporting our sites to build solutions for the rapidly expanding GO-AI team. The role requires the ability to work with a variety of key stakeholders across job functions with multiple sites.
We are looking for an entrepreneurial and analytical program manager, who is passionate about their work, understands how to manage service levels across multiple skills/programs, and who is willing to move fast and experiment often.
Key job responsibilities
Key responsibilities include:
- Ability to maintain and refine straightforward ETL and write secure, stable, testable, maintainable code with minimal defects and automate manual processes.
- Proficiency in one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, PowerBI) and, as needed, statistical methods (e.g., t-test, Chi-squared) to deliver actionable insights to stakeholders (a small t-test sketch follows this list).
- Building and owning small to mid-size BI solutions (data sets, queries, reports, dashboards, analyses, or components of larger solutions) with high accuracy and on-time delivery, answering straightforward business questions while incorporating business intelligence best practices, data management fundamentals, and analysis principles.
- Good understanding of the relevant data lineage: the sources of data, how metrics are aggregated, and how the resulting business intelligence is consumed, interpreted, and acted upon by the business, so that the end product enables effective, data-driven business decisions.
- Taking ownership of inherited or newly produced code, queries, reports, and analyses, and having analyses and code reviewed periodically.
- Partnering effectively with peer BIEs and others on your team to troubleshoot and research root causes, and proposing solutions by either taking ownership of their resolution or ensuring a clear hand-off to the right owner.
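A toy illustration of the statistical-methods item above: a Welch's t-test checking whether a workflow change moved mean handling time. The data is simulated, not Amazon data, and the sketch assumes NumPy and SciPy.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
before = rng.normal(loc=48.0, scale=6.0, size=400)   # seconds per exception, old workflow
after  = rng.normal(loc=46.5, scale=6.0, size=400)   # seconds per exception, new workflow

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)  # Welch's t-test
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant; report the improvement with an effect size.")
else:
    print("No significant difference; keep collecting data before claiming a win.")
```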
About the team
The Global Operations - Artificial Intelligence (GO-AI) team is an initiative, which remotely handles exceptions in the Amazon Robotic Fulfillment Centers Globally. GO-AI seeks to complement automated vision based decision-making technologies by providing remote human support for the subset of tasks which require higher cognitive ability and cannot be processed through automated decision making with high confidence. This team provides end-to-end solutions through inbuilt competencies of Operations and strong central specialized teams to deliver programs at Amazon scale. It is operating multiple programs including Nike IDS, Proteus, Sparrow and other new initiatives in partnership with global technology and operations teams.
Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries)
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, PowerBI) and statistical methods (e.g. t-test, Chi-squared)
- Experience with scripting language (e.g., Python, Java, or R)
- Experience applying basic statistical methods (e.g. regression) to difficult business problems
Preferred Qualifications
- Master's degree, or Advanced technical degree
- Experience with statistical analysis and correlation analysis
- Knowledge of how to improve code quality and optimize BI processes (e.g. speed, cost, reliability)
- Excellence in technical communication with peers, partners, and non-technical cohorts
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.