666 CloudFormation jobs in India
Sr. Data Engineer (CloudFormation)
Posted 8 days ago
Job Description
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia.
CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers.
CACI has approximately 23,000 employees worldwide.
Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn.
Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.
About Data Platform:
- The Data Platform will be built and managed “as a Product” to support a Data Mesh organization.
- The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance of data and project environments across business domains.
- The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready and are consumed both internally and externally.
What does a Data Infrastructure Engineer do?
- A Data Infrastructure Engineer is responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business.
- The data platform infrastructure conforms to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment.
- You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform.
- You will design architectures and create reusable solutions that reflect business needs.
Responsibilities will include:
- Collaborating across CACI departments to develop and maintain the data platform
- Building infrastructure and data architectures in CloudFormation and SAM (see the sketch after this list).
- Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift and Aurora, as well as Snowflake
- Building data processing and analytics pipelines as code, using Python, SQL, PySpark/Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
- Monitoring and reporting on the data platform performance, usage and security
- Designing and applying security and access control architectures to secure sensitive data
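The responsibilities above pair CloudFormation with pipelines-as-code and CI/CD. As a hedged illustration of that workflow, here is a minimal AWS CDK v2 sketch in TypeScript; CDK stands in here for hand-written CloudFormation/SAM templates (it synthesizes to CloudFormation), and the bucket and function below are illustrative assumptions, not resources specified by the role.

```typescript
// Minimal AWS CDK v2 sketch; `cdk synth` emits a CloudFormation template.
// All resource names here are illustrative assumptions.
import { App, RemovalPolicy, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

class DataLandingStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Encrypted, versioned landing bucket for raw data; public access blocked
    // in line with the zero-trust, least-privilege posture described above.
    const landing = new s3.Bucket(this, 'LandingBucket', {
      encryption: s3.BucketEncryption.S3_MANAGED,
      versioned: true,
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // Placeholder ingestion function; a real pipeline might hand off to
    // Glue, Step Functions or Airflow instead.
    const ingest = new lambda.Function(this, 'IngestFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline(
        'exports.handler = async () => ({ statusCode: 200 });'
      ),
    });

    landing.grantRead(ingest); // scoped, read-only access for the function
  }
}

const app = new App();
new DataLandingStack(app, 'DataLandingStack');
```

Running `cdk synth` produces the same CloudFormation artifact a SAM or hand-written template would, so a CI/CD pipeline can diff and deploy it like any other stack.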
You will have:
- 6+ years of experience in a Data Engineering role.
- Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
- Experience administering databases and data platforms
- Good coding discipline in terms of style, structure, versioning, documentation and unit tests
- Strong proficiency in CloudFormation, Python and SQL
- Knowledge and experience of relational databases such as Postgres and Redshift
- Experience using Git for code versioning and lifecycle management
- Experience operating to Agile principles and ceremonies
- Hands-on experience with CI/CD tools such as GitLab
- Strong problem-solving skills and ability to work independently or in a team environment.
- Excellent communication and collaboration skills.
- A keen eye for detail, and a passion for accuracy and correctness in numbers
Whilst not essential, the following skills would also be useful:
- Experience using Jira, or other agile project management and issue tracking software
- Experience with Snowflake
- Experience with Spatial Data Processing
AWS Developer - CloudFormation/Terr...
Posted today
Job Description
• Experience with AppDynamics (or equivalent), Splunk (or equivalent)
• Strong understanding of IT security principles such as role-based access, multi-factor authentication, access lists, firewalls, encryption, and bastion hosts.
• Experience working in teams following agile software development practices with an MVP-deliverable mindset.
Senior Developer – Backend & Cloud Deployment Specialist
Posted 3 days ago
Job Description
Title: Senior Developer – Backend & Cloud Deployment Specialist
Location: Ahmedabad, Gujarat.
Experience: 4-6 Years
Employment Type: Full-Time
Apeiros AI Pvt. Limited is a pioneering provider of retail and customer engagement software solutions tailored for hyperlocal mom-and-pop retailers. Our mission is to empower our clients with innovative, user-friendly products that streamline operations, enhance customer experiences, and drive business growth. We thrive on enabling small retailers to leverage technology effectively in today's competitive market.
We are looking for a highly skilled Senior Node.js Developer to manage and deploy backend applications in cloud environments such as AWS, Azure, and GCP. The candidate should have in-depth experience with Node.js backend systems, relational databases like MySQL, and the entire deployment lifecycle.
Responsibilities:
- Design, develop, and maintain scalable backend systems using Node.js.
- Deploy Node.js applications on AWS, Azure, and GCP environments.
- Optimize database queries and manage MySQL schema design.
- Create RESTful and GraphQL APIs to serve frontend applications (see the sketch after this list).
- Ensure application performance, reliability, and scalability.
- Implement authentication, authorization, and secure data practices.
- Set up CI/CD pipelines for automated deployment and testing.
- Monitor cloud infrastructure and debug production issues.
- Collaborate with frontend developers and DevOps teams.
- Mentor junior developers and contribute to code reviews and best practices.
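To make the API responsibilities concrete, here is a minimal Express sketch in TypeScript; the routes, placeholder payload and PORT fallback are illustrative assumptions rather than details from the posting.

```typescript
// Minimal Express (TypeScript) REST sketch; route names are illustrative.
import express, { Request, Response } from 'express';

const app = express();
app.use(express.json()); // parse JSON request bodies

// Liveness probe so the cloud load balancer can verify the service is up.
app.get('/health', (_req: Request, res: Response) => {
  res.status(200).json({ status: 'ok' });
});

// Example resource route; a real service would back this with MySQL.
app.get('/api/customers/:id', (req: Request, res: Response) => {
  res.json({ id: req.params.id, name: 'placeholder' });
});

const port = Number(process.env.PORT ?? 3000); // platform-injected in cloud
app.listen(port, () => console.log(`listening on :${port}`));
```

In a deployed environment, the authentication and rate-limiting middleware mentioned in the skills below would sit in front of these routes.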
Must-Have Skills:
- Strong expertise in Node.js, Express.js, and backend service architecture
- Advanced proficiency in MySQL database design, queries, and optimization
- Experience in deploying applications on AWS, Microsoft Azure, and Google Cloud Platform (GCP)
- Knowledge of containerization tools like Docker
- Understanding of CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI
- Experience in RESTful and GraphQL API development
- Proficiency with Git and version control workflows
- Good understanding of cloud security and API rate limiting
Good to Have:
- Knowledge of microservices architecture
- Familiarity with NoSQL databases like MongoDB or Redis
- Experience with serverless architectures (e.g., AWS Lambda, Azure Functions)
- Basic frontend understanding (React, Vue) for integration purposes
Education:
- Bachelor's degree in Computer Science, IT, or a related field.
- Certifications in cloud platforms (AWS/GCP/Azure) are a plus.
Soft Skills:
- Excellent problem-solving and debugging skills
- Strong communication and teamwork abilities
- Self-driven and able to manage tasks independently
Note: candidates from Ahmedabad are strongly preferred.
How to Apply:
Interested candidates can send their updated resumes to or contact us at for more details.
Infrastructure-as-Code (IaC) Architect
Posted today
Job Description
This role has been designated as ‘Office’, which means you will primarily work from an HPE office.
**Infrastructure-as-Code (IaC) Architect**
**Requirements**:
- 15+ years of experience, with more than 4 years of architectural experience
- Experience in working with different project and architecture methodologies and frameworks.
- Experience with Infrastructure-as-Code platforms (Ansible/Terraform/VMware vRA), cloud migration, and DevOps automation
- Experience in Coaching, Mentoring, and Developing people.
- Good understanding of IaaS and PaaS services on the cloud.
- Good understanding of DevOps processes and tools with hands-on experience.
- Experience with CI/CD pipeline tools like Jenkins/Bamboo or equivalent tools.
- Experience on GitHub/GitLab.
- Good experience in creating build and release pipelines and deploying web apps.
- Experience with JSON and ARM templates (see the sketch after this list).
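As a hedged illustration of the ARM-template structure this list mentions, the sketch below builds a minimal deployment template as a typed TypeScript object and writes it out as JSON; the storage-account name, apiVersion and SKU are illustrative assumptions, not values from the posting.

```typescript
// Minimal ARM deployment template expressed as a TypeScript object.
// Real templates are plain JSON; values below are illustrative assumptions.
import { writeFileSync } from 'node:fs';

const armTemplate = {
  $schema:
    'https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#',
  contentVersion: '1.0.0.0',
  parameters: {
    location: {
      type: 'string',
      defaultValue: '[resourceGroup().location]', // inherit the resource group's region
    },
  },
  resources: [
    {
      type: 'Microsoft.Storage/storageAccounts',
      apiVersion: '2022-09-01', // hypothetical; pin to a current version
      name: 'examplestorage001', // must be globally unique in practice
      location: "[parameters('location')]",
      sku: { name: 'Standard_LRS' },
      kind: 'StorageV2',
    },
  ],
};

// Emit template.json for `az deployment group create --template-file template.json`.
writeFileSync('template.json', JSON.stringify(armTemplate, null, 2));
```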
**Key Responsibilities**
- Drive Infrastructure projects
- Support the Managed Services Delivery Team with the transition and establishment of the managed services.
- Provide issue resolution as a point of contact for technical questions.
- Facilitate client and internal meetings; present architecture and design solutions.
- Plan implementations and advance strategies for new initiatives.
- Mentor Infrastructure as Code (IaC) project delivery and publish a white paper.
- Lead due diligence exercises analyzing complex data to drive solution, risk, and commercial decisions in the best interest of the business unit.
- Proficiency in Design Thinking and Agile Methodology.
- Experience in developing integrated managed services solutions.
- This role requires flexibility in working with various key project stakeholders across the world and in different time zones.
- Experience across Infrastructure Managed and Delivery Services life cycle
- Client-facing experience in terms of negotiation and presentation skills
- Create the framework and development practices for developers of modern architectures
- Design & architect solutions and review the work
**Additional Responsibilities**:
- Designs, prepares and executes support services strategy and operational strategy. Sets technical strategy and direction. Represents team(s) to senior management and clients/customers.
- A preponderance of time is spent on strategic and creative problem-solving.
- Develops innovative multi-team solutions to complex problems. Independently implements end-user or enterprise infrastructure or services of significant complexity. Applies deep, broad knowledge of technology and industry trends to lead operations and administration of high-risk, critical infrastructure or software platforms and highly complex user groups.
- Demonstrates broad technical leadership, impacting significant technical direction; exerts influence outside of the immediate team and drives change. Integrates technical expertise and business understanding to create superior solutions for the company and customers.
- Mentors and consults with team members and other organizations, customers, and vendors on complex issues. Independently resolves highly complex technical issues within multiple technical areas. Partners with members of multiple teams as appropriate; leads the technical team while resolving key issues. Mentors and assists other less experienced team members. Identifies potential escalations and proactively alerts management; leads and escalates through L3; engages in resolution at L2.
- Proactively searches for issues and provides solutions to prevent problems from occurring in areas beyond those of immediate responsibility.
- Independently reviews and manages highly complex and high-risk changes to critical business systems. Leads or participates in the Change Advisory or Technical Advisory Board. Mentors others in the technology community; may publish or otherwise engage professionally outside of the company.
**Education and Experience Required**:
- 15+ years of relevant industry experience
- Bachelor's degree in Management Information Systems or Computer Science (or equivalent experience) and a minimum of 12 years of related experience, or a Master's degree and a minimum of 10 years of experience.
**Job**:
Information Technology
**Job Level**:
Master
**Hewlett Packard Enterprise is EEO F/M/Protected Veteran/Individual with Disabilities.**
HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
AWS Cloud
Posted today
Job Description
• Java programming knowledge is required.
• 8-9 years of overall development (Java stack) experience, including 4+ years with AWS Cloud services.
D&A Sr Infrastructure as Code (IaC) Engineer - GCP
Posted today
Job Description
**Job Summary**
We are seeking a Senior IaC Engineer to architect, develop, and automate D&A GCP PaaS services and Databricks platform provisioning using Terraform, Spacelift, and GitHub. This role combines the depth of platform engineering with the principles of reliability engineering, enabling resilient, secure, and scalable cloud environments. The ideal candidate has **6+ years of hands-on experience with IaC**, CI/CD, infrastructure automation, and driving cloud infrastructure reliability.
**Key Responsibilities**
+ **Infrastructure & Automation**
+ Design, implement, and manage modular, reusable Terraform modules to provision GCP resources (BigQuery, GCS, VPC, IAM, Pub/Sub, Composer, etc.); see the sketch after this section.
+ Automate provisioning of Databricks workspaces, clusters, jobs, service principals, and permissions using Terraform.
+ Build and maintain CI/CD pipelines for infrastructure deployment and compliance using GitHub Actions and Spacelift.
+ Standardize and enforce GitOps workflows for infrastructure changes, including code reviews and testing.
+ Integrate infrastructure cost control, policy-as-code, and secrets management into automation pipelines.
+ **Architecture & Reliability**
+ Lead the design of scalable and highly reliable infrastructure patterns across GCP and Databricks.
+ Implement resiliency and fault-tolerant designs, backup/recovery mechanisms, and automated alerting around infrastructure components.
+ Partner with SRE and DevOps teams to enable observability, performance monitoring, and automated incident response tooling.
+ Develop proactive monitoring and drift detection for Terraform-managed resources.
+ Contribute to reliability reviews, runbooks, and disaster recovery strategies for cloud resources.
+ **Collaboration & Governance**
+ Work closely with security, networking, FinOps, and platform teams to ensure compliance, cost-efficiency, and best practices.
+ Define Terraform standards, module registries, and access patterns for scalable infrastructure usage.
+ Provide mentorship, peer code reviews, and knowledge sharing across engineering teams.
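As a sketch of the Terraform-managed GCP provisioning described above, expressed via CDKTF (CDK for Terraform) in TypeScript as a stand-in for the HCL modules the role names; the project ID, bucket and dataset identifiers are illustrative assumptions, and the prebuilt @cdktf/provider-google bindings are assumed to be installed.

```typescript
// CDKTF sketch standing in for a Terraform HCL module; synthesizes to
// Terraform JSON that Spacelift or `terraform plan/apply` can run.
import { App, TerraformStack } from 'cdktf';
import { Construct } from 'constructs';
import { GoogleProvider } from '@cdktf/provider-google/lib/provider';
import { StorageBucket } from '@cdktf/provider-google/lib/storage-bucket';
import { BigqueryDataset } from '@cdktf/provider-google/lib/bigquery-dataset';

class DataPlatformStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new GoogleProvider(this, 'google', {
      project: 'my-data-platform', // hypothetical project ID
      region: 'us-central1',
    });

    // GCS landing bucket with uniform, IAM-only access control.
    new StorageBucket(this, 'landing', {
      name: 'my-data-platform-landing', // bucket names are globally unique
      location: 'US',
      uniformBucketLevelAccess: true,
    });

    // BigQuery dataset for curated tables.
    new BigqueryDataset(this, 'curated', {
      datasetId: 'curated',
      location: 'US',
    });
  }
}

const app = new App();
new DataPlatformStack(app, 'data-platform');
app.synth(); // writes cdktf.out/ for the pipeline to plan and apply
```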
**Required Skills & Experience**
+ 6+ years of experience with Terraform and Infrastructure as Code (IaC), with deep expertise in GCP provisioning.
+ Experience in automating Databricks (clusters, jobs, users, ACLs) using Terraform.
+ Strong hands-on experience with Spacelift (or similar tools like Terraform Cloud or Atlantis) and GitHub CI/CD workflows.
+ Deep understanding of infrastructure reliability principles: HA, fault tolerance, rollback strategies, and zero-downtime deployments.
+ Familiar with monitoring/logging frameworks (Cloud Monitoring, Stackdriver, Datadog, etc.).
+ Strong scripting and debugging skills to troubleshoot infrastructure or CI/CD failures.
+ Proficient with GCP networking, IAM policies, folder/project structure, and Org Policy configuration.
**Nice to Have**
+ HashiCorp Certified: Terraform Associate or Architect.
+ Familiarity with SRE principles (SLOs, error budgets, alerting).
+ Exposure to FinOps strategies: cost controls, tagging policies, budget alerts.
+ Experience with container orchestration (GKE/Kubernetes); Cloud Composer is a plus
No Relocation support available
**Business Unit Summary**
At Mondelez International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about.
We have a rich portfolio of strong brands, both global and local, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum.
Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen, and happen fast.
Mondelez International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
**Job Type**
Regular
Analytics & Modelling
Analytics & Data Science
Join us and Make It An Opportunity!
Mondelez Global LLC is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected Veteran status, sexual orientation, gender identity, gender expression, genetic information, or any other characteristic protected by law. Applicants who require accommodation to participate in the job application process may contact for assistance.