A Cloud Cache cluster is an in-memory data grid. It is ideal for use as a cache.
Under the constraints of the CAP theorem, Cloud Cache provides consistency and partition tolerance, while relaxing its guarantees on availability.
A cluster is formed from two components: locators and servers.
Locators oversee cluster membership and allow the various components to discover each other. The locators also implement a protocol for handling network partitions: if a partition separates the components, one side of the partition continues operating.
Servers hold data. The data is kept in memory (not on disk), yielding higher performance. The number of servers in a running cluster can be scaled without degrading performance: more servers increase data capacity, and more servers also spread the configurable number of redundant copies across more hosts, increasing fault tolerance. See Data Architecture for details on how data may be distributed among servers.
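As a sketch of how redundant copies are configured, the commands below use gfsh, the CLI of the GemFire/Geode technology underlying Cloud Cache; the region name and copy count here are illustrative, not prescribed by this document.

```
gfsh> create region --name=customers --type=PARTITION_REDUNDANT --redundant-copies=2
```

With two redundant copies, each entry exists on three servers in total, so the region tolerates the loss of up to two servers without losing data.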
An app works with a cluster in a client-server model. The app is the client within the model, and the cluster’s servers are the servers within the model.
The app interacts with the data by declaring and using a cache, which is local to the app. The local cache may hold data, but a cloud-native app that does not hold state keeps no data in its local cache; instead, all data operations are sent to the servers.
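A minimal sketch of this client-server pattern, assuming the Apache Geode client API on which Cloud Cache is built; the locator host, port, and region name are placeholders. The PROXY shortcut declares a local cache that stores nothing, so every operation is forwarded to the servers, matching the stateless-app pattern described above. This fragment is not runnable on its own, since it requires a live cluster to connect to.

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class ClientExample {
    public static void main(String[] args) {
        // Discover the servers through a locator (host/port are placeholders).
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator.example.com", 10334)
                .create();

        // PROXY: the local cache holds no data; all operations go to the servers.
        Region<String, String> customers = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("customers");

        customers.put("c42", "Alice");  // executed on the servers, not locally
        cache.close();
    }
}
```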
Cached data is held in regions. A region is a logical grouping of data, and it provides the same organizational structure as a database table. A region is implemented with a map data structure. Each entry within a region is a key/value pair.
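To illustrate the key/value semantics a region exposes, the sketch below stands in a plain `java.util.Map` for a region, since no cluster is needed to show the entry structure; the "customers" grouping and its keys are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class RegionAnalogy {
    // A "customers" region behaves like a map: key = customer id, value = name.
    static final Map<String, String> customers = new HashMap<>();

    public static void main(String[] args) {
        customers.put("c42", "Alice");      // analogous to region.put(key, value)
        String name = customers.get("c42"); // analogous to region.get(key)
        System.out.println(name);           // prints "Alice"
    }
}
```

As with a database table, the region groups related entries under one name; unlike a table, each entry is addressed directly by its key rather than queried by column.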