

Result Caching

Introduction

Result caching refers to a cache that exists at the level of an aspect engine. If caching is enabled, then when a request arrives at the aspect engine, the engine first checks whether it has recently processed a request with the same evidence. If it has, the result determined by the previous processing is returned without any further work being required.

Result caching will improve performance in most scenarios where the processing time of an aspect engine is significant. However, profiling should always be undertaken to ensure that the chosen configuration delivers a real benefit in the relevant environment.

Note that some aspect engines (for example, the 51Degrees on-premise device detection engine) may have internal caches for various reasons. These are separate from the Pipeline results cache and may or may not be configurable, depending on the implementation of the aspect engine.

Internals

The evidence encapsulated within flow data will often contain many more values than are relevant to a particular aspect engine. Consequently, we don't want to use the whole of the evidence structure as a key to the results cache.

Instead, the aspect engine's evidence key filter is used to produce a data key from the subset of the evidence that the engine will actually use.

This data key is then used to access the cache and check for an existing result. If one is found, it is added to the flow data as normal. If not, the aspect engine performs the usual processing and then stores the result in the cache against the data key.

Below is a flow chart illustrating this process.
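
In code, the same process looks roughly like the Java sketch below. The types and method names are deliberately simplified stand-ins for the real Pipeline classes (flow data, evidence key filter, aspect data), not the actual API; the sketch only shows how the data key is derived from the filtered evidence and then used for the cache lookup.

    import java.util.Map;
    import java.util.Set;
    import java.util.TreeMap;
    import java.util.concurrent.ConcurrentHashMap;

    // Simplified sketch of the result-caching flow inside an aspect engine.
    // The real Pipeline classes are richer than this; only the caching logic
    // is illustrated here.
    public class CachingFlowSketch {

        // The evidence keys this engine cares about (the "evidence key filter").
        private final Set<String> evidenceKeyFilter;

        // The results cache, keyed by the filtered evidence (the "data key").
        private final Map<String, String> resultCache = new ConcurrentHashMap<>();

        public CachingFlowSketch(Set<String> evidenceKeyFilter) {
            this.evidenceKeyFilter = evidenceKeyFilter;
        }

        public String process(Map<String, String> evidence) {
            // 1. Build a data key from only the evidence values this engine uses.
            //    A sorted map gives a stable, order-independent key.
            TreeMap<String, String> relevant = new TreeMap<>();
            for (String key : evidenceKeyFilter) {
                if (evidence.containsKey(key)) {
                    relevant.put(key, evidence.get(key));
                }
            }
            String dataKey = relevant.toString();

            // 2. Check the cache; on a hit, return the stored result immediately.
            String cached = resultCache.get(dataKey);
            if (cached != null) {
                return cached;
            }

            // 3. On a miss, do the normal processing and store the result
            //    against the data key for subsequent requests.
            String result = doExpensiveProcessing(relevant);
            resultCache.put(dataKey, result);
            return result;
        }

        // Stand-in for the engine's real processing logic.
        private String doExpensiveProcessing(Map<String, String> relevantEvidence) {
            return "result for " + relevantEvidence;
        }
    }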

Configuration

The result cache can be configured using the element builder associated with the aspect engine.

Any cache that implements the 51Degrees cache interface can be used. By default, we provide a relatively simple shared LRU (least recently used) cache, which has low overhead, copes well with concurrent access, and performs well in a wide range of scenarios.

This cache has a single 'size' parameter that determines how many result instances will be stored. Increasing this value will generally improve performance at the cost of increased memory usage. Profiling should always be used to determine appropriate settings for your use-case.
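
As a concrete sketch, the Java snippet below enables a results cache when building an on-premise device detection engine. The class and method names (DeviceDetectionHashEngineBuilder, CacheConfiguration, setCache) and the package paths are assumptions based on the pattern used in the 51Degrees Java examples and may differ between versions and languages; consult the caching examples for your language for the exact API.

    import org.slf4j.ILoggerFactory;
    import org.slf4j.LoggerFactory;

    // NOTE: the 51Degrees package paths and class names below are assumptions
    // based on the Java examples; verify them against the examples shipped
    // with your version of the Pipeline.
    import fiftyone.pipeline.engines.configuration.CacheConfiguration;
    import fiftyone.devicedetection.hash.engine.onpremise.flowelements.DeviceDetectionHashEngine;
    import fiftyone.devicedetection.hash.engine.onpremise.flowelements.DeviceDetectionHashEngineBuilder;

    public class ResultCacheConfigExample {
        public static void main(String[] args) throws Exception {
            ILoggerFactory loggerFactory = LoggerFactory.getILoggerFactory();

            DeviceDetectionHashEngine engine =
                new DeviceDetectionHashEngineBuilder(loggerFactory)
                    // Enable the results cache, keeping up to 1000 results.
                    // A larger size generally means more cache hits at the
                    // cost of more memory.
                    .setCache(new CacheConfiguration(1000))
                    .build("path/to/51Degrees-LiteV4.1.hash", false);
        }
    }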

Examples

We provide examples showing how to enable result caching for each language where it is supported. There are also examples of implementing and using a custom cache.
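
To give a flavour of what a custom cache might look like, the Java sketch below implements a minimal LRU cache on top of LinkedHashMap. The SimpleCache interface is a hypothetical stand-in for the 51Degrees cache interface, which differs in detail between languages; a real custom cache would implement the interface provided by the Pipeline package for your language.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical stand-in for the 51Degrees cache interface: get a value
    // by key, and put a value against a key.
    interface SimpleCache<K, V> {
        V get(K key);
        void put(K key, V value);
    }

    // A minimal LRU cache built on LinkedHashMap's access-order mode.
    // Synchronized for simplicity; a production cache would need a more
    // sophisticated approach to concurrent access.
    class SimpleLruCache<K, V> implements SimpleCache<K, V> {
        private final Map<K, V> map;

        SimpleLruCache(int size) {
            // accessOrder = true makes iteration order reflect recency of use,
            // so the eldest entry is always the least recently used one.
            this.map = new LinkedHashMap<K, V>(size, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > size;
                }
            };
        }

        @Override
        public synchronized V get(K key) {
            return map.get(key);
        }

        @Override
        public synchronized void put(K key, V value) {
            map.put(key, value);
        }
    }

LinkedHashMap's access-order mode keeps entries ordered by recency, which reduces the eviction rule to a one-line override; the coarse synchronized methods keep the sketch short but would become a bottleneck under heavy concurrent load.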