Introduction
For device detection, as with many computing tasks, there are tradeoffs to be made between the performance of the algorithm and its memory usage, and between performance and adaptability.
Rather than trying to solve this with a one-size-fits-all approach, our device detection API allows you to easily configure the solution to suit your requirements.
Performance Profile Templates
At a low level, the device detection API uses various collections of data from the data file to perform detections. These collections may either be fully mapped into memory or accessed via highly optimized LRU caches, with data loaded from disk on a cache miss.
The data access mechanism, as well as the size of these caches, can be configured individually. However, we have defined templates that we believe will cover the majority of scenarios.
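To illustrate the caching mechanism described above, here is a minimal, generic sketch of an LRU cache in front of a slower backing store. This is not the 51Degrees implementation; the `loader` callable stands in for a read from the data file on disk.

```python
from collections import OrderedDict

class LruCache:
    """Minimal LRU cache: holds at most `capacity` items in memory,
    loading missing items via `loader` (a stand-in for a disk read)."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader
        self.items = OrderedDict()
        self.misses = 0

    def get(self, key):
        if key in self.items:
            self.items.move_to_end(key)  # mark as most recently used
            return self.items[key]
        self.misses += 1
        value = self.loader(key)  # cache miss: load from backing store
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        return value
```

The tradeoff the templates tune is visible here: a larger `capacity` means fewer misses (fewer disk reads) at the cost of more memory held, and a cache large enough for everything behaves like the fully in-memory configuration over time.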
The exact method for specifying the template varies by programming language. See the performance examples for a demonstration.
The table below explains the options, from fastest performance and highest memory usage to slowest performance and lowest memory usage.
| Template Name | Behavior | Recommendations |
|---|---|---|
| MaxPerformance | All data from the data file is mapped into main memory at startup. As caches are not needed, data access is lock-free. | Use when memory usage is not a problem and performance is critical. This configuration is also strongly recommended when the API is running in a highly concurrent environment. |
| HighPerformance | Data accessed via caches; caches are large enough that all data from the data file can be accommodated as it is requested over time. | Generally not recommended. It offers slightly worse performance than MaxPerformance but will grow to the same memory usage over time. Can be useful as a starting point when creating a custom configuration. |
| Balanced (Default) | Data accessed via caches; some caches are smaller than in the HighPerformance template. However, there is enough space that the most commonly accessed items are retained in memory. As such, loading from disk is still relatively uncommon (assuming a typical web server workload). | Fine for generic workloads where there is no extreme memory or performance requirement. |
| LowMemory | Data always streamed from disk on demand. | Recommended when the lowest possible memory usage is more important than performance. |
The precise values associated with each template can be seen in the source code on GitHub.
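As a purely illustrative sketch of the idea, a template can be thought of as a named preset that expands into concrete data-access settings. The names mirror the table above, but the setting keys and values here are invented for illustration; the real values are in the source code on GitHub.

```python
# Hypothetical mapping of template name -> data-access settings.
# The keys and numbers are illustrative assumptions, NOT the real
# 51Degrees configuration values.
TEMPLATES = {
    "MaxPerformance":  {"load_all_into_memory": True,  "cache_size": None},
    "HighPerformance": {"load_all_into_memory": False, "cache_size": 100_000},
    "Balanced":        {"load_all_into_memory": False, "cache_size": 10_000},
    "LowMemory":       {"load_all_into_memory": False, "cache_size": 0},
}

def build_config(template="Balanced"):
    """Expand a template name into a settings dictionary,
    defaulting to the Balanced profile."""
    if template not in TEMPLATES:
        raise ValueError(f"Unknown template: {template}")
    return dict(TEMPLATES[template])
```

A custom configuration would correspond to starting from one of these presets and overriding individual values, which is why HighPerformance is suggested above as a starting point.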
Evaluation graphs
The hash data file includes two different 'graphs' that can be used when trying to find a match: performance and predictive.
The performance graph is significantly faster than predictive, but is less tolerant of differences between the training data and the evaluated user-agent.
This means that the performance graph is generally recommended when fast matching is the primary concern and the data file is regularly and frequently updated.
In comparison, the predictive graph is recommended when getting an accurate match for every request is the primary concern, particularly when user-agents are frequently encountered that are not in the training data.
Note that, if both graphs are enabled, the performance graph will be used first. The predictive graph will only be used if the algorithm fails to find a match using the performance graph.
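The fallback behavior just described can be sketched generically as follows. The `match_performance` and `match_predictive` callables are stand-ins for the two graph evaluations, not the real API.

```python
def detect(user_agent, match_performance, match_predictive,
           performance_enabled=True, predictive_enabled=True):
    """Try the fast performance graph first; fall back to the more
    tolerant predictive graph only when no match was found."""
    if performance_enabled:
        result = match_performance(user_agent)
        if result is not None:
            return result  # performance graph matched; done
    if predictive_enabled:
        return match_predictive(user_agent)
    return None  # no graph enabled or predictive disabled after a miss
```

This ordering preserves speed for user-agents the performance graph recognizes, while paying the predictive graph's extra cost only for the harder cases.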
The default graph options are defined by the performance templates described above. At the time of writing, all templates enable the predictive graph and disable the performance graph. This is done to maximize accuracy and ensure consistent device detection results across the profiles.