In the age of Web 2.0, dynamically generated web pages are becoming ever more important; their content is driven by database contents or user input. Dynamic pages are not stored statically as HTML files on the server, but are generated only when a user requests them.
Web pages delivered by a Content Management System are generally created in this manner.
The more users request such a page, the more pages the server has to assemble at the “same” time. This naturally increases the load on the server or servers, and beyond a certain point users have to wait noticeably longer for their pages to load.
It soon becomes evident, therefore, that it is wise to buffer data that does not change often. When a user requests it, it can be read from memory, the so-called cache, which is of course faster than regenerating the page each time.
With this caching principle, information is pre-computed and generally expires after a certain period. The information thus remains more or less up to date without having to be recomputed on every request.
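This time-based expiry can be sketched in a few lines of Python. The class and its names are purely illustrative, not part of onion.net's API:

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire after ttl_seconds."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get(self, key, compute):
        """Return the cached value, recomputing it only when expired or missing."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]            # still fresh: serve from the cache
        value = compute()              # expired or missing: recompute
        self._store[key] = (value, now)
        return value
```

A page renderer would be called as `cache.get("/products", render_overview)`: within the TTL window every request is served from memory, and only after expiry is the page generated again.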
Event-based caching is considerably more elegant, however. onion.net makes it possible to take part in event handling, i.e. to wait for events or to trigger them.
A page is then recomputed only when information in the database that concerns this page actually changes.
Imagine that our onion.net holds products with certain information. In addition, there are pages that present this information in different ways: for example an overview page that lists all articles but displays only their name and price, and a detail page for each individual article that shows much more information.
If an article detail changes, this modification may concern only the detail page, which is then recomputed. The overview page, which does not need this information, does not change and can remain in the cache.
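The idea of invalidating only affected pages can be illustrated with a small sketch. Pages register which article fields they depend on, and a change event evicts only the pages that use the changed field. All names here are hypothetical; onion.net's actual event mechanism works differently under the hood:

```python
class EventCache:
    """Sketch of event-based invalidation: a change event drops only
    the cached pages that depend on the changed field."""

    def __init__(self):
        self._pages = {}  # page key -> rendered content
        self._deps = {}   # page key -> set of (article_id, field) it uses

    def put(self, page, content, deps):
        """Cache a rendered page together with its field dependencies."""
        self._pages[page] = content
        self._deps[page] = set(deps)

    def get(self, page):
        """Return the cached page, or None if it was invalidated."""
        return self._pages.get(page)

    def on_change(self, article_id, field):
        """Handle a change event: evict only pages using this field."""
        for page, deps in self._deps.items():
            if (article_id, field) in deps:
                self._pages.pop(page, None)
```

With this sketch, a change to an article's description evicts only the detail page; the overview, which depends solely on name and price, stays cached.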
The disadvantage, however, is that a new computation must be made for every change.
In practice, caches usually operate on a graph basis, i.e. they also record the dependencies that exist between cached objects or aggregations.
Imagine, for example, a “Top 10” list that is based on the total overview of our articles and displays the 10 most popular of them. The information shown per article is the same. Computing this list therefore merely extends the computation of the total list, sorting it by certain criteria and so on.
Yet if the price of a product that is not in the “Top 10” changes, normal event-based caching recomputes not only the total list but also the “Top 10” list, since the latter indirectly depends on all products (it ultimately uses the computation of the total list).
An intelligent incremental cache, as provided in onion.net, can detect and manage this dependency between the two computations: it notices that the part of the total list the “Top 10” actually uses has not changed, and the “Top 10” is therefore not recomputed.
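The incremental idea can be sketched as follows: the derived “Top 10” records a snapshot of the input slice it actually consumed, and after the total list is recomputed it is itself recomputed only if that slice differs. This is a deliberately simplified, hypothetical model, not onion.net's implementation:

```python
class IncrementalCache:
    """Sketch of an incremental, graph-based cache: a derived value is
    recomputed only when the part of its input it actually uses changed."""

    def __init__(self, products):
        # products: article_id -> {"name": ..., "price": ..., "popularity": ...}
        self.products = products
        self._total = None            # cached total list
        self._top10 = None            # cached Top-10 list
        self._top10_input = None      # snapshot of the slice the Top-10 consumed
        self.top10_recomputed = 0     # instrumentation for illustration

    def total_list(self):
        """Recompute the full popularity-sorted list only when invalidated."""
        if self._total is None:
            self._total = sorted(self.products.values(),
                                 key=lambda p: p["popularity"], reverse=True)
        return self._total

    def top10(self):
        """Recompute the Top 10 only if its input slice actually changed."""
        current = tuple((p["name"], p["price"]) for p in self.total_list()[:10])
        if current != self._top10_input:
            self.top10_recomputed += 1
            self._top10 = list(current)
            self._top10_input = current
        return self._top10

    def on_price_change(self, article_id, price):
        """Change event: the total list is invalidated and rebuilt on demand."""
        self.products[article_id]["price"] = price
        self._total = None
```

If the price of a product outside the Top 10 changes, the total list is rebuilt, but the snapshot comparison shows the Top-10 slice is unchanged, so the derived list is served from the cache.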
Such handling relieves the server and offers optimal scaling: the onion.net system delivers all information both up to date and quickly.