Memcached has proven to be an efficient system for storing and distributing cached data, making websites faster and more responsive.

Cached data

If you work with data that is used repeatedly and that you don’t want to recompute on every access, a cache is usually a good solution. Generally speaking, a cache is a component that acts as a temporary repository for data. It can be managed either by hardware (as in CPU caches) or by software (as in a disk cache). The rise of Web 2.0 popularized web caches, in which data is stored on web servers or in browsers for later reuse.
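The recompute-versus-reuse trade-off above can be illustrated with Python’s built-in memoization decorator, a software cache in miniature (the function name and workload here are purely illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive(n):
    # Stand-in for a slow computation or remote fetch.
    return sum(i * i for i in range(n))

expensive(10_000)  # computed once and stored
expensive(10_000)  # answered from the cache, no recomputation
```

The second call never runs the function body; the result comes straight from the in-memory cache, which is exactly the effect a web cache aims for at network scale.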

This data is packaged into chunks of information stored for later use, so that the amount of information travelling over the network is drastically reduced. Storing reusable, easily accessible data has the major advantage of making navigation faster: bandwidth consumption drops, and so does the work required of web browsers, which, instead of fetching the relevant data from across the internet, go straight to the stored copy. Responsiveness also improves noticeably, since the time the browser needs to complete the requested task is shorter.


Memcached is an elastic, in-memory caching system that works well on websites with large volumes of information. Its main purpose is to reduce database load, which is why large sites such as Facebook, Amazon and the like use it to manage their content better, and have even developed their own versions of memcached to suit their specific needs. Memcached was designed as a tool for networks, where servers accessing the same data needed to be brought together in some way.
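A minimal sketch of how memcached reduces database load, sometimes called the cache-aside pattern, with a plain dictionary standing in for a real memcached client (the key format and the `query_database` helper are illustrative assumptions):

```python
# Cache-aside: check the cache first, hit the database only on a
# miss, then populate the cache for subsequent requests.
cache = {}

def query_database(key):
    # Stand-in for an expensive database lookup.
    return f"row-for-{key}"

def get(key):
    value = cache.get(key)
    if value is None:                # cache miss
        value = query_database(key)  # go to the database once
        cache[key] = value           # store for next time
    return value

get("user:42")  # miss: reaches the database
get("user:42")  # hit: served from the cache
```

Every repeated request after the first is absorbed by the cache, which is how large sites keep most traffic away from their databases.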

Before memcached introduced distributed caching, data caches were accessed on a one-to-one basis: every machine had its own cache from which it drew the required data, so the same entry was stored in many completely separate locations. Memcached made it possible for an entire cluster of machines to share the same set of cached data. A lot of space is saved as well, since the machines not only draw on the same storage pool but also work together to enlarge the cached data available for sharing.
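This sharing works because client libraries hash each key to exactly one server, so every machine in the cluster stores and looks up a given entry in the same place. A simplified sketch of the idea (real memcached clients typically use consistent hashing, and the server names here are made up):

```python
import hashlib

servers = ["cache1:11211", "cache2:11211", "cache3:11211"]

def server_for(key):
    # Hash the key and map it onto one of the cache servers,
    # so all clients agree on where a given entry lives.
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Every client computes the same mapping for the same key.
assert server_for("user:42") == server_for("user:42")
```

Because the mapping is deterministic, no machine ever needs a private copy of an entry that another machine has already cached.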

Memcached on Drupal

Like all major handlers of online data, Drupal also needed memcached to manage the high volume of information it stores and serves. And like all major content management systems, Drupal benefits from breaking data loads apart and creating cache tables from which information is easier to retrieve.


Moreover, this method draws a clear and clean separation between content and presentation, which is precisely what data managers want in order to structure their work better. In Drupal, default caching is done through MySQL, which is an interesting choice given that caching is meant to avoid hitting the database, and that is exactly where MySQL keeps the cache tables. However, MySQL is fast, and this is probably the main reason behind Drupal’s decision to use it.

Another aspect to keep in mind is that memcached works well with small to medium-sized data stores. For sites with very large data sets, the database remains unavoidable, since the likelihood of encountering uncached data is higher; this is another reason to use MySQL as the default caching backend. Information about how to run memcached on Drupal is available on the Drupal website. For a more detailed discussion of the pros and cons of memcached in conjunction with Drupal, you might want to take a look at this blog entry or maybe this one.