What's a Web Cache? Why do people use them? A Web cache sits between one or more Web servers (also known as origin servers) and a client or many clients, and watches requests come by, saving copies of the responses - like HTML pages, images and files (collectively known as representations) - for itself. Then, if there is another request for the same URL, it can use the response that it has, instead of asking the origin server for it again.
If you examine the preferences dialog of any modern Web browser (like Internet Explorer, Safari or Mozilla), you'll probably notice a 'cache' setting. This lets you set aside a section of your computer's hard disk to store representations that you've seen, just for you. The browser cache works according to fairly simple rules. It will check to make sure that the representations are fresh, usually once a session (that is, once in the current invocation of the browser).
This cache is especially useful when users hit the 'back' button or click a link to see a page they've just looked at. Also, if you use the same navigation images throughout your site, they'll be served from browsers' caches almost instantaneously.
Web proxy caches work on the same principle, but on a much larger scale. Proxies serve hundreds or thousands of users in the same way; large corporations and ISPs often set them up on their firewalls, or as standalone devices (also known as intermediaries).
Because proxy caches aren't part of the client or the origin server, but instead are out on the network, requests have to be routed to them somehow. One way to do this is to use your browser's proxy setting to manually tell it what proxy to use; another is using interception. Interception proxies have Web requests redirected to them by the underlying network itself, so that clients don't need to be configured for them, or even know about them.
Proxy caches are a type of shared cache; rather than just having one person using them, they usually have a large number of users, and because of this they are very good at reducing latency and network traffic. That's because popular representations are reused a number of times.
Also known as 'reverse proxy caches' or 'surrogate caches,' gateway caches are also intermediaries, but instead of being deployed by network administrators to save bandwidth, they're typically deployed by Webmasters themselves, to make their sites more scalable, reliable and better performing.
Requests can be routed to gateway caches by a number of methods, but typically some form of load balancer is used to make one or more of them look like the origin server to clients.
Content delivery networks (CDNs) distribute gateway caches throughout the Internet (or a part of it) and sell caching to interested Web sites. Speedera and Akamai are examples of CDNs.
This tutorial focuses mostly on browser and proxy caches, although some of the information is suitable for those interested in gateway caches as well.
Web caching is one of the most misunderstood technologies on the Internet. Webmasters in particular fear losing control of their site, because a proxy cache can 'hide' their users from them, making it difficult to see who's using the site.
Unfortunately for them, even if Web caches didn't exist, there are too many variables on the Internet to assure that they'll be able to get an accurate picture of how users see their site. If this is a big concern for you, this tutorial will teach you how to get the statistics you need without making your site cache-unfriendly.
Another concern is that caches can serve content that is out of date, or stale. However, this tutorial can show you how to configure your server to control how your content is cached.
CDNs are an interesting development, because unlike many proxy caches, their gateway caches are aligned with the interests of the Web site being cached, so that these problems aren't seen. However, even when you use a CDN, you still have to consider that there will be proxy and browser caches downstream.
On the other hand, if you plan your site well, caches can help your Web site load faster, and save load on your server and Internet link. The difference can be dramatic; a site that is difficult to cache may take several seconds to load, while one that takes advantage of caching can seem instantaneous in comparison. Users will appreciate a fast-loading site, and will visit more often.
Think of it this way; many large Internet companies are spending millions of dollars setting up farms of servers around the world to replicate their content, in order to make it as fast to access as possible for their users. Caches do the same for you, and they're even closer to the end user. Best of all, you don't have to pay for them.
The fact is that proxy and browser caches will be used whether you like it or not. If you don't configure your site to be cached correctly, it will be cached using whatever defaults the cache's administrator decides upon.
All caches have a set of rules that they use to determine when to serve a representation from the cache, if it's available. Some of these rules are set in the protocols (HTTP 1.0 and 1.1), and some are set by the administrator of the cache (either the user of the browser cache, or the proxy administrator).
Generally speaking, these are the most common rules that are followed (don't worry if you don't understand the details, they will be explained below):

1. If the response's headers tell the cache not to keep it, it won't.
2. If the request is authenticated or secure (i.e., HTTPS), it won't be cached by shared caches.
3. A cached representation is considered fresh (that is, able to be sent to a client without checking with the origin server) if it has an expiry time or other age-controlling header set and is still within the fresh period, or if the cache has seen the representation recently and it was modified relatively long ago. Fresh representations are served directly from the cache, without checking with the origin server.
4. If a representation is stale, the origin server will be asked to validate it, or tell the cache whether the copy that it has is still good.
5. Under certain circumstances (for example, when it's disconnected from a network), a cache can serve stale responses without checking with the origin server.

If no validator (an ETag or Last-Modified header) is present on a response, and it doesn't have any explicit freshness information, it will usually be considered uncacheable.

Together, freshness and validation are the most important ways that a cache works with content. A fresh representation will be available instantly from the cache, while a validated representation will avoid sending the entire representation over again if it hasn't changed.
There are several tools that Web designers and Webmasters can use to fine-tune how caches will treat their sites. It may require getting your hands a little dirty with your server's configuration, but the results are worth it. For details on how to use these tools with your server, see the Implementation sections below.
HTML authors can put tags in a document's HEAD section that describe its attributes. These meta tags are often used in the belief that they can mark a document as uncacheable, or expire it at a certain time.

Meta tags are easy to use, but aren't very effective. That's because they're only honored by a few browser caches (which actually read the HTML), not proxy caches (which almost never read the HTML in the document). While it may be tempting to put a Pragma: no-cache meta tag into a Web page, it won't necessarily keep the page from being cached.
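For illustration, the kind of meta tag that authors often try looks like this; as described above, proxy caches won't act on it, and most browser caches ignore it too:

<meta http-equiv="Pragma" content="no-cache">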
If your site is hosted at an ISP or hosting farm and they don't give you the ability to set arbitrary HTTP headers (like Expires and Cache-Control), complain loudly; these are tools necessary for doing your job.
On the other hand, true HTTP headers give you a lot of control over how both browser caches and proxies handle your representations. They can't be seen in the HTML, and are usually automatically generated by the Web server. However, you can control them to some degree, depending on the server you use. In the following sections, you'll see what HTTP headers are interesting, and how to apply them to your site.
HTTP headers are sent by the server before the HTML, and only seen by the browser and any intermediate caches. Typical HTTP 1.1 response headers might look like this:
HTTP/1.1 200 OK
Date: Fri, 30 Oct 1998 13:19:41 GMT
Server: Apache/1.3.3 (Unix)
Cache-Control: max-age=3600, must-revalidate
Expires: Fri, 30 Oct 1998 14:19:41 GMT
Last-Modified: Mon, 29 Jun 1998 02:28:12 GMT
ETag: "3e86-410-3596fbbc"
Content-Length: 1040
Content-Type: text/html
The HTML would follow these headers, separated by a blank line. See the Implementation sections for information about how to set HTTP headers.
Many people believe that assigning a Pragma: no-cache HTTP header to a representation will make it uncacheable. This is not necessarily true; the HTTP specification does not set any guidelines for Pragma response headers, only for Pragma request headers (the headers that a browser sends to a server). Although a few caches may honor this header, the majority won't, so it will usually have no effect. Use the headers below instead.
The Expires HTTP header is a basic means of controlling caches; it tells all caches how long the associated representation is fresh for. After that time, caches will always check back with the origin server to see if a document is changed. Expires headers are supported by practically every cache.
Most Web servers allow you to set Expires response headers in a number of ways. Commonly, they will allow setting an absolute time to expire, a time based on the last time that the client saw the representation (last access time), or a time based on the last time the document changed on your server (last modification time).
Expires headers are especially good for making static images (like navigation bars and buttons) cacheable. Because they don't change much, you can set an extremely long expiry time on them, making your site appear much more responsive to your users. They're also useful for controlling caching of a page that is regularly changed. For instance, if you update a news page once a day at 6am, you can set the representation to expire at that time, so caches will know when to get a fresh copy, without users having to hit 'reload'.
The only value valid in an Expires header is an HTTP date; anything else will most likely be interpreted as 'in the past', so that the representation is uncacheable. Also, remember that the time in an HTTP date is Greenwich Mean Time (GMT), not local time.
For example:
Expires: Fri, 30 Oct 1998 14:19:41 GMT
It's important to make sure that your Web server's clock is accurate if you use the Expires header. One way to do this is using the Network Time Protocol (NTP); talk to your local system administrator to find out more.
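As a rough illustration (assuming a Unix-like server with the ntpdate utility installed and the public pool.ntp.org servers reachable), a one-off clock correction might look like this; for ongoing synchronisation you would normally run an NTP daemon instead:

ntpdate pool.ntp.org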
Although the Expires header is useful, it has some limitations. First, because there's a date involved, the clocks on the Web server and the cache must be synchronised; if they have a different idea of the time, the intended results won't be achieved, and caches might wrongly consider stale content as fresh.
Another problem with Expires is that it's easy to forget that you've set some content to expire at a particular time. If you don't update an Expires time before it passes, each and every request will go back to your Web server, increasing load and latency.
HTTP 1.1 introduced a new class of headers, Cache-Control response headers, to give Web publishers more control over their content, and to address the limitations of Expires.

Useful Cache-Control response headers include:
max-age=[seconds] - specifies the maximum amount of time that a representation will be considered fresh. Like Expires, this controls freshness, but the directive is relative to the time of the request rather than an absolute date; [seconds] is the number of seconds from the time of the request you wish the representation to be fresh for.

s-maxage=[seconds] - similar to max-age, except that it only applies to shared (e.g., proxy) caches.

public - marks authenticated responses as cacheable; normally, if HTTP authentication is required, responses are automatically uncacheable.

no-cache - forces caches to submit the request to the origin server for validation before releasing a cached copy, every time. This is useful to assure that authentication is respected (in combination with public), or to maintain rigid freshness, without sacrificing all of the benefits of caching.

no-store - instructs caches not to keep a copy of the representation under any conditions.

must-revalidate - tells caches that they must obey any freshness information you give them about a representation. HTTP allows caches to serve stale representations under special conditions; by specifying this header, you're telling the cache that you want it to strictly follow your rules.

proxy-revalidate - similar to must-revalidate, except that it only applies to proxy caches.

For example:
Cache-Control: max-age=3600, must-revalidate
If you plan to use the Cache-Control headers, you should have a look at the excellent documentation in the HTTP 1.1 specification; see the references below.
In How Web Caches Work, we said that validation is used by servers and caches to communicate when a representation has changed. By using it, caches avoid having to download the entire representation when they already have a copy locally, but they're not sure if it's still fresh.
Validators are very important; if one isn't present, and there isn't any freshness information (Expires or Cache-Control) available, caches will not store a representation at all.
The most common validator is the time that the document last changed, as communicated in the Last-Modified header. When a cache has a representation stored that includes a Last-Modified header, it can use it to ask the server if the representation has changed since the last time it was seen, with an If-Modified-Since request.
HTTP 1.1 introduced a new kind of validator called the ETag. ETags are unique identifiers that are generated by the server and changed every time the representation does. Because the server controls how the ETag is generated, caches can be surer that if the ETag matches when they make an If-None-Match request, the representation really is the same.
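To illustrate (the header values here are made up), a typical validation exchange looks like this; the cache sends a conditional request, and the server answers with a short 304 response if nothing has changed:

GET /foo.html HTTP/1.1
Host: www.example.com
If-Modified-Since: Mon, 29 Jun 1998 02:28:12 GMT
If-None-Match: "3e86-410-3596fbbc"

HTTP/1.1 304 Not Modified
Date: Fri, 30 Oct 1998 13:19:41 GMT
ETag: "3e86-410-3596fbbc"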
Almost all caches use Last-Modified times in determining if a representation is fresh; ETag validation is also becoming prevalent.
Most modern Web servers will generate both ETag and Last-Modified headers to use as validators for static content (i.e., files) automatically; you won't have to do anything. However, they don't know enough about dynamic content (like CGI, ASP or database sites) to generate them; see Writing Cache-Aware Scripts.
Besides using freshness information and validation, there are a number of other things you can do to make your site more cache-friendly.
Make caches store images and pages that don't change often by using a Cache-Control: max-age header with a large value.

Don't change files unnecessarily, so that their Last-Modified dates stay meaningful. For instance, when updating your site, don't copy over the entire site; just move the files that you've changed.

By default, most scripts won't return a validator (a Last-Modified or ETag response header) or freshness information (Expires or Cache-Control). While some scripts really are dynamic (meaning that they return a different response for every request), many (like search engines and database-driven sites) can benefit from being cache-friendly.
Generally speaking, if a script produces output that is reproducible with the same request at a later time (whether it be minutes or days later), it should be cacheable. If the content of the script changes only depending on what's in the URL, it is cacheable; if the output depends on a cookie, authentication information or other external criteria, it probably isn't.
The best way to make a script cache-friendly (as well as perform better) is to dump its content to a plain file whenever it changes; the Web server can then treat it like any other Web page, generating and using validators for you. Remember to only write files that have changed, so the Last-Modified times are preserved.

Another way to make a script cacheable in a limited fashion is to set an age-related header for as far in the future as practical. Although this can be done with Expires, it's probably easiest to do so with Cache-Control: max-age, which will make the response fresh for an amount of time after the request.

If you can't do that, you'll need to make the script generate a validator, and then respond to If-Modified-Since and/or If-None-Match requests. This can be done by parsing the HTTP headers, and then responding with 304 Not Modified when appropriate. Unfortunately, this is not a trivial task.

Some other tips:
Generate Content-Length response headers. It's easy to do, and it will allow the response of your script to be used in a persistent connection. This allows clients to request multiple representations on one TCP/IP connection, instead of setting up a connection for every request. It makes your site seem much faster.

See the Implementation Notes for more specific information.
A good strategy is to identify the most popular, largest representations (especially images) and work with them first.
The most cacheable representation is one with a long freshness time set. Validation does help reduce the time that it takes to see a representation, but the cache still has to contact the origin server to see if it's fresh. If the cache already knows it's fresh, it will be served directly.
If you must know every time a page is accessed, select ONE small item on a page (or the page itself), and make it uncacheable, by giving it suitable headers. For example, you could refer to a 1x1 transparent uncacheable image from each page. The Referer header will contain information about what page called it.
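For instance, one common way to make such an image uncacheable is to serve it with headers along these lines:

Cache-Control: no-store, no-cache, must-revalidate
Expires: Thu, 01 Jan 1970 00:00:00 GMT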
Be aware that even this will not give truly accurate statistics about your users, and is unfriendly to the Internet and your users; it generates unnecessary traffic, and forces people to wait for that uncached item to be downloaded. For more information about this, see On Interpreting Access Statistics in the references.
Many Web browsers let you see the Expires and Last-Modified headers in a 'page info' or similar interface. If available, this will give you a menu of the page and any representations (like images) associated with it, along with their details.
To see the full headers of a representation, you can manually connect to the Web server using a Telnet client.
To do so, you may need to type the port (by default, 80) into a separate field, or you may need to connect to www.google.com:80 or www.google.com 80 (note the space). Consult your Telnet client's documentation.
Once you've opened a connection to the site, type a request for the representation. For instance, if you want to see the headers for http://www.google.com/foo.html, connect to www.google.com, port 80, and type:
GET /foo.html HTTP/1.1 [return]
Host: www.google.com [return][return]
Press the Return key every time you see [return]
; make sure to press it twice at the end. This will print the headers, and then the full representation. To see the headers only, substitute HEAD for GET.
By default, pages protected with HTTP authentication are considered private; they will not be kept by shared caches. However, you can make authenticated pages public with a Cache-Control: public header; HTTP 1.1-compliant caches will then allow them to be cached.
If you'd like such pages to be cacheable, but still authenticated for every user, combine the Cache-Control: public and no-cache headers. This tells the cache that it must submit the new client's authentication information to the origin server before releasing the representation from the cache. This would look like:
Cache-Control: public, no-cache
Whether or not this is done, it's best to minimize use of authentication; for example, if your images are not sensitive, put them in a separate directory and configure your server not to force authentication for it. That way, those images will be naturally cacheable.
SSL pages are not cached (or decrypted) by proxy caches, so you don't have to worry about that. However, because caches store non-SSL requests and URLs fetched through them, you should be conscious of unsecured sites; an unscrupulous administrator could conceivably gather information about their users, especially in the URL.
In fact, any administrator on the network between your server and your clients could gather this type of information. One particular problem is when CGI scripts put usernames and passwords in the URL itself; this makes it trivial for others to find and use their login.
If you're aware of the issues surrounding Web security in general, you shouldn't have any surprises from proxy caches.
It varies. Generally speaking, the more complex a solution is, the more difficult it is to cache. The worst are ones which dynamically generate all content and don't provide validators; they may not be cacheable at all. Speak with your vendor's technical staff for more information, and see the Implementation notes below.
The Expires header can't be circumvented; unless the cache (either browser or proxy) runs out of room and has to delete the representations, the cached copy will be used until it expires.
The most effective solution is to change any links to them; that way, completely new representations will be loaded fresh from the origin server. Remember that the page that refers to a representation will be cached as well. Because of this, it's best to make static images and similar representations very cacheable, while keeping the HTML pages that refer to them on a tight leash.
If you want to reload a representation from a specific cache, you can either force a reload while using that cache (in Firefox, holding down shift while pressing 'reload' will do this by issuing a Pragma: no-cache request header), or have the cache administrator delete the representation through their interface.
If you're using Apache, consider allowing them to use .htaccess files and providing appropriate documentation.
Otherwise, you can establish predetermined areas for various caching attributes in each virtual server. For instance, you could specify a directory /cache-1m that will be cached for one month after access, and a /no-cache area that will be served with headers instructing caches not to store representations from it.
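A minimal sketch of how such areas might be configured in Apache (assuming the mod_expires and mod_headers modules discussed below are available; the paths are just the hypothetical ones mentioned above):

### representations under /cache-1m are fresh for a month after access
<Location /cache-1m>
ExpiresActive On
ExpiresDefault "access plus 1 month"
</Location>
### representations under /no-cache should not be stored at all
<Location /no-cache>
Header set Cache-Control "no-store, no-cache, must-revalidate"
</Location>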
Whatever you are able to do, it is best to work with your largest customers first on caching. Most of the savings (in bandwidth and in load on your servers) will be realized from high-volume sites.
Caches aren't required to keep a representation and reuse it; they're only required to not keep or use them under some conditions. All caches make decisions about which representations to keep based upon their size, type (e.g., image vs. html), or by how much space they have left to keep local copies. Yours may not be considered worth keeping around, compared to more popular or larger representations.
Some caches do allow their administrators to prioritize what kinds of representations are kept, and some allow representations to be 'pinned' in cache, so that they're always available.
Generally speaking, it's best to use the latest version of whatever Web server you've chosen to deploy. Not only will they likely contain more cache-friendly features, new versions also usually have important security and performance improvements.
Apache uses optional modules to include headers, including both Expires and Cache-Control. Both modules are available in the 1.2 or greater distribution.
The modules need to be built into Apache; although they are included in the distribution, they are not turned on by default. To find out if the modules are enabled in your server, find the httpd binary and run httpd -l; this should print a list of the available modules. The modules we're looking for are mod_expires and mod_headers.
Once you have an Apache with the appropriate modules, you can use mod_expires to specify when representations should expire, either in .htaccess files or in the server's access.conf file. You can specify expiry from either access or modification time, and apply it to a file type or as a default. See the module documentation for more information, and speak with your local Apache guru if you have trouble.
To apply Cache-Control headers, you'll need to use the mod_headers module, which allows you to specify arbitrary HTTP headers for a resource. See the mod_headers documentation.
Here's an example .htaccess file that demonstrates the use of some headers.
### activate mod_expires
ExpiresActive On
### Expire .gif's 1 month from when they're accessed
ExpiresByType image/gif A2592000
### Expire everything else 1 day from when it's last modified
### (this uses the Alternative syntax)
ExpiresDefault "modification plus 1 day"
### Apply a Cache-Control header to index.html
<Files index.html>
Header append Cache-Control "public, must-revalidate"
</Files>
Note that mod_expires automatically calculates and inserts a Cache-Control: max-age header as appropriate.

Apache 2.0's configuration is very similar to that of 1.3; see the 2.0 mod_expires and mod_headers documentation for more information.
One thing to keep in mind is that it may be easier to set HTTP headers with your Web server rather than in the scripting language. Try both.
Because the emphasis in server-side scripting is on dynamic content, it doesn't make for very cacheable pages, even when the content could be cached. If your content changes often, but not on every page hit, consider setting a Cache-Control: max-age header; most users access pages again in a relatively short period of time. For instance, when users hit the 'back' button, if there isn't any validator or freshness information available, they'll have to wait until the page is re-downloaded from the server to see it.
CGI scripts are one of the most popular ways to generate content. You can easily append HTTP response headers by adding them before you send the body; most CGI implementations already require you to do this for the Content-Type header. For instance, in Perl;
#!/usr/bin/perl
print "Content-type: text/html\n";
print "Expires: Thu, 29 Oct 1998 17:04:19 GMT\n";
print "\n";
### the content body follows...
Since it's all text, you can easily generate Expires and other date-related headers with in-built functions. It's even easier if you use Cache-Control: max-age;
print "Cache-Control: max-age=600n";
This will make the script cacheable for 10 minutes after the request, so that if the user hits the 'back' button, they won't be resubmitting the request.
The CGI specification also makes request headers that the client sends available in the environment of the script; each header has 'HTTP_' prepended to its name. So, if a client makes an If-Modified-Since request, it may show up like this:
HTTP_IF_MODIFIED_SINCE = Fri, 30 Oct 1998 14:19:41 GMT
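As a rough sketch of how a script might use this (deliberately simplistic: it compares the date strings exactly instead of parsing them, and the last-modified time is hard-coded for illustration):

#!/usr/bin/perl
# Hypothetical example: answer validation requests with a 304 when possible.
use strict;

# When this script's output last changed (a real script would track this).
my $last_modified = "Fri, 30 Oct 1998 14:19:41 GMT";

my $ims = $ENV{'HTTP_IF_MODIFIED_SINCE'} || '';

if ($ims eq $last_modified) {
    # The client's copy is still good; send only headers, no body.
    print "Status: 304 Not Modified\n";
    print "Last-Modified: $last_modified\n\n";
} else {
    print "Content-type: text/html\n";
    print "Last-Modified: $last_modified\n\n";
    print "<html><body>Hello, world.</body></html>\n";
}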
See also the cgi_buffer library, which automatically handles ETag generation and validation, Content-Length generation and gzip Content-Encoding for Perl and Python CGI scripts with a one-line include. The Python version can also be used to wrap arbitrary CGI scripts.
PHP is a server-side scripting language that, when built into the server, can be used to embed scripts inside a page's HTML, much like SSI, but with a far larger number of options. PHP can be used as a CGI script on any Web server (Unix or Windows), or as an Apache module.
By default, representations processed by PHP are not assigned validators, and are therefore uncacheable. However, developers can set HTTP headers by using the Header() function.
For example, this will create a Cache-Control header, as well as an Expires header three days in the future:
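One way to write it, as a sketch that uses PHP's gmdate() to format the HTTP date:

<?php
 // Ask caches to check back with the origin server once the copy is stale.
 Header("Cache-Control: must-revalidate");

 // Build an Expires header for three days in the future.
 $offset = 60 * 60 * 24 * 3;
 $ExpStr = "Expires: " . gmdate("D, d M Y H:i:s", time() + $offset) . " GMT";
 Header($ExpStr);
?>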
Remember that the Header() function MUST come before any other output.
As you can see, you'll have to create the HTTP date for an Expires header by hand; PHP doesn't provide a function to do it for you (although recent versions have made it easier; see the PHP date documentation). Of course, it's easy to set a Cache-Control: max-age header, which is just as good for most situations.
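For instance, a half-hour lifetime (an arbitrary illustrative value) can be set like this:

<?php
 Header("Cache-Control: max-age=1800");
?>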
For more information, see the manual entry for header.
Cold Fusion makes setting arbitrary HTTP headers relatively easy, with the CFHEADER tag. Unfortunately, their example for setting an Expires header, as below, is a bit misleading.
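The example in question is of this form (an illustration of the pattern rather than an exact quote):

<CFHEADER NAME="Expires" VALUE="#Now()#">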
It doesn't work like you might think, because the time (in this case, when the request is made) doesn't get converted to an HTTP-valid date; instead, it just gets printed as a representation of Cold Fusion's Date/Time object. Most clients will either ignore such a value, or convert it to a default, like January 1, 1970.
However, Cold Fusion does provide a date formatting function that will do the job: GetHttpTimeString. In combination with DateAdd, it's easy to set Expires dates; here, we set a header to declare that representations of the page expire in one month:
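Such a tag might look like this (a sketch; DateAdd's 'm' datepart adds months):

<CFHEADER NAME="Expires" VALUE="#GetHttpTimeString(DateAdd('m', 1, Now()))#">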
You can also use the CFHEADER tag to set Cache-Control: max-age and other headers.
Remember that Web server headers are passed through in some deployments of Cold Fusion (such as CGI); check yours to determine whether you can use this to your advantage, by setting headers on the server instead of in Cold Fusion.
When setting HTTP headers from ASPs, make sure you either place the Response method calls before any HTML generation, or use Response.Buffer to buffer the output. Also, note that some versions of IIS set a Cache-Control: private header on ASPs by default; such responses must be declared public to be cacheable by shared caches.
Active Server Pages, built into IIS and also available for other Web servers, also allows you to set HTTP headers. For instance, to set an expiry time, you can use the properties of the Response object;
<% Response.Expires=1440 %>
specifying the number of minutes from the request to expire the representation. Likewise, absolute expiry time can be set like this (make sure you format HTTP date correctly):
<% Response.ExpiresAbsolute=#May 31,1996 13:30:15 GMT# %>
Cache-Control headers can be added like this:
<% Response.CacheControl="public" %>
In ASP.NET, Response.Expires is deprecated; the proper way to set cache-related headers is with Response.Cache:
Response.Cache.SetExpires(DateTime.Now.AddMinutes(60));
Response.Cache.SetCacheability(HttpCacheability.Public);
See the MSDN documentation for more information.
The HTTP 1.1 spec has many extensions for making pages cacheable, and is the authoritative guide to implementing the protocol. See sections 13, 14.9, 14.21, and 14.25.
One-line include in Perl CGI, Python CGI and PHP scripts automatically handles ETag generation and validation, Content-Length generation and gzip Content-Encoding - correctly. The Python version can also be used as a wrapper around arbitrary CGI scripts.
This document is Copyright © 1998-2006 Mark Nottingham mnot@pobox.com. This work is licensed under a Creative Commons License. If you do mirror this document, please send e-mail to the address above, so that you can be informed of updates. All trademarks within are property of their respective holders.
Although the author believes the contents to be accurate at the time of publication, no liability is assumed for them, their application or any consequences thereof. If any misrepresentations, errors or other need for clarification is found, please contact the author immediately.
The latest revision of this document can always be obtained from mnot.net