The website presented by a Fossil server has many hyperlinks. Even a modest project can have millions of pages in its tree, and many of those pages (for example diffs and annotations and ZIP archives of older check-ins) can be expensive to compute. If a spider or bot tries to walk a website implemented by Fossil, it can present a crippling bandwidth and CPU load.
The website presented by a Fossil server is intended to be used interactively by humans, not walked by spiders. This article describes the techniques used by Fossil to try to welcome human users while keeping out spiders.
Every Fossil web session has a "user". For random passers-by on the internet (and for spiders) that user is "nobody". The "anonymous" user is also available for humans who do not wish to identify themselves. The difference is that "anonymous" requires a login (using a password supplied via a CAPTCHA) whereas "nobody" does not require a login. The site administrator can also create logins with passwords for specific individuals.
Users without the Hyperlink capability do not see most Fossil-generated hyperlinks. This is a simple defense against spiders, since the "nobody" user category lacks this capability by default. Users must log in (perhaps as "anonymous") before they can see any of the hyperlinks. A spider that cannot log into your Fossil repository cannot walk its historical check-ins, compute diffs between versions, pull ZIP archives, and so forth by following links, because the links simply are not there.
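The Hyperlink capability can be inspected and assigned from the Fossil web UI (the Admin/Users page) or from the command line with the "fossil user capabilities" command. A brief sketch (the user "alice" is hypothetical, and note that supplying a capability string replaces the user's existing string rather than appending to it):

    # Inspect the capability string of the built-in "nobody" user
    fossil user capabilities nobody

    # Set a user's capability string; "h" is the Hyperlink capability
    fossil user capabilities alice h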
When hyperlinks are suppressed in this way, a message at the top of each page invites humans to log in as "anonymous" in order to activate the hyperlinks.
Because this required login step is annoying to some users, Fossil provides other spider-blocking techniques that are less cumbersome for humans.
The UserAgent string is a text identifier, included in the header of most HTTP requests, that names the specific maker and version of the browser (or spider) that generated the request. Typical UserAgent strings look like the following (representative examples; exact strings vary by version and platform):
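    Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0
    Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)
    Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
    Wget/1.13.4 (openbsd5.2)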
The first two UserAgent strings above identify Firefox 19 and Internet Explorer 8.0, both running on Windows NT. The third example is the spider used by Google to index the internet. The fourth example is the "wget" utility running on OpenBSD. Thus the first two UserAgent strings identify the requester as human, whereas the last two identify the requester as a spider. Note that the UserAgent string is completely under the control of the requester, so a malicious spider can forge a UserAgent string that makes it look like a human. But most spiders genuinely want to "play nicely" on the internet and are quite open about the fact that they are spiders. The UserAgent string therefore provides a good first guess about whether a request originates from a human or a spider.
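As a minimal sketch of that first guess, consider the following TypeScript, which flags self-declared robots by keyword and accepts anything that announces itself as a mainstream browser. The function name, keyword list, and browser test are illustrative assumptions, not Fossil's actual implementation (Fossil itself is written in C):

    // Hypothetical first-guess classifier based only on the UserAgent string.
    function isLikelyHuman(userAgent: string): boolean {
      const ua = userAgent.toLowerCase();
      // Well-behaved spiders openly identify themselves with words like these.
      const robotMarkers = ["bot", "spider", "crawl", "wget", "curl"];
      if (robotMarkers.some((marker) => ua.includes(marker))) {
        return false; // self-declared spider
      }
      // Mainstream browsers all begin their UserAgent with "Mozilla/".
      return ua.startsWith("mozilla/");
    }

    // Googlebot also begins with "Mozilla/5.0" but is caught by the "bot" test.
    isLikelyHuman("Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"); // false
    isLikelyHuman("Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0");        // true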
The first sub-setting waits to enable hyperlinks until the browser reports mouse movement, on the theory that a spider never moves a mouse. The second sub-setting is a delay (in milliseconds) before setting the "href=" attributes on anchor tags; the default delay is 10 milliseconds. The idea here is that a spider will try to interpret the page immediately and will not wait for delayed scripts to run, and thus will never enable the hyperlinks.
These two sub-settings can be used separately or together. If used together, the delay timer does not start until after the first mouse movement is detected, as the sketch below illustrates.
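A minimal browser-side sketch of the technique, assuming anchors are served without "href=" and carry their real targets in a "data-href" attribute (that attribute name and markup convention are assumptions for illustration, not Fossil's actual output):

    // Restore hyperlinks only for clients that run JavaScript, wait out the
    // delay, and (when required) move the mouse -- behaviors spiders skip.
    const DELAY_MS = 10;         // the delay sub-setting (default 10 ms)
    const REQUIRE_MOUSE = true;  // the mouse-movement sub-setting

    function enableHyperlinks(): void {
      document.querySelectorAll<HTMLAnchorElement>("a[data-href]").forEach((a) => {
        a.href = a.dataset.href!; // copy the real target into href=
      });
    }

    if (REQUIRE_MOUSE) {
      // Used together: the delay timer starts at the first mouse movement.
      document.addEventListener(
        "mousemove",
        () => setTimeout(enableHyperlinks, DELAY_MS),
        { once: true },
      );
    } else {
      setTimeout(enableHyperlinks, DELAY_MS);
    }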
See also Managing Server Load for a description of how expensive pages can be disabled when the server is under heavy load.
Fossil currently does a very good job of providing easy access to humans while keeping out troublesome robots and spiders. However, spiders and bots continue to grow more sophisticated, requiring ever more advanced defenses. This "arms race" is unlikely ever to end. The developers of Fossil will continue to try to improve its spider defenses, so check back from time to time for the latest releases and updates.
Readers of this page who have suggestions on how to improve the spider defenses in Fossil are invited to submit their ideas to the Fossil Users forum: https://fossil-scm.org/forum.