systemd is the service management framework in all major
in-support versions of Linux. There are multiple ways to run Fossil
under it. Two of the methods for running containerized Fossil integrate
with systemd, potentially obviating the more direct methods below:
If you take the Podman method of running containerized Fossil, it opens the
podman create option for you, exemplified in the
fslsrv script used on this author’s public Fossil-based web site. That script pulls custom containers from the author’s Docker Hub repo. With these elements combined, this scheme allows you to build from source on one machine, then deploy to a server, running Fossil containerized without needing build tools on the server.
If you’re willing to give up a lot of features relative to Podman, and you’re willing to tolerate a lot more manual administrivia, the nspawn method has a lot less overhead, being a direct feature of systemd itself.
Both of these options provide better security than running
Fossil directly under
systemd, among other benefits.
A fun thing you can easily do with
systemd that you can’t directly do
with older technologies like
xinetd is to set a server up
as a “user” service.
You can’t listen on TCP port 80 with this method due to security
restrictions on TCP ports in every OS where
systemd runs, but you can
create a listener socket on a high-numbered (≥ 1024) TCP port,
suitable for sharing a Fossil repo to a workgroup on a private LAN.
To do this, write the following in
~/.config/systemd/user/fossil.service, the conventional location for
user-level systemd units:

    [Unit]
    Description=Fossil user server
    After=network-online.target

    [Service]
    WorkingDirectory=/home/fossil/museum
    ExecStart=/home/fossil/bin/fossil server --port 9000 repo.fossil
    Restart=always
    RestartSec=3

    [Install]
    WantedBy=default.target

(Note that user-level units are wanted by default.target; multi-user.target exists only in the system-level service manager.)
Unlike with xinetd, we don’t need to tell systemd which
user and group to run this service as, because we’ve installed it
under the account we’re logged into, which
systemd will use as the default.
The result is essentially the standalone server method
coupled with an intelligent service manager that will start it
automatically in the background on system boot, perform automatic
service restarts with back-off logic, and more, making this much more
robust than the by-hand launches of
fossil in the platform-independent
Fossil server instructions. The service will stay up until we
explicitly tell it to shut down.
This scheme couples well with the generic SCGI instructions, since those instructions require some way to keep the underlying repository server running in the background. Because its service port is then proxied over SCGI, the server doesn’t need to run as a system service; a user service works perfectly well.
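To pair this user service with that SCGI setup, the only change from the unit file above is the ExecStart line, switching Fossil into its SCGI mode and binding it to localhost so only the local proxy can reach it. A sketch, assuming the same paths and port as above:

```ini
[Service]
WorkingDirectory=/home/fossil/museum
# --scgi serves the SCGI protocol instead of HTTP; --localhost keeps the
# port off the external network, since the proxy provides public access
ExecStart=/home/fossil/bin/fossil server --scgi --localhost --port 9000 repo.fossil
Restart=always
RestartSec=3
```

The front-end proxy would then be pointed at localhost:9000 per the generic SCGI instructions.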
Because we’ve set this up as a user service, the commands you give to manipulate the service vary somewhat from the sort you’re more likely to find online:
    $ systemctl --user daemon-reload
    $ systemctl --user enable fossil
    $ systemctl --user start fossil
    $ systemctl --user status fossil -l
    $ systemctl --user stop fossil
That is, we don’t need to talk to systemd with
sudo privileges, but
we do need to tell it to look at the user configuration rather than the
system-level one.
This scheme isolates the permissions needed by the Fossil server, which reduces the amount of damage it can do if a remotely-triggerable security flaw is ever found in Fossil.
On some systemd-based OSes, user services only run while that user is
logged in interactively. This is common on systems aiming to provide
desktop environments, where this is the behavior you often want. To
allow background services to continue to run after logout, say:
$ sudo loginctl enable-linger $USER
You can paste the command just like that into your terminal, since
$USER will expand to your login name.
System Service Alternative
There are some common reasons you might have good cause to install Fossil as a system-level service rather than as the prior user-level one:
You’re using the new
fossil server --cert feature to get TLS service and want it to listen directly on port 443, rather than be proxied, as one had to do before Fossil gained the ability to act as a TLS server itself. Binding to that port requires root privileges, so you can’t do it from a user-level service.
You’re proxying Fossil with nginx or similar, which allows Fossil to bind to a high-numbered port, but because the proxy starts as a system service, you can’t get Fossil into the same dependency chain to ensure things start up and shut down in the proper order unless it also runs as a system service.
You want to make use of Fossil’s chroot jail feature, which requires the server to start as root.
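For the first of those cases, the change to the earlier unit’s [Service] section is confined to the ExecStart line. A sketch, using the fossil server --cert option named above; the certificate path is a placeholder assumption, not a convention:

```ini
[Service]
WorkingDirectory=/home/fossil/museum
# --cert points at a PEM file; the path here is purely illustrative.
# Listening on 443 is why this must start with root privileges.
ExecStart=/home/fossil/bin/fossil server --port 443 --cert /home/fossil/tls/cert.pem repo.fossil
Restart=always
RestartSec=3
```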
There are just a small set of changes required:

Install the unit file to one of the persistent system-level unit file directories, typically /etc/systemd/system.

Add User and Group directives to the
[Service] section so Fossil runs as a normal user, preferably one with access only to the Fossil repo files, rather than running as root.
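Put together, the system-level unit might look like this. It is a sketch assuming a dedicated “fossil” user and group and the same paths as the user-level example; adjust to your installation:

```ini
[Unit]
Description=Fossil server
After=network-online.target

[Service]
# Run as an unprivileged account that owns only the repo files,
# rather than as root.  Both names here are assumptions.
User=fossil
Group=fossil
WorkingDirectory=/home/fossil/museum
ExecStart=/home/fossil/bin/fossil server --port 9000 repo.fossil
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
```

At the system level, multi-user.target is the correct install target, and the systemctl commands are given without --user and with sudo.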
Another useful method to serve a Fossil repo via
systemd is via a
socket listener, which
systemd calls “socket activation,”
roughly equivalent to the ancient inetd mechanism.
It’s more complicated, but it has some nice properties.
We first need to define the privileged socket listener by writing
/etc/systemd/system/fossil.socket:

    [Unit]
    Description=Fossil socket

    [Socket]
    Accept=yes
    ListenStream=80
    NoDelay=true

    [Install]
    WantedBy=sockets.target
Note the change of configuration directory from the user-level one
to the system level. We need to start this socket listener at the root
level because of the low-numbered TCP port restriction we brought up
above.
This configuration says more or less the same thing as the socket part
of the inetd entry exemplified elsewhere in this
documentation.
Next, create the service definition file in that same directory as
fossil@.service:

    [Unit]
    Description=Fossil socket server
    After=network-online.target

    [Service]
    WorkingDirectory=/home/fossil/museum
    ExecStart=/home/fossil/bin/fossil http repo.fossil
    StandardInput=socket

    [Install]
    WantedBy=multi-user.target
We’ll explain the “
@” in the file name below.
Notice that we haven’t told
systemd which user and group to run Fossil
under. Since this is a system-level service definition, it
will run as root, which then causes Fossil to automatically drop into a
chroot(2) jail rooted at the working directory
we’ve configured above, shortly after each
fossil http call starts.
The Restart* directives we had in the user service configuration above
are unnecessary for this method, since Fossil isn’t supposed to remain
running under it. Each HTTP hit starts one Fossil instance, which
handles that single client’s request and then immediately shuts down.
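Because each hit gets its own service instance, systemd’s per-unit resource controls apply per connection here. As an optional refinement, directives like these could be added to the [Service] section of fossil@.service; the specific limits are illustrative assumptions, not recommendations:

```ini
[Service]
# Bound each per-connection Fossil instance.  A runaway or abusive
# request then can't starve the rest of the system.
MemoryMax=256M
CPUQuota=50%
```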
Next, you need to tell
systemd to reload its system-level
configuration files and enable the listening socket:
    $ sudo systemctl daemon-reload
    $ sudo systemctl enable fossil.socket
And now you can manipulate the socket listener:
    $ sudo systemctl start fossil.socket
    $ sudo systemctl status -l fossil.socket
    $ sudo systemctl stop fossil.socket
Notice that we’re working with the socket, not the service. The fact
that we’ve given them the same base name and marked the service as an
instantiated service with the “
@” notation allows systemd to
automatically start an instance of the service each time a hit comes in
on the socket that
systemd is monitoring on Fossil’s behalf. To see
this service instantiation at work, visit a long-running Fossil page
(e.g. /tarball) and then give a command like this:
$ sudo systemctl --full | grep fossil
This will show information about the
fossil socket and service
instances, which should include your
/tarball hit handler, if it’s still
running.
You can feed that service instance description to a
sudo systemctl stop
command to stop that single instance without restarting the whole
fossil service, for example.
In all of this, realize that we’re able to manipulate a single socket
listener or a single service instance at a time, rather than reload the
whole externally-facing network configuration as with the far more
monolithic inetd and xinetd alternatives.