How To Configure a Linux Service to Start Automatically After a Crash or Reboot – Part 2: Reference

Introduction

In this second part of the tutorial about starting Linux services automatically, we’ll take a step back and explain init processes in more detail. You should gain a good understanding of how they control a daemon’s start-up behavior.

In the first part of this tutorial series we shared some practical examples using MySQL for how to enable a Linux service to auto-start after a crash or reboot.

We saw how to do this from three different init modes: System V, Upstart, and systemd. Read the first tutorial for a refresher on which distributions use which init system by default.

In this tutorial, we will take a step back and explain why we ran the commands and edited the config files that we did. We’ll start with the System V init daemon. We will also see why it was replaced over time with newer init modes.

Prerequisites

To follow this tutorial, you will need the three DigitalOcean Droplets that you created before.

We had:

  • A Debian 6 server running MySQL
  • An Ubuntu 14.04 server running MySQL
  • A CentOS 7 server running MySQL

We recommend you go back to Part 1 of this series and create the Droplets first.

Also, you will need to be the root user or have sudo privileges on the servers. To understand how sudo privileges work, see this DigitalOcean tutorial about sudo.

You should not run any commands, queries or configurations from this tutorial on a production Linux server.

Runlevels

A runlevel represents the current state of a Linux system.

The concept comes from System V init, where the Linux system boots, initializes the kernel, and then enters one (and only one) runlevel.

For example, a runlevel can be the shutdown state of a Linux server, a single-user mode, the restart mode, etc. Each mode will dictate what services can be running in that state.

Some services can run in one or more runlevels but not in others.

Runlevels are denoted by single digits and they can have a value between 0 and 6. The following list shows what each of these levels means:

  • Runlevel 0: System shutdown
  • Runlevel 1: Single-user, rescue mode
  • Runlevels 2, 3, 4: Multi-user, text mode with networking enabled
  • Runlevel 5: Multi-user, network enabled, graphical mode
  • Runlevel 6: System reboot

Runlevels 2, 3, and 4 vary by distribution. For example, some Linux distributions don’t implement runlevel 4, while others do. Some distributions have a clear distinction between these three levels. In general, runlevel 2, 3 or 4 means a state where Linux has booted in multi-user, network enabled, text mode.

When we enable a service to auto-start, we are actually adding it to a runlevel. In System V, the OS starts with a particular runlevel, and when it does, it tries to start all the services that are associated with that runlevel.
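You can check which runlevel a running system is currently in with the runlevel command (all three of our Droplets should have it, since Upstart and systemd ship a compatible version):

  • runlevel

The output shows the previous and current runlevels, for example N 2 on a system that booted straight into runlevel 2.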

Runlevels become targets in systemd, which we’ll discuss in the systemd section.

Init and PID 1

init is the first process that starts in a Linux system after the machine boots and the kernel loads into memory.

Among other things, it decides how a user process or a system service should load, in what order, and whether it should start automatically.

Every process in Linux has a process ID (PID) and init has a PID of 1. It’s the parent of all other processes that subsequently spawn as the system comes online.

History of Init

As Linux has evolved, so has the behavior of the init daemon. Originally, Linux started out with System V init, the same that was used in UNIX. Since then, Linux has implemented the Upstart init daemon (created by Ubuntu) and now the systemd init daemon (first implemented by Fedora).

Most Linux distributions have gradually migrated away from System V or are on their way to phasing it out, keeping it only for backward compatibility. FreeBSD, a variant of UNIX, uses a different implementation of System V, known as BSD init. Older versions of Debian use SysVinit too.

Each version of the init daemon has different ways of managing services. The reason behind these changes was the need for a robust service management tool that would handle not only services, but also devices, ports, and other resources; that would load resources in parallel; and that would gracefully recover from a crash.

System V Init Sequence

System V uses an inittab file, which later init methods like Upstart have kept for backwards compatibility.

Let’s run through System V’s startup sequence:

  1. The init daemon is created from the binary file /sbin/init
  2. The first file the init daemon reads is /etc/inittab
  3. One of the entries in this file decides the runlevel the machine should boot into. For example, if the value for the runlevel is specified as 3, Linux will boot in multi-user, text mode with networking enabled. (This runlevel is known as the default runlevel)
  4. Next, the init daemon looks further into the /etc/inittab file and reads what init scripts it needs to run for that runlevel

So when the init daemon finds what init scripts it needs to run for the given runlevel, it’s essentially finding out what services it needs to start up. These init scripts are where you can configure startup behavior for individual services, like we did for MySQL in the first tutorial.

Next, let’s look at init scripts in detail.

System V Configuration Files: Init Scripts

An init script is what controls a specific service, like MySQL Server, in System V.

Init scripts for services are either provided by the application’s vendor or come with the Linux distribution (for native services). We can also create our own init scripts for custom created services.

When a process or service such as MySQL Server starts, its binary program file has to load into memory.

Depending on how the service is configured, this program may have to keep executing in the background continuously (and accept client connections). The job of starting, stopping, or reloading this binary application is handled by the service’s init script. It’s called the init script because it initializes the service.

In System V, an init script is a shell script.

Init scripts are also called rc (run command) scripts.

Directory Structure

The /etc directory is the parent directory for init scripts.

The actual location for init shell scripts is under /etc/init.d. These scripts are symlinked to the rc directories.

Within the /etc directory, we have a number of rc directories, each with a number in its name.

The numbers represent different runlevels. So we have /etc/rc0.d, /etc/rc1.d, /etc/rc2.d and so on.

Then, within each rcn.d directory, we have files that start with either K or S in their file name, followed by two digits. These are symbolic link files that point back to the actual init shell scripts. Why the K and S? K means Kill (i.e. stop) and “S” stands for Start.

The two digits represent the order of execution of the scripts. So if we have a file named K25some_script, it will execute before K99another_script.

Startup

Let’s pick back up with our startup sequence. So how are the init scripts called? Who calls them?

The K and S scripts are not called directly by the init daemon, but by another script: the /etc/init.d/rc script.

If you remember, the /etc/inittab file tells the init daemon what runlevel the system should enter by default. For each runlevel, a line in the /etc/inittab file calls the /etc/init.d/rc script, passing on that runlevel as a parameter. Based on this parameter, the script then calls the files under the corresponding /etc/rcn.d directory. So, if the server boots with runlevel 2, scripts under the /etc/rc2.d will be called; for runlevel 3, scripts under /etc/rc3.d are executed, and so on.

Within an rc directory, first, all K scripts are run in numerical order with an argument of “stop”, and then all S scripts are run in similar fashion with an argument of “start.” Behind the scenes, the corresponding init shell scripts will be called with stop and start parameters respectively.

Now since the files under the /etc/rcn.d directories (Knn and Snn files) are symbolic links only, calling them means calling the actual init shell scripts with stop and start parameters.

To sum up, when the Linux server enters a runlevel, certain scripts will be run to stop some services while others will be run to start other services.

This calling of init scripts also happens whenever the system switches to a new runlevel: the corresponding /etc/rc<n>.d directory scripts are executed. And since those K and S files are nothing but links, the actual shell scripts under the /etc/init.d directory are executed with the appropriate start or stop argument.

The whole process ensures any service not supposed to run in that runlevel is stopped and all services supposed to run in that runlevel are started.
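Conceptually, what the rc script does for a given runlevel boils down to something like the following simplified sketch. This is not the actual /etc/init.d/rc source, which also handles dependencies and parallel boot, but it shows the idea:

#!/bin/sh
# Simplified sketch of how a runlevel's K and S scripts are invoked.
RUNLEVEL="$1"    # for example: 2

# First, run all K (kill) scripts in numeric order with the "stop" argument
for script in /etc/rc"$RUNLEVEL".d/K*; do
    [ -x "$script" ] && "$script" stop
done

# Then, run all S (start) scripts in numeric order with the "start" argument
for script in /etc/rc"$RUNLEVEL".d/S*; do
    [ -x "$script" ] && "$script" start
done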

System V Auto-Starting

As we enable a service to auto-start at boot time, we are actually modifying the init behavior.

So, for example, when we enable a service to auto-start at runlevel 3, behind the scenes the process creates the appropriate links in the /etc/rc3.d directory.
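As a rough illustration of what that means (the symlink names and sequence numbers below are made up; in practice you let a tool like update-rc.d or chkconfig create them so the ordering stays consistent):

# Create a start link so the service starts when the system enters runlevel 3
sudo ln -s /etc/init.d/mysql /etc/rc3.d/S20mysql
# Create a kill link so the service is stopped on shutdown (runlevel 0)
sudo ln -s /etc/init.d/mysql /etc/rc0.d/K20mysql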

If this sounds confusing, don’t worry – we will see what it all means in a minute.

System V Example

We’ll go back to our MySQL service example, this time with more theory.

Step 1 — Logging in to Debian Droplet

For the purpose of this part of the tutorial, we will go back to the Debian 6 Droplet we created in Part 1. Use the SSH command to connect to the server (Windows users can connect using a tool like PuTTY).

  • ssh sammy@your_server_ip

Step 2 — Looking at inittab

Run the following command to see the inittab file contents:

  • cat /etc/inittab | grep initdefault

The output should be something like this:

Output
id:2:initdefault:

The 2 after the id field shows the system is configured to start with runlevel 2. That’s the default runlevel. In this case Debian designates 2 as multi-user, text mode. If you execute the following command:

  • cat /etc/inittab | grep Runlevel

the output confirms this:

Output
# Runlevel 0 is halt.
# Runlevel 1 is single-user.
# Runlevels 2-5 are multi-user.
# Runlevel 6 is reboot.

Step 3 — Looking at the rc Directories

Run the following command to list the rc directories. You should see one for each runlevel, plus an rcS.d directory:

  • ls -ld /etc/rc*.d
Output
drwxr-xr-x 2 root root 4096 Jul 31 07:09 /etc/rc0.d
drwxr-xr-x 2 root root 4096 Jul 31 07:09 /etc/rc1.d
drwxr-xr-x 2 root root 4096 Jul 31 07:21 /etc/rc2.d
drwxr-xr-x 2 root root 4096 Jul 31 07:21 /etc/rc3.d
drwxr-xr-x 2 root root 4096 Jul 31 07:21 /etc/rc4.d
drwxr-xr-x 2 root root 4096 Jul 31 07:21 /etc/rc5.d
drwxr-xr-x 2 root root 4096 Jul 31 07:09 /etc/rc6.d
drwxr-xr-x 2 root root 4096 Jul 23 2012 /etc/rcS.d

Since the system boots in runlevel 2 (default init from the inittab file), scripts under the /etc/rc2.d directory will execute at system startup.

List the contents of this directory:

  • ls -l /etc/rc2.d

This shows the files are nothing but symbolic links, each pointing to script files under /etc/init.d:

Output
. . .
lrwxrwxrwx 1 root root 17 Jul 23 2012 S01rsyslog -> ../init.d/rsyslog
lrwxrwxrwx 1 root root 22 Jul 23 2012 S02acpi-support -> ../init.d/acpi-support
lrwxrwxrwx 1 root root 15 Jul 23 2012 S02acpid -> ../init.d/acpid
lrwxrwxrwx 1 root root 17 Jul 23 2012 S02anacron -> ../init.d/anacron
lrwxrwxrwx 1 root root 13 Jul 23 2012 S02atd -> ../init.d/atd
lrwxrwxrwx 1 root root 14 Jul 23 2012 S02cron -> ../init.d/cron
lrwxrwxrwx 1 root root 15 Jul 31 07:09 S02mysql -> ../init.d/mysql
lrwxrwxrwx 1 root root 13 Jul 23 2012 S02ssh -> ../init.d/ssh
. . .

We can see there are no K scripts here, only S (start) scripts. The scripts start known services like rsyslog, cron, or ssh.

Remember that the two digits after S decide the order of starting: for example, rsyslog starts before the cron daemon. We can also see that MySQL is listed here.

Step 4 — Looking at an Init Script

We now know that when a System V-compliant service is installed, it creates a shell script under the /etc/init.d directory. Check the shell script for MySQL:

  • ls -l /etc/init.d/my*
Output
-rwxr-xr-x 1 root root 5437 Jan 14 2014 /etc/init.d/mysql

To see what the start-up script actually looks like, read the file:

  • cat /etc/init.d/mysql | less

From the output, you will see it’s a large bash script.
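Most System V init scripts follow the same general shape. The skeleton below is only an illustration, not the actual Debian MySQL script, and my_service is a hypothetical daemon:

#!/bin/sh
# Skeleton of a System V init script (illustrative only)
case "$1" in
  start)
    echo "Starting my_service"
    /usr/local/bin/my_service &                  # hypothetical daemon binary
    echo $! > /var/run/my_service.pid            # remember its PID
    ;;
  stop)
    echo "Stopping my_service"
    kill "$(cat /var/run/my_service.pid)" 2>/dev/null
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  status)
    if kill -0 "$(cat /var/run/my_service.pid)" 2>/dev/null; then
      echo "my_service is running"
    else
      echo "my_service is stopped"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac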

Step 5 — Using chkconfig or sysv-rc-conf

In RHEL-based distributions like CentOS, a command called chkconfig can be used to enable or disable a service in System V. It can also list installed services and their runlevels.

The syntax for checking the status of a service for all runlevels on a CentOS system would be:

  • chkconfig --list | grep service_name

No such utility ships with Debian natively (update-rc.d installs or removes services from runlevels only). We can, however, install a custom tool called sysv-rc-conf to help us manage services.

Run the following command to install sysv-rc-conf:

  • sudo apt-get install sysv-rc-conf -y

Once the tool has been installed, simply execute this command to see the runlevel behavior for various services:

  • sudo sysv-rc-conf

The output will be a pretty graphical window as shown below. From here, we can clearly see what services are enabled for what runlevels (marked by X).

sysv-rc-conf Window showing X marks for various services for each runlevel

Using the arrow keys and SPACEBAR, we can enable or disable a service for one or more runlevels.

For now, leave the screen by pressing Q.

Step 6 — Testing MySQL Startup Behavior at Boot

As you can see from the screenshot in the previous section, and from our testing in Part 1 of the tutorial, MySQL is currently enabled on runlevels 2-5.

Run the command below to disable the MySQL Service:

  • sudo update-rc.d mysql disable
Output
update-rc.d: using dependency based boot sequencing
insserv: warning: current start runlevel(s) (empty) of script `mysql' overwrites defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `mysql' overwrites defaults (0 1 6).

Now run the command:

  • ls -l /etc/rc2.d

The output should show that the symlink from /etc/rc2.d to /etc/init.d/mysql now starts with a K:

Output
. . . lrwxrwxrwx 1 root root 15 Jul 31 07:09 K02mysql -> ../init.d/mysql . . .

In other words, MySQL will no longer start at the default runlevel (2).

This is what happens behind the scenes in System V when we enable and disable a service. As long as there is an S script under the default runlevel directory for the service, init will start that service when booting.

Enable the service again:

  • sudo update-rc.d mysql enable

Step 7 — Testing MySQL Startup Behavior on Crash

Let’s see how System V handles service crashes.

Remember that we made a change to the /etc/inittab file in Part 1 of this tutorial, to enable MySQL to start automatically after a crash. We added the following line:

/etc/inittab
ms:2345:respawn:/bin/sh /usr/bin/mysqld_safe

This was to ensure the MySQL service starts after a crash. To check if that happens, first reboot the server:

  • sudo reboot

Once the server comes back, SSH into it and check the MySQL process IDs as before:

  • ps -ef | grep mysql

Note the process IDs for mysqld_safe and mysqld. In our case, these were 907 and 1031, respectively:

Output
root 907 1 0 07:30 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe
mysql 1031 907 0 07:30 ? 00:00:00 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/run/mysqld/mysqld.sock --port=3306
root 1032 907 0 07:30 ? 00:00:00 logger -t mysqld -p daemon.error
root 2550 2532 0 07:31 pts/0 00:00:00 grep mysql

Kill the processes again with a -9 switch (substitute the PIDs with those of your Debian system):

  • sudo kill -9 907
  • sudo kill -9 1031


Wait for five minutes or so and then execute the command:

  • sudo service mysql status

The output will show MySQL service is running, starting with this line:

Output
/usr/bin/mysqladmin Ver 8.42 Distrib 5.1.73, for debian-linux-gnu on x86_64

If you run the ps -ef | grep mysql command again, you will see that both the mysqld_safe and mysqld processes have come up.

Try to kill the process a few more times, and in each case it should respawn after five minutes.

This is the reason we added that extra line in /etc/inittab: this is how you configure a System V service to respawn in a crash. There is a detailed explanation of the syntax for this line in Part 1.

However, be careful when you add an automatic restart for a service: if a service tries to respawn and fails more than ten times within two minutes, Linux will disable the respawn for the next five minutes. This is so the system remains stable and does not run out of computing resources.

If you happen to receive a message in the console about such an event, or find one in the system logs, you will know there’s a problem with the application that needs to be fixed, since it keeps crashing.

Upstart Introduction

Classic SysVinit had been part of mainstream Linux distributions for a long time before Upstart came along.

As the Linux market grew, serialized ways of loading jobs and services became more time-consuming and complex. At the same time, as more and more modern devices like hot-pluggable storage media entered the market, SysVinit was found to be incapable of handling them quickly.

The need for faster loading of the OS, graceful clean-up of crashed services, and predictable dependency between system services drove the need for a better service manager. The developers at Ubuntu came up with another means of initialization, the Upstart daemon.

Upstart init is better than System V init in a few ways:

  • Upstart does not deal with arcane shell scripts to load and manage services. Instead, it uses simple configuration files that are easy to understand and modify
  • Upstart does not load services serially like System V. This cuts down on system boot time
  • Upstart uses a flexible event system to customize how services are handled in various states
  • Upstart has better ways of handling how a crashed service should respawn
  • There is no need to keep a number of redundant symbolic links, all pointing to the same script
  • Upstart is backwards-compatible with System V. The /etc/init.d/rc script still runs to manage native System V services

Upstart Events

Upstart allows for multiple events to be associated with a service. This event-based architecture allows Upstart to treat service management flexibly.

Each event can fire off a shell script that takes care of that event.

Upstart events include:

  • Starting
  • Started
  • Stopping
  • Stopped

In between these events, a service can be in a number of states, like:

  • waiting
  • pre-start
  • starting
  • running
  • pre-stop
  • stopping
  • etc.

Upstart can take actions for each of these states as well, creating a very flexible architecture.

Upstart Init Sequence

Like System V, Upstart also runs the /etc/init.d/rc script at startup. This script executes any System V init scripts normally.

Upstart also looks under the /etc/init directory and executes the shell commands in each service config file.

Upstart Configuration Files

Upstart uses configuration files to control services.

Upstart does not use Bash scripts the way System V does. Instead, Upstart uses service configuration files with a naming standard of service_name.conf.

The files have plain text content with different sections, called stanzas. Each stanza describes a different aspect of the service and how it should behave.

Different stanzas control different events for the service, like pre-start, start, pre-stop or post-stop.

The stanzas themselves contain shell commands. Therefore, it’s possible to call multiple actions for each event for each service.

Each configuration file also specifies two things:

  • Which runlevels the service should start and stop on
  • Whether the service should respawn if it crashes

Directory Structure

The Upstart configuration files are located under the /etc/init directory (not to be confused with /etc/init.d).

Upstart Example

Let’s take a look at how Upstart handles MySQL Server again, this time with more background knowledge.

Step 1 — Logging in to Ubuntu Droplet

Go back to the Ubuntu 14.04 Droplet we created in Part 1.

Use the SSH command to connect to the server (Windows users can connect using a tool like PuTTY).

  • ssh sammy@your_server_ip

Step 2 — Looking at the init and rc Directories

Most of Upstart’s config files are in the /etc/init directory. This is the directory you should use when creating new services.

Once logged into the server, execute the following command:

  • sudo ls -l /etc/init/ | less

The result will show a large number of service configuration files, one screen at a time. These are services that run natively under Upstart:

Output
total 356
. . .
-rw-r--r-- 1 root root 297 Feb 9 2013 cron.conf
-rw-r--r-- 1 root root 489 Nov 11 2013 dbus.conf
-rw-r--r-- 1 root root 273 Nov 19 2010 dmesg.conf
. . .
-rw-r--r-- 1 root root 1770 Feb 19 2014 mysql.conf
-rw-r--r-- 1 root root 2493 Mar 20 2014 networking.conf

Press Q to exit less.

Compare this with the native System V init services in the system:

  • sudo ls -l /etc/rc3.d/* | less

There will be only a handful:

Output
-rw-r--r-- 1 root root 677 Jun 14 23:31 /etc/rc3.d/README
lrwxrwxrwx 1 root root 15 Apr 17 2014 /etc/rc3.d/S20rsync -> ../init.d/rsync
lrwxrwxrwx 1 root root 24 Apr 17 2014 /etc/rc3.d/S20screen-cleanup -> ../init.d/screen-cleanup
lrwxrwxrwx 1 root root 19 Apr 17 2014 /etc/rc3.d/S70dns-clean -> ../init.d/dns-clean
lrwxrwxrwx 1 root root 18 Apr 17 2014 /etc/rc3.d/S70pppd-dns -> ../init.d/pppd-dns
lrwxrwxrwx 1 root root 26 Apr 17 2014 /etc/rc3.d/S99digitalocean -> ../init.d//rc.digitalocean
lrwxrwxrwx 1 root root 21 Apr 17 2014 /etc/rc3.d/S99grub-common -> ../init.d/grub-common
lrwxrwxrwx 1 root root 18 Apr 17 2014 /etc/rc3.d/S99ondemand -> ../init.d/ondemand
lrwxrwxrwx 1 root root 18 Apr 17 2014 /etc/rc3.d/S99rc.local -> ../init.d/rc.local

Step 3 — Looking at an Upstart File

We’ve already seen the mysql.conf file in Part 1 of this tutorial. So, let’s open another config file: the one for the cron daemon.

  • sudo nano /etc/init/cron.conf

As you can see, this is a fairly simple config file for the cron daemon:

/etc/init/cron.conf
# cron - regular background program processing daemon
#
# cron is a standard UNIX program that runs user-specified programs at
# periodic scheduled times

description     "regular background program processing daemon"

start on runlevel [2345]
stop on runlevel [!2345]

expect fork
respawn

exec cron

The important fields to be mindful of here are start on, stop on and respawn.

The start on directive tells Upstart to start the cron daemon when the system enters runlevel 2, 3, 4, or 5. Runlevels 2, 3, and 4 are multi-user text modes with networking enabled, and 5 is multi-user graphical mode. The service does not run in any other runlevels (like 0, 1, or 6).

The expect fork directive tells Upstart that the process will fork once as it detaches from the console and runs in the background, so Upstart can track the correct PID.

Next comes the respawn directive. This tells the system that cron should start automatically if it crashes for any reason.

Exit the editor without making any changes.

The cron config file is a fairly small configuration file. The MySQL configuration file is structurally similar to the cron configuration file; it also has stanzas for start, stop, and respawn. In addition, it also has two script blocks for pre-start and post-start events. These code blocks tell the system what to execute when the mysqld process is either coming up or has already come up.
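As a rough sketch of what such stanzas look like (this is not the actual content of mysql.conf; the checks here are simplified placeholders):

pre-start script
    # Runs before the main process starts; for example, make sure
    # the data directory exists
    [ -d /var/lib/mysql ] || mkdir -p /var/lib/mysql
end script

post-start script
    # Runs after the main process has started; for example, wait
    # until the server answers a ping before declaring it up
    for i in $(seq 1 30); do
        mysqladmin ping >/dev/null 2>&1 && break
        sleep 1
    done
end script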

For practical help on making your own Upstart file, see this tutorial about Upstart.

Step 4 — Testing MySQL Startup Behavior at Boot

We know the MySQL instance on our Ubuntu 14.04 server is set to auto-start at boot time by default. Let’s see how we can disable it.

In Upstart, disabling a service depends on the existence of a file under /etc/init/ called service_name.override. The content of the file should be a simple word: manual.

To see how we can use this file to disable MySQL, execute the following command to create this override file for MySQL:

  • sudo nano /etc/init/mysql.override

Add this single line:

/etc/init/mysql.override
manual

Save your changes.
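Alternatively, assuming the default /etc/init path, you could have created the same file in a single step:

  • echo manual | sudo tee /etc/init/mysql.override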

Next, reboot the server:

  • sudo reboot

Once the server comes back online, check the status of the service:

  • sudo initctl status mysql

The output should be:

Output
mysql stop/waiting

This means MySQL didn’t start up.

Check if the start directive has changed in the MySQL service configuration file:

  • sudo cat /etc/init/mysql.conf | grep "start on"

It should still be the same:

Output
start on runlevel [2345]

This means that checking the .conf file in the init directory is not the sole factor to see if the service will start at the appropriate levels. You also need to make sure the .override file doesn’t exist.
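A quick way to spot every service that has been switched to manual start this way is to look for override files (assuming they live in the default /etc/init directory):

  • ls /etc/init/*.override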

To reenable MySQL, delete the override file and reboot the server:

  • sudo rm -f /etc/init/mysql.override
  • sudo reboot

Once the server reboots, remotely connect to it.

Running the sudo initctl status mysql command will show the service has started automatically.

Step 5 — Testing MySQL Startup Behavior on Crash

By default, MySQL comes up automatically after a crash.

To stop MySQL from doing this, open the /etc/init/mysql.conf service configuration file:

  • sudo nano /etc/init/mysql.conf

Comment out both the respawn directives.

/etc/init/mysql.conf
# respawn
# respawn limit 2 5

Run the following commands to restart the service:

  • sudo initctl stop mysql
  • sudo initctl start mysql

We are explicitly stopping and starting the service because our test showed initctl restart or initctl reload would not work here.

The second command to start the service shows the PID MySQL started with:

Output
mysql start/running, process 1274

Note the PID for your instance of MySQL. If you crash the mysql process now, it won’t be coming up automatically. Kill the process ID (replacing it with your own number):

  • sudo kill -9 1274

Now check its status:

  • sudo initctl status mysql
Output
mysql stop/waiting

Try to find the status a few more times, giving some time between each. In every case, MySQL will still be stopped. This is happening because the service configuration file does not have the respawn directives anymore.

Part 1 of the tutorial has a more detailed explanation of the respawn directives.

When would you not want an Upstart service to come up after a reboot or crash?

Say you have upgraded your Linux kernel or put the latest patch in. You don’t want any drama; you just want the server to come up. You can largely eliminate risks by disabling auto-start for any Upstart process.

If your service comes up but keeps crashing, you can first stop it and then change its respawn behavior as well.

systemd Introduction

The latest in Linux init daemons is systemd. In fact it’s more than an init daemon: systemd is a whole new framework that encompasses many components of a modern Linux system.

One of its functions is to work as a system and service manager for Linux. In this capacity, one of the things systemd controls is how a service should behave if it crashes or the machine reboots. You can read about systemd’s systemctl here.

systemd is backward-compatible with System V commands and initialization scripts. That means any System V service will also run under systemd. This is possible because most Upstart and System V administrative commands have been modified to work under systemd.

In fact, if we run the ps -ef | grep systemd command on an operating system that uses it, we won’t see anything, because systemd renames itself to init at boot time. There is an /sbin/init file that’s a symbolic link to the systemd binary.

systemd Configuration Files: Unit Files

At the heart of systemd are unit files. Each unit file represents a system resource. The main difference between systemd and the other two init methods is that systemd is responsible for the initialization of not only service daemons but also other types of resources, such as sockets, devices, mount points, and operating system paths. A resource can be any of these.

Information about the resource is kept track of in the unit file.

Each unit file represents a specific system resource and has a naming style of service_name.unit_type.

So, we will have files like dbus.service, sshd.socket, or home.mount.

As we will see later, service unit files are simple text files (like Upstart .conf files) with a declarative syntax. These files are pretty easy to understand and modify.

Directory Structure

In Red Hat-based systems like CentOS, unit files are located in two places. The main location is /lib/systemd/system/.

Custom-created unit files or existing unit files modified by system administrators will live under /etc/systemd/system.

If a unit file with the same name exists in both locations, systemd will use the one under /etc. If a service is enabled to start at boot time or under any other target/runlevel, a symbolic link for that service unit file is created under the appropriate directories in /etc/systemd/system. These symbolic links point back to the files with the same name under /lib/systemd/system.

systemd Init Sequence: Target Units

A special type of unit file is a target unit.

A target unit filename is suffixed by .target. Target units are different from other unit files because they don’t represent one particular resource. Rather, they represent the state of the system at any one time.

Target units do this by grouping and launching multiple unit files that should be part of that state. systemd targets can therefore be loosely compared to System V runlevels, although they are not the same.

Each target has a name instead of a number. For example, we have multi-user.target instead of runlevel 3 or reboot.target instead of runlevel 6.

When a Linux server boots with, say, multi-user.target, it’s essentially bringing the server to runlevel 2, 3, or 4, which is the multi-user text mode with networking enabled.

How it brings the server up to that stage is where the difference lies. Unlike System V, systemd does not bring up services sequentially. Along the way, it can check for the existence of other services or resources and decide the order of their loading. This makes it possible for services to load in parallel.

Another difference between target units and runlevels is that in System V, a Linux system could exist in only one runlevel. You could change the runlevel, but the system would exist in that new runlevel only. With systemd, target units can be inclusive, which means when a target unit activates, it can ensure other target units are loaded as part of it.

For example, a Linux system that boots with a graphical user interface will have the graphical.target activated, which in turn will automatically ensure multi-user.target is loaded and activated as well.

(In System V terms, that would be like having runlevels 3 and 5 activated at the same time.)

The table below compares runlevels and targets:

Runlevel (System V init) Target Units (Systemd)
runlevel 0 poweroff.target
runlevel 1 rescue.target
runlevel 2, 3, 4 multi-user.target
runlevel 5 graphical.target
runlevel 6 reboot.target

systemd default.target

default.target is equivalent to the default runlevel.

In System V, we had the default runlevel defined in a file called inittab. In systemd, that file is replaced by default.target. The default target unit file lives under /etc/systemd/system directory. It’s a symbolic link to one of the target unit files under /lib/systemd/system.

When we change the default target, we are essentially recreating that symbolic link and changing the system’s runlevel.
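For reference, systemctl can inspect or change this link directly; we won’t change the default target in this tutorial:

  • sudo systemctl get-default
  • sudo systemctl set-default multi-user.target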

The inittab file in System V also specified which directory Linux will execute its init scripts from: it could be any of the rcn.d directories. In systemd, the default target unit determines which resource units will be loaded at boot time.

All those units are activated, but not all in parallel or all in sequence. How a resource unit loads may depend on other resource units it wants or requires.

systemd Dependencies: Wants and Requires

The reason for this discussion on unit files and target units is to highlight how systemd addresses dependency among its daemons.

As we saw before, Upstart ensures parallel loading of services using configuration files. In System V, a service could start in particular runlevels, but it also could be made to wait until another service or resource became available. In similar fashion, systemd services can be made to load in one or more targets, or to wait until another service or resource becomes active.

In systemd, a unit that requires another unit will not start until the required unit is loaded and activated. If the required unit fails for some reason while the first unit is active, the first unit will also stop.

If you think about it, this ensures system stability. A service that requires a particular directory to be present can thus be made to wait until the mount point for that directory is active. On the other hand, a unit that wants another unit does not impose such restrictions: it won’t stop if the wanted unit stops while the caller is active. An example of this would be the non-essential services that come up in graphical-target mode.
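Here is a minimal, hypothetical unit file that illustrates the difference; my-app.service and its binary path are made up for this example:

[Unit]
Description=Example application used to illustrate dependencies
# Hard dependency: if network.target fails to activate, my-app is not started,
# and if it stops while my-app is active, my-app is stopped too
Requires=network.target
# Soft dependency: my-app keeps running even if postfix.service stops
Wants=postfix.service
# Ordering only: start my-app after these units have been started
After=network.target postfix.service

[Service]
ExecStart=/usr/local/bin/my-app

[Install]
WantedBy=multi-user.target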

systemd Example

It’s time for our deep dive into MySQL’s startup behavior under systemd.

Step 1 — Log in to CentOS Droplet

To understand all these concepts and how they relate to enabling a service to auto-start, let’s go back to the CentOS 7 Droplet that we created in Part 1.

Use the SSH command to connect to the server (Windows users can connect using a tool like PuTTY).

  • ssh sammy@your_server_ip

Step 2 — Looking at the default.target File and Dependencies

This is a long section, because we’re going to follow the .target rabbit-trail as far as we can. systemd’s startup sequence follows a long chain of dependencies.

default.target

The default.target file controls which services start during a normal server boot.

Execute the following command to list the default target unit file:

  • sudo ls -l /etc/systemd/system/default.target

This shows output like the following:

Output
lrwxrwxrwx. 1 root root 37 Jul 8 2014 /etc/systemd/system/default.target -> /lib/systemd/system/multi-user.target

As we can see, the default target is actually a symbolic link to the multi-user target file under /lib/systemd/system/. So, the system is supposed to boot under multi-user.target, which is similar to runlevel 3.

multi-user.target.wants

Next, execute the following command to check all the services the multi-user.target file wants:

  • sudo ls -l /etc/systemd/system/multi-user.target.wants/*.service

This should show an output like this:

Output
. . .
lrwxrwxrwx. 1 root root 37 Jul 8 2014 /etc/systemd/system/multi-user.target.wants/crond.service -> /usr/lib/systemd/system/crond.service
. . .
lrwxrwxrwx 1 root root 38 Jul 31 22:02 /etc/systemd/system/multi-user.target.wants/mysqld.service -> /usr/lib/systemd/system/mysqld.service
lrwxrwxrwx. 1 root root 46 Jul 8 2014 /etc/systemd/system/multi-user.target.wants/NetworkManager.service -> /usr/lib/systemd/system/NetworkManager.service
lrwxrwxrwx. 1 root root 39 Jul 8 2014 /etc/systemd/system/multi-user.target.wants/postfix.service -> /usr/lib/systemd/system/postfix.service
lrwxrwxrwx. 1 root root 39 Jul 8 2014 /etc/systemd/system/multi-user.target.wants/rsyslog.service -> /usr/lib/systemd/system/rsyslog.service
lrwxrwxrwx. 1 root root 36 Jul 8 2014 /etc/systemd/system/multi-user.target.wants/sshd.service -> /usr/lib/systemd/system/sshd.service
. . .

We can see these are all symbolic link files, pointing back to actual unit files under /lib/systemd/system/. We can also see that mysqld.service is part of multi-user.target.

The same information can be found if you execute this command to filter the output:

  • sudo systemctl show --property "Wants" multi-user.target | fmt -10 | grep mysql
Output
mysqld.service

Other than multi-user.target, there are different types of targets like system-update.target or basic.target.

To see what targets our multi-user target depends on, execute the following command:

  • sudo systemctl show --property "Requires" multi-user.target | fmt -10
Output
Requires=basic.target

So to start the system in multi-user.target mode, basic.target will have to load first.

basic.target

To see what other targets basic.target depends on, execute this command:

  • sudo systemctl show --property "Requires" basic.target | fmt -10

The output will be:

Output
Requires=sysinit.target

sysinit.target

Going recursively, we can see if there are any required units for sysinit.target. There are none. However, we can see what services are wanted by sysinit.target:

  • sudo systemctl show --property "Wants" sysinit.target | fmt -10

This will show a number of services wanted by sysinit.

Output
Wants=local-fs.target
swap.target
cryptsetup.target
systemd-udevd.service
systemd-update-utmp.service
systemd-journal-flush.service
plymouth-read-write.service
. . .

As you can see, the system does not stay in one target only. It loads services in a dependent fashion as it transitions between targets.

Step 3 — Looking at a Unit File

Going a step further now, let’s look inside a service unit file. We saw the MySQL service unit file in Part 1 of this tutorial, and we will use it again shortly, but for now let’s open another service unit file, the one for sshd:

  • sudo nano /etc/systemd/system/multi-user.target.wants/sshd.service

It looks like this:

/etc/systemd/system/multi-user.target.wants/sshd.service
[Unit]
Description=OpenSSH server daemon
After=syslog.target network.target auditd.service

[Service]
EnvironmentFile=/etc/sysconfig/sshd
ExecStartPre=/usr/sbin/sshd-keygen
ExecStart=/usr/sbin/sshd -D $OPTIONS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartSec=42s

[Install]
WantedBy=multi-user.target

Just like an Upstart daemon config file, this service unit file is clean and easy to understand.

The first important bit to understand is the After clause. This says the sshd service needs to load after the syslog and network targets and the audit logging service are loaded.

The file also shows the service is wanted by the multi-user.target, which means the target will load this service, but it won’t shut down or crash if sshd fails.

Since multi-user.target is the default target, sshd daemon is supposed to start at boot time.

Exit the editor.

Step 4 — Testing MySQL Startup Behavior at Boot

In Part 1 of the tutorial, we left the MySQL service enabled and running. Let’s see how to change that.

In the last section, we ran a command to confirm that mysqld.service is wanted by multi-user.target. When we listed the contents of the /etc/systemd/system/multi-user.target.wants/ directory, we saw a symbolic link pointing back to the original service unit under /usr/lib/systemd/system/.

Run the following command to disable the service so it does not auto-start at boot time:

  • sudo systemctl disable mysqld.service

Now, run this command to check if MySQL is still wanted by multi-user.target:

  • sudo systemctl show --property "Wants" multi-user.target | fmt -10 | grep mysql

Nothing will be returned. Run the command below to check if the symbolic link still exists:

  • sudo ls -l /etc/systemd/system/multi-user.target.wants/mysql*

The link doesn’t exist:

Output
ls: cannot access /etc/systemd/system/multi-user.target.wants/mysql*: No such file or directory

If you’d like, try rebooting the server. MySQL should not come up.

Now reenable the service:

  • sudo systemctl enable mysqld.service

The link will come back:

  • sudo ls -l /etc/systemd/system/multi-user.target.wants/mysql*
Output
lrwxrwxrwx 1 root root 38 Aug 1 04:43 /etc/systemd/system/multi-user.target.wants/mysqld.service -> /usr/lib/systemd/system/mysqld.service

(If you rebooted before, you should start MySQL again.)

As you can see, enabling or disabling a systemd service creates or removes the symbolic link from the default target’s wants directory.
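Rather than listing symlinks, you can also query this state directly; the command prints enabled or disabled accordingly:

  • sudo systemctl is-enabled mysqld.service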

Step 5 — Testing MySQL Startup Behavior on Crash

MySQL will currently come up automatically after a crash. Let’s see how to disable that.

Open the MySQL service unit file in an editor:

  • sudo nano /etc/systemd/system/multi-user.target.wants/mysqld.service

After the header information, the contents of the file look like this:

/etc/systemd/system/multi-user.target.wants/mysqld.service
[Unit]
Description=MySQL Community Server
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target
Alias=mysql.service

[Service]
User=mysql
Group=mysql

# Execute pre and post scripts as root
PermissionsStartOnly=true

# Needed to create system tables etc.
ExecStartPre=/usr/bin/mysql-systemd-start pre

# Start main service
ExecStart=/usr/bin/mysqld_safe

# Don't signal startup success before a ping works
ExecStartPost=/usr/bin/mysql-systemd-start post

# Give up if ping don't get an answer
TimeoutSec=600

Restart=always
PrivateTmp=false

As we saw in Part 1, the value of the Restart parameter is set to always (for sshd, this was set to on-failure only). This means the MySQL service will restart for clean or unclean exit codes or timeouts.

The man page for systemd.service describes the following behavior for the Restart settings (the no setting never triggers a restart):

Exit cause                  Restart= values that restart the service
Clean exit code or signal   always, on-success
Unclean exit code           always, on-failure
Unclean signal              always, on-failure, on-abnormal, on-abort
Timeout                     always, on-failure, on-abnormal
Watchdog                    always, on-failure, on-abnormal, on-watchdog

In a systemd service unit file, two parameters, Restart and RestartSec, control crash behavior. The first specifies under what conditions the service should restart, and the second defines how long it should wait before restarting.

Comment out the Restart directive, save the file, and exit the editor. This will disable the restart behavior.

/etc/systemd/system/multi-user.target.wants/mysqld.service
# Restart=always

Next, reload the systemd daemon, followed by a restart of the mysqld service:

  • sudo systemctl daemon-reload
  • sudo systemctl restart mysqld.service

Next, find the Main PID of the service by running this command:

  • sudo systemctl status mysqld.service
Output
. . . Main PID: 11217 (mysqld_safe)

Using the kill -9 command, kill the main PID, using your own number.

  • sudo kill -9 11217

Running the sudo systemctl status mysqld.service again will show that the service has failed:

  • sudo systemctl status mysqld.service
Output
mysqld.service - MySQL Community Server
   Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled)
   Active: failed (Result: signal) since Sun 2015-06-21 02:28:17 EDT; 1min 33s ago
  Process: 2566 ExecStartPost=/usr/bin/mysql-systemd-start post (code=exited, status=0/SUCCESS)
  Process: 2565 ExecStart=/usr/bin/mysqld_safe (code=killed, signal=KILL)
  Process: 2554 ExecStartPre=/usr/bin/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 2565 (code=killed, signal=KILL)

Jun 21 02:20:09 test-centos7 systemd[1]: Starting MySQL Community Server...
Jun 21 02:20:09 test-centos7 mysqld_safe[2565]: 150621 02:20:09 mysqld_safe Logging to '/var/log/mysqld.log'.
Jun 21 02:20:09 test-centos7 mysqld_safe[2565]: 150621 02:20:09 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Jun 21 02:20:10 test-centos7 systemd[1]: Started MySQL Community Server.
Jun 21 02:28:16 test-centos7 systemd[1]: mysqld.service: main process exited, code=killed, status=9/KILL
Jun 21 02:28:17 test-centos7 systemd[1]: Unit mysqld.service entered failed state.

Try to find the service status a few times, and each time the service will be shown as failed.

So, we have emulated a crash where the service has stopped and hasn’t come back. This is because we have instructed systemd not to restart the service.

Now, if you edit the mysqld.service unit file again, uncomment the Restart parameter, save it, reload the systemd daemon, and finally start the service, it should be back to what it was before.

This is how a native systemd service can be configured to auto-start after crash. All we have to do is to add an extra directive for Restart (and optionally RestartSec) under the [Service] section of the service unit file.
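For example, a [Service] section that restarts a daemon only after unclean exits, waiting five seconds between attempts, might look like this minimal sketch (my-app and its path are hypothetical):

[Service]
# Hypothetical daemon binary used only for illustration
ExecStart=/usr/local/bin/my-app
# Restart only after unclean exits, signals, timeouts, or watchdog events
Restart=on-failure
# Wait five seconds before each automatic restart
RestartSec=5s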

Conclusion

So this is how Linux handles service startup. We have seen how System V, Upstart, and systemd init processes work and how they relate to auto-starting a service after a reboot or crash.

The declarative syntax of Upstart config files or systemd unit files is an improvement over the arcane System V init scripts.

As you work with your own Linux environment, check your distribution’s version and see what init daemon it supports.

It is worth thinking about where you would want to enable a service and where you would want to disable it. In most cases, you don’t have to change anything for third-party applications or native Linux daemons. It’s only when you create your own service-based applications that you have to think about their startup and respawn behavior.
