Control panels make a web server a cinch to use. If you sign up for shared web hosting, you'll be given a login for the control panel to manage just your account. Even if you run a whole server (virtual or dedicated), there are great advantages to using a control panel. Some control panels are free (like Virtualmin), and others you have to pay for (like cPanel).
One of those advantages is backups. A control panel will have a feature that lets you take a backup of your entire site, either on a schedule or manually. If you're running a whole server, you can take a backup of each account on the server, plus the configuration files you'd need to recreate the server on fresh hardware. This is vital: hosting providers go out of business; servers and hard drives fail.
Sometimes, though, I want to run a really simple server. I don't want the memory overhead, or frankly the complexity, of running a control panel. I just want to set it up to host two or three websites and leave it to run.
I've recently published a couple of posts about how to set up lighttpd as your web hosting daemon for doing just this. First came a post about the basic setup process, including setting up configuration files for each virtual host. Then I wrote another on how to enable SSL/HTTPS on this setup.
But just because you're running without a control panel, that doesn't mean you can afford to run without backups. Over the years, I've honed a set of bash scripts to do precisely this for me, and I'm sharing them here in the hope they help others.
Unlike with a control panel, there is no automated restore process: you'd have to untar the backup file and restore manually.
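For example, pulling files and a database back out of an account backup might look something like this (the date, username, and database name are all illustrative):

# Unpack the account backup into a scratch directory
mkdir /tmp/restore && cd /tmp/restore
tar -xzf /backup/2025-01-01/accounts/example.tar.gz
# Home directory contents are under homedir/
cp -a homedir/public_html /home/example/
# Reload a database dump (the database must already exist)
gunzip < example_wordpress.sql.gz | mysql -u root -p example_wordpress
# Reinstate the user's crontab, if one was backed up
crontab -u example crontab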
Overview
There are three scripts:
/usr/local/sbin/backup_server
is the one to call once a day via cron.
- It first creates a directory in /backup with today's date as its name.
- It then iterates over each account on the server, and backs up that account (using the second script, below).
- It then backs up various system configuration files (using the third script, below).
- It then removes all but the last N days of backups, so that the backup partition / directory doesn't progressively fill up the whole server.
- Lastly, it prints a simple report of what backups are stored on the server, so that an administrator can easily monitor things by email.
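The result is a directory layout along these lines (the dates and username here are illustrative):

/backup/2025-01-01/accounts/example.tar.gz
/backup/2025-01-02/accounts/example.tar.gz
/backup/system_files/
/backup/system_files.old/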
/usr/local/sbin/backup_account
backs up a single account. The following are included:
- The account's home directory. There is a way to exclude specific files or subdirectories that consume a lot of space, if you wish to do so.
- Any MySQL databases used by the account, if there are any.
- The account's crontab, if there is one.
- The lighttpd configuration file for the website hosted by that account, if there is one.
/backup/scripts/archive_system_files
backs up system configuration files.
The backup_server script needs to be told which accounts to back up. This is done by creating a simple file at /etc/backup-account-list with a list of the account usernames, one per line.
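For example, the file might read (these usernames are purely illustrative):

example
anothersite
thirdclient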
The archive_system_files script needs to be told which system files to back up. This is done by creating a simple file at /backup/scripts/systemfiles.txt (alongside the script itself), each line of which contains either a full filename or a full directory name to back up. I'll include a sample of this below to show how it might be used.
I'll now give each script in full, following each one with some notes explaining what it's doing, how to make any modifications, and what you need to know in order to use it.
/usr/local/sbin/backup_server
#!/bin/bash
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- $(hostname -a) Backing up accounts"
BACKUP_DIR=/backup/`date +%Y-%m-%d`/accounts
mkdir -p $BACKUP_DIR
chmod 777 $BACKUP_DIR
for USERNAME in $(cat /etc/backup-account-list)
do
/usr/local/sbin/backup_account $USERNAME $BACKUP_DIR
done
chmod 755 $BACKUP_DIR
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Account backup complete"
echo
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Backing up system files"
/backup/scripts/archive_system_files
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Copying backups to offsite backup server"
# If you want to send backups to an offsite server, something like this goes here:
# rsync -ad --stats -e "ssh -p 22" /backup/system_files account@server.com:. > /dev/null
# rsync -ad --stats -e "ssh -p 22" /backup/`date +%Y-%m-%d`/accounts account@server.com:. > /dev/null
# End of offsite backup lines
echo
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Trimming older backups"
# In the "tail" line below, set the "+" number to be one higher than the number of directories to keep
find /backup/2* \
-maxdepth 0 \
-type d \
-printf '%T@ %p\0' \
| sort -r -z -n \
| awk 'BEGIN { RS="\0"; ORS="\0"; FS="" } { sub("^[0-9]*(.[0-9]*)? ", "\n"); print }' \
| awk 'NF > 0' \
| tail -n +8 \
| xargs -d '\n' rm -fr
echo
echo "========================================
Directory Backups Size
========================================"
for filepath in `ls /backup/ | grep ^2 | sort`; do
diskuse=`du --max-depth=0 -h /backup/$filepath | cut -f1`
backupcount=0
if [ -d /backup/$filepath/accounts ] ; then
backupcount=`ls /backup/$filepath/accounts/ | grep tar.gz$ -c`
fi
echo " $filepath `printf '%6s' $backupcount` `printf '%10s' $diskuse`"
done
echo "========================================"
echo
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- $(hostname -a) Backup Complete"
Notes:
- As described above, this script does four basic things:
- Create today's datestamped backup directory
- For each account, call backup_account
- Call archive_system_files to back up server configuration
- Trim old backups, and report
- This needs to run as root. As explained below, each account's backup will run with the credentials of that user.
- This script is called as is, without any command line arguments.
- To change the backup storage location, change the BACKUP_DIR assignment near the top of the script. The commented-out rsync lines, and the /backup paths in the trimming and reporting sections at the end, would also need changing to match.
- If you want to store backup_account or archive_system_files in a different location, change the paths where each is called.
- It's highly recommended to store a copy of your backups somewhere other than on the server itself; otherwise a hardware failure would take your backups with it. To do this, uncomment and modify the two rsync lines in the offsite backup section. (The script runs unattended, so the remote account needs passwordless SSH key authentication.)
- Currently, this keeps 7 days' backups. This is set by the tail -n +8 in the trimming section: to retain a different number of days, change this number to one larger than the number of days required. For example, to keep 14 days' backups, use tail -n +15.
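For example, a root crontab entry along these lines would run the backup at 02:30 each day and mail the report to an administrator (the address is illustrative, and this assumes your server can send mail):

MAILTO=admin@example.com
30 2 * * * /usr/local/sbin/backup_server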
/usr/local/sbin/backup_account
#!/bin/bash
# A script to take an archive of an account on the server.
# Run as root, it takes two arguments - the username to backup, and the target directory
# Run as another user, it takes just the target directory, as the user's own account will be backed up
# If the home directory includes a file called .backup_exclude, this will be used for tar exclude patterns
# To back up MySQL databases, a user needs to be created with at least the Select and Lock Tables privileges
# There then needs to be an entry in that linux user's .my.cnf file, with a heading of [clientbackup], user=[username], password="[password]"
USER=$(whoami)
USERID=$(id -u)
ROOT_USAGE="Usage: backup_account username target-directory"
USER_USAGE="Usage: backup_account target-directory"
if [ "$USERID" == "0" ]
then
if [ ! "$#" == "2" ]
then
echo $ROOT_USAGE
else
if [ ! -d $2 ]
then
echo $ROOT_USAGE
echo "Target directory does not exist"
exit
else
USERNAME=$(id -u $1 2>&1 | grep "no such")
if [ ! "$USERNAME" == "" ]
then
echo $USERNAME
echo $ROOT_USAGE
echo "User $1 does not exist"
else
su -s "/bin/bash" $1 -c "$0 $(realpath $2)"
fi
fi
fi
else
if [ ! "$#" == "1" ]
then
echo $USER_USAGE
else
if [ ! -d $1 ]
then
echo $USER_USAGE
echo "Target directory does not exist or cannot be accessed by $USER"
exit
else
echo
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Backing up username $USER"
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Tarring home directory"
HOME=$(realpath ~)
cd $HOME
tar --transform 's,^\.,homedir,' $(if [ -f .backup_exclude ] ; then echo "--exclude-from=.backup_exclude"; fi) -p -c -f $1/$USER.tar .
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Home directory tarball complete"
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Tarball contains $(tar --list -f $1/$USER.tar | grep -v "/$" -c) files. File size: $(du -h "$1/$USER.tar" | awk '{print $1}')"
TMP_DIR=`/bin/mktemp -d -t`
cd $TMP_DIR
crontab -l > crontab 2>&1
# We now need a safe way to analyse that file to see if it is reporting no crontab
# Note: Can't use tr if it would result in a file being empty
sed "s/no crontab for $USER//" crontab | tr '\n' '~' | sed 's/~//g' > .crontab
if [ -s .crontab ]
then
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Crontab ... done"
else
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Crontab ... empty"
rm crontab -f
fi
rm .crontab -f
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Backing up MySQL databases"
for dbname in $(find /var/lib/mysql -maxdepth 1 -name "${USER}_*" -type d | sed 's,/var/lib/mysql/,,')
do
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Backing up database: $dbname"
DBAUTH=$(mysql --defaults-group-suffix=backup -e "show databases;" 2> $HOME/.sql-errors | grep "^$dbname")
if [ ! "$(cat $HOME/.sql-errors)" == "" ] ; then
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- ERROR: Could not connect to MySQL with the username and password supplied. Check the username, and the password stored in .my.cnf"
else
if [ ! "$DBAUTH" == "$dbname" ] ; then
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- ERROR: A connection was made to the MySQL server, but the backup user does not have permission to access database $dbname"
else
mysqldump --defaults-group-suffix=backup --single-transaction --routines $dbname | gzip > $dbname.sql.gz
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Finished backing up database: $dbname"
fi
fi
rm -f $HOME/.sql-errors
done
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- All MySQL databases backed up"
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Checking for lighttpd vhost conf files"
docroot=$(echo 'server.document-root\s*=\s*"'"$HOME/public_html"'"')
for config in $(grep -l "$docroot" /etc/lighttpd/vhosts/*.conf)
do
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Found $config"
cp $config .
done;
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Finished lighttpd vhost conf files"
if [ "$(/bin/ls -A)" ]; then
tar -r -f $1/$USER.tar *
fi
cd $HOME
rm $TMP_DIR -rf
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Compressing backup for username $USER"
gzip $1/$USER.tar
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Final backup size: $(du -h "$1/$USER.tar.gz" | awk '{print $1}')"
echo "`date +\%Y-\%m-\%d\ \%H:\%M:\%S` -- Backup complete for username $USER"
fi
fi
fi
Notes:
- This script is to back up a single account.
- This script can be run as root or as a less privileged account.
- It runs as root when called from backup_server (itself running as root). In this case, it takes two arguments: the username of the account to back up, and the full path of the directory where the backup is to be stored.
- When run as a less privileged account, it backs up the account running the script. There is therefore only a single argument: the full path of the directory where the backup is to be stored.
- If the script is called as root, it in fact does only one thing: it calls itself, this time running as the user to be backed up.
- The script backs up everything as an uncompressed tarball, to which it adds each new piece of data. Once this is complete, the tarball is compressed with gzip. The user account running the backup therefore needs enough storage quota to hold a full, uncompressed backup of its own account.
- First, the script backs up the user's home directory.
- If there are any files you wish to exclude from the backup, create a file in the home directory named .backup_exclude. This contains, one entry per line, the files and directories to exclude; it is passed to the tar command via the --exclude-from switch. A sample is given after these notes.
- The home directory will be included in the final backup as a directory named homedir.
- Next, if the user has a crontab, this is exported to a file named crontab and included in the backup.
- Next, any databases associated with this account are dumped in full, in compressed form (.sql.gz), one file per database.
- To identify which databases this account owns, the databases must be named following a convention: each database name must begin with the username of the account, then an underscore. For example, if the username is 'example' (for the domain example.com), then a database named 'wordpress_db' would not be included in the backup for account 'example', but a database named 'example_wordpress' would (because it begins example_).
- The command mysqldump is used to dump each database into the backup. It needs a MySQL user account to run under, with suitable permissions.
- The MySQL account needs at least the "Select" and "Lock Tables" privileges. Depending on what is in the database, it may also need "Triggers", "Show Views", "Events", and (if your views or triggers invoke functions) "Execute". A sample statement to create such a user is given after these notes.
- You can call this MySQL account whatever you wish. The MySQL username and password go in the file .my.cnf in the user's home directory, under a [clientbackup] heading. For example (note the quotation marks around the password):
[clientbackup]
user=mysql_username
password="mysql_password"
- Lastly, the script searches for any lighttpd configuration files that define a virtual host whose document root is a directory named public_html within this home directory, and includes those configuration files in the backup. This assumes your lighttpd virtual host configuration is set up as described in my earlier post.
- As the script runs, it reports what it does and does not find, and then gives the size of the final backup file.
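A sample .backup_exclude might look like this. The paths are purely illustrative; each line is a tar exclude pattern, relative to the home directory as tar reads it (so starting with ./):

./public_html/cache
./tmp
./logs

And here is a sketch of how the MySQL backup user might be created, with the privileges listed above. The username and password are illustrative, and the database pattern assumes the account username is 'example':

mysql -u root -p <<'SQL'
CREATE USER 'backupuser'@'localhost' IDENTIFIED BY 'mysql_password';
GRANT SELECT, LOCK TABLES, TRIGGER, SHOW VIEW, EVENT, EXECUTE
  ON `example\_%`.* TO 'backupuser'@'localhost';
SQL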
/backup/scripts/archive_system_files
#!/bin/bash
# Change working directory, so that we can find the list of systemfiles
cd "`dirname $0`"
# Move yesterday's archive out of the way, so that we don't get stale files hanging around
if [ -d /backup/system_files ] ; then
rm -rf /backup/system_files.old
mv -f /backup/system_files /backup/system_files.old
fi
# Iterate through the entries in system files, and make a copy
for filepath in `cat systemfiles.txt`; do
# Strip any trailing slash, so that dirname gives the right parent below
filepath=`echo $filepath | sed 's~\(.*\)/$~\1~'`
echo $filepath
# Recreate the parent directory structure under /backup/system_files
mkdir -p /backup/system_files$(dirname $filepath)
rsync --delete -adz $filepath /backup/system_files$(dirname $filepath)
done
# Save root's crontab
crontab -l > /backup/system_files/root_cron.txt
Notes:
- This saves a complete copy of the files and directories you ask for.
- It saves these in /backup/system_files, where you'll find a directory tree mirroring that on your server, but containing just the files and directories you asked for.
- You specify which files and directories you want by creating a text file named /backup/scripts/systemfiles.txt. Each line of this file must contain the full path of either a file or a directory. A sample of such a file is included below to give you the idea.
- The account backups (described above) are placed in directories named after the date the backup was taken, and you can choose how many days' backups to retain. The system files usually change less frequently, so you don't gain real-world restore points by keeping more copies. For this reason, there is only one copy of your system files, at /backup/system_files, and it is overwritten each day.
- Having said that, there is the danger that you delete a crucial file on the server and the backup script then runs before you can retrieve it. For this reason, the previous day's copy is kept at /backup/system_files.old.
- If you want to store more versions of your system backup, you would need to copy /backup/system_files to date-named folders or to an offsite location; a minimal sketch follows these notes.
- Having backed up the files specified, the script exports root's crontab, so you have a copy of that as well.
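If you do want dated copies of the system files, a minimal sketch is to add a line like this to backup_server (or a separate cron job), running after archive_system_files has finished. Placing the copy inside the dated directory means the existing trimming logic will age these copies out too:

cp -a /backup/system_files /backup/`date +%Y-%m-%d`/system_files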
Sample systemfiles.txt
/backup/scripts
/etc/backup-account-list
/etc/csf/csf.allow
/etc/csf/csf.blocklists
/etc/csf/csf.conf
/etc/csf/csf.deny
/etc/csf/csf.ignore
/etc/csf/csf.pignore
/etc/lighttpd
/etc/network/interfaces
/etc/munin/plugin-conf.d/munin-node
/etc/munin/plugins
/etc/mysql/my.cnf
/etc/php.d
/usr/local/sbin
/usr/share/munin/plugins
/root
/var/spool/cron
Questions / Comments / Suggestions
If you have any questions about how this works, please leave them in the comments below. I'll try to answer them, but make no promises.
If you have suggestions to improve this setup, please leave a comment. Doubtless there is much that could be improved; this is a system I've devised for my own needs, so it won't be exactly what everyone needs, or perfect for every system. It's posted to get you up and running quickly with backups, giving you a workable system you can adapt into whatever you need. All the same, improvements that apply to most uses, and that don't vastly increase the complexity of the scripts, will be gratefully worked into the scripts themselves for everyone's benefit.