    IML BACKUP

    Backup scripts using duplicity. Runs on Linux.

    Free software. GNU GPL 3.0.

    Source: https://git-repo.iml.unibe.ch/iml-open-source/iml-backup/
    Duplicity: http://duplicity.nongnu.org/

    Why

    We don't want to configure a backup set on a "central backup server" for each new node. Each new node pushes its own backup data to a backup target.

    We want to push data from a private network to a target; a central backup server would not reach some clients.

    A set of database backup scripts detects existing locally running database servers and writes a compressed dump file per database schema to a local backup directory.

    Then a transfer script uses duplicity to encrypt and transfer local backups and other local folders to a backup target.

    Features

    Supported Databases for backup and restore

    • MySQL / MariaDB (mysqldump)
    • PostgreSQL (pg_dump)
    • SQLite (by naming files with their full path in a config)

    Limited support:

    • CouchDB (using a config with a naming convention)
    • LDAP (without restore so far)
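
    Conceptually, the dump scripts run the standard client tools per schema and gzip the result into the local backup directory; a sketch only, not the project code (the per-service subdirectory for PostgreSQL is an assumption):

    # MySQL / MariaDB
    mysqldump --single-transaction mydatabase \
        | gzip > /var/iml-backup/mysql/mydatabase__$(date +%Y%m%d-%H%M).sql.gz

    # PostgreSQL (subdirectory name "pgsql" is an assumption)
    pg_dump mydatabase \
        | gzip > /var/iml-backup/pgsql/mydatabase__$(date +%Y%m%d-%H%M).sql.gz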

    Duplicity allows

    • Incremental and full backups
    • encrypted backups using GPG
    • set size of backup volumes
    • delete old backups by a given time limit
    • several backup targets (we currently use scp:// rsync:// and file://)
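
    As a sketch of how these options fit together (illustrative values only; the real transfer script assembles its options from the job files), one encrypted, size-limited backup run with cleanup could look like:

    # full backup every month, incrementals in between,
    # GPG-encrypted, 200 MB volumes
    duplicity --full-if-older-than 1M --encrypt-key MY_GPG_KEY_ID --volsize 200 \
        /etc scp://backupuser@target.example.com//backup/myhost/etc

    # remove backup chains older than 2 months from the target
    duplicity remove-older-than 2M --force \
        scp://backupuser@target.example.com//backup/myhost/etc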

    Installation

    • Uncompress / clone the client to a local directory
    • go to the jobs directory and copy the *.job.dist files to *.job
    • configure the *.job files
    • do a manual test run
    • create a cronjob

    Uncompress client

    Put all files into a directory, e.g.

    /opt/imlbackup/client

    then use the root user and follow these steps:

    # Create the directory level above
    mkdir -p /opt/imlbackup/
    
    # download
    cd /opt/imlbackup/
    wget https://git-repo.iml.unibe.ch/iml-open-source/iml-backup/-/archive/master/iml-backup-master.tar.gz
    
    # extract
    tar -xzf iml-backup-master.tar.gz
    mv iml-backup-master client
    
    # remove downloaded file
    rm -f iml-backup-master.tar.gz
    
    # to set pwd to /opt/imlbackup/client:
    cd client

    database backup: set local backup target

    Create jobs/dirs.job (copy the delivered *.dist file):

    cd jobs
    cp dirs.job.dist dirs.job
    cd ..

    There are 2 defaults:

    dir-localdumps = /var/iml-backup
    keep-days = 7

    dir-localdumps

    {string}

    The target directory for local dumps. It is used by

    • the database dump scripts
    • the transfer script to store the client backups
    • the restore script

    Below it, a directory per service will be generated; inside that, the database dumps are stored with schema name and timestamp, e.g.

    /var/iml-backup/mysql/mydatabase__20190827-2300.sql.gz

    keep-days

    {integer}

    The number of days to keep dumps locally.
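
    With keep-days = 7, the local cleanup behaves roughly like this sketch (not the project code):

    # delete local dump files older than 7 days
    find /var/iml-backup -type f -mtime +7 -delete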

    Remark: To restore a database, its dump must be located in this directory. To restore an older database, you need to restore its dump from duplicity first.

    If a local MySQL or PostgreSQL daemon is running, you can test it by starting

    # dump all databases
    ./localdump.sh
    
    # show written files
    find /var/iml-backup

    Define local directories to backup

    Edit jobs/dirs.job again.

    There are a few include definitions:

    # ----------------------------------------------------------------------
    # directory list to transfer
    # without ending "/"
    # missing directories on a system will be ignored
    # ----------------------------------------------------------------------
    include = /etc
    include = /var/log
    include = /home

    ... and excludes

    # ----------------------------------------------------------------------
    # excludes
    # see duplicity ... added as --exclude-regexp parameter
    # ----------------------------------------------------------------------
    
    # exclude = .*\.(swp|tmp)
    
    # mac file
    # exclude = \.DS_Store
    
    # all subdirs containing "cache/", i.e. any/path/zend-cache/[file]
    # exclude = cache/.*

    include

    {string}

    Multiple entries are allowed. Each defines a starting directory that is backed up recursively.

    Do not use a trailing slash "/".

    Each include line creates its own backup volume on the backup target: one duplicity backup command is started per include.

    An include for the database dumps is not needed - it will be added automatically.

    Missing directories on a system are ignored and do NOT throw an error, so you can write a single "general" config and deploy it to all servers.

    exclude

    {string}

    Multiple entries are allowed. Each defines a regex that is applied to all include items; e.g. exclude = cache/.* skips any path containing "cache/" in every include. This can have a negative impact, so I suggest defining an exclude only if it is needed because of hugely wasted space.

    TODO: advanced stuff ... There is a possibility for directory based include and exclude rules.

    Set up the target

    including a test transfer to the storage
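
    The backup target itself is configured in the job files (see the *.job.dist templates; the exact key names are not covered in this README). Independent of that, here is a minimal sketch to verify that duplicity can write to a target at all, using a throwaway local file:// URL:

    # manual test: back up /etc to a local file:// target
    PASSPHRASE=test1234 duplicity /etc file:///var/tmp/duplicity-test/etc

    # list the files contained in the latest backup
    PASSPHRASE=test1234 duplicity list-current-files file:///var/tmp/duplicity-test/etc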

    Production usage

    set up backup times

    Create a cronjob
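
    A minimal sketch for a cronjob; localdump.sh is named in this README, while transfer.sh is an assumption, so check the script names shipped in the client directory:

    # /etc/cron.d/iml-backup (sketch)
    # dump local databases at 23:00, transfer dumps and include dirs at 23:30
    0 23 * * * root /opt/imlbackup/client/localdump.sh >>/var/log/iml-backup.log 2>&1
    30 23 * * * root /opt/imlbackup/client/transfer.sh >>/var/log/iml-backup.log 2>&1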

    Monitoring

    Restore files
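
    With plain duplicity, restoring a single file from the target looks like this sketch (URL and paths are illustrative):

    # restore etc/hosts from the newest backup into /tmp/hosts.restored
    duplicity restore --file-to-restore hosts \
        scp://backupuser@target.example.com//backup/myhost/etc /tmp/hosts.restored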

    Restore databases
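
    The dump to restore must be below dir-localdumps; fetch it from duplicity first if it is older (see the remark above). As a generic sketch of the manual way for MySQL (the project's restore script is the intended path):

    # decompress and import a dump written by localdump.sh
    gunzip -c /var/iml-backup/mysql/mydatabase__20190827-2300.sql.gz | mysql mydatabase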