We don't want to configure a backup set on a "central backup server" for each new node. Each new node pushes its own backup data to a backup target.
We want to push data from a private network to the target; a central backup server could not reach some clients without satellite systems.
A set of database backup scripts detects existing locally running database servers and writes a compressed dump file per database schema to a local backup directory.
A transfer script then uses a backup tool to encrypt and transfer the local backups and other local folders to a backup target.
## Features ##
### Database dumps ###
Supported databases for backup and restore:
* MySQL / MariaDB (mysqldump)
* PostgreSQL (pg_dump)
* SQLite (by naming files with their full path in a config)
Limited support:
* CouchDB (using a config with a naming convention)
* LDAP (without restore so far)
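SQLite has no server process the scripts could query, so the backup relies on a config that names the database files. A minimal sketch of such a list (the file name and exact syntax are assumptions, not the shipped defaults):

```
# hypothetical SQLite backup config: one database file with full path per line
/var/www/app1/data/app.sqlite3
/home/service/data/queue.db
```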
### Backup tools ###
DUPLICITY
* Incremental and full backups
* encrypted backups using GPG
* set size of backup volumes
* delete backups older than a given time limit
* several backup targets (we currently use scp:// rsync:// and file://)
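The feature list maps to duplicity's command line roughly as below; the GPG key ID, volume size, source directory and target URL are placeholders for illustration, not values from this project:

```shell
# encrypted incremental/full backup with 200 MB volumes (all values are examples)
duplicity --encrypt-key ABCD1234 --volsize 200 /etc scp://backup@target//backups/myhost/etc

# delete backup chains older than a given time limit
duplicity remove-older-than 6M --force scp://backup@target//backups/myhost/etc
```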
RESTIC
* creates an initial full backup - afterwards every backup is incremental
* encrypts data
* deduplicates files
* delete backups by rules that keep a count of hourly, daily, weekly, monthly and yearly backups
* several backup targets (we currently use sftp:// https:// and file://)
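The keep rules correspond to restic's `forget` command. A sketch with example values (the repository URL and keep counts are placeholders; `--prune` additionally removes data no snapshot references anymore):

```shell
restic -r sftp://backup@target//backups/myhost forget \
  --keep-hourly 24 --keep-daily 7 --keep-weekly 4 \
  --keep-monthly 12 --keep-yearly 3 --prune
```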
### Control simultaneous backups ###
As an optional feature you can limit the number of simultaneously written backups.
This requires additional effort next to the client installation.
## Installation ##
- Uncompress / clone the client to a local directory
- go to the jobs directory and copy the *.job.dist files to *.job
- configure *.job files
- manual test run
- create a cronjob
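The copy step for the job templates can be sketched as below; the demo uses a scratch directory and example file names instead of the client's real `jobs/` directory:

```shell
# demo: turn *.job.dist templates into editable *.job files
jobs=$(mktemp -d)                        # stand-in for <client>/jobs
touch "$jobs/transfer.job.dist" "$jobs/dirs.job.dist"

cd "$jobs"
for f in *.job.dist; do
  cp -n "$f" "${f%.dist}"                # -n: never overwrite an existing *.job
done
```

Afterwards edit the created *.job files, make a manual test run and add a cronjob that starts the backup on your schedule.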
### Uncompress client ###
Put all files into a directory, e.g.
/opt/imlbackup/client
then use the **root** user and follow these steps:
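The unpack step can be sketched like this; the archive name is an assumption, and the demo builds a stand-in archive and extracts it below a scratch prefix instead of `/opt`:

```shell
# build a stand-in archive (replaces the downloaded release file)
work=$(mktemp -d)
mkdir -p "$work/client"
echo demo > "$work/client/README"
tar -C "$work" -czf "$work/imlbackup-client.tgz" client

# the actual step: extract below the install prefix (here a scratch dir, not /opt)
prefix="$work/opt/imlbackup"
mkdir -p "$prefix"
tar -C "$prefix" -xzf "$work/imlbackup-client.tgz"
```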
```
# all subdirs containing "cache/", i.e. any/path/zend-cache/[file]
# exclude = cache/.*
```
**include**
{string}
Multiple entries are allowed. Each defines a starting directory that is backed up recursively.
Do not use a trailing slash "/".
Each include line creates its own backup volume on the backup target: one duplicity backup command is started per include.
An include for the database dumps is not needed - it will be added automatically.
Missing directories on a system are ignored and do NOT throw an error, so you can write a single "general" config and deploy it to all servers.
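A few include lines could look like the sketch below (the paths are examples): no trailing slash, and each line becomes its own backup volume:

```
include = /etc
include = /home
include = /var/www
```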
**exclude**
{string}
Multiple entries are allowed. Each defines a regex that is applied to all include items. Excludes can have a negative performance impact; I suggest defining them only when they are needed to skip large amounts of wasted space.
TODO: advanced stuff ... There is a possibility for directory-based include and exclude rules.
### Setup the target ###
Edit **jobs/transfer.job**. This file handles the transfer of local directories
to a backup target. You find comments in the config.
By default the backup tool "restic" is activated (and recommended). You can switch to duplicity
if you are familiar with it.
`bin = restic`
Create a repository base directory with the wanted protocol. This step has to be done
once for all systems you want to back up. The IML Backup will create a subdirectory
with the hostname for its backups. Set your target in storage:
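A target line in `jobs/transfer.job` could then look like the sketch below; user, host and path are placeholders, and the exact URL syntax depends on the chosen protocol and backup tool:

```
# example restic target via sftp (placeholders):
storage = sftp://backupuser@backup.example.com//data/backup-repos
```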
We roll out our Linux systems automatically. We try not to configure other systems
for backup, monitoring and other general services.
### Monitoring ###
* No agent needed.
* We want to use **local encryption** of all data to back up.