Commit eff9b8b2 authored by Hahn Axel (hahn)'s avatar Hahn Axel (hahn)
Merge branch 'fix-no-cache' into 'master'

Fix no cache

See merge request !72
parents 6caea7f6 298b28e2
# Changelog
## 🗓️ 2022-05-10
🐞 Bugfix: nocache
When using `restic_nocache`, the restic parameter `--no-cache` was applied to the backup command only. Now it is passed to all restic commands.
## 📢 Info
The changelog was started on 2022-05-10.
Legend:
✅ Added Feature
✴️ Update
🐞 Bugfix
🛡 Security feature
# IML BACKUP #
Backup scripts using restic (or duplicity).
Runs on Linux: CentOS, Debian, Manjaro, Ubuntu.
* Free software. GNU GPL 3.0
* Source: <https://git-repo.iml.unibe.ch/iml-open-source/iml-backup/>
* Restic: <https://restic.net/>
* Duplicity: <http://duplicity.nongnu.org/>
## Why ##
We don't want to configure a backup set on a "central backup server" for each new node. Each new node pushes its own backup data to a backup target.
We want to push data from a private network to a target; a central backup server would not reach some clients.
A set of database backup scripts detects existing locally running database servers and puts a compressed dump file per database scheme into a local backup directory.
Then a transfer script uses a tool to encrypt and transfer local backups and other local folders to a backup target.
## Features ##
### Database dumps ###
Supported databases for backup and restore:
* MySQL / MariaDB (mysqldump)
* PostgreSQL (pg_dump)
* sqlite (by naming files with full path in a config)
Limited support:
* CouchDB (using a config with a naming convention)
* LDAP (without restore so far)
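The "one compressed dump file per database scheme" idea can be sketched roughly like this. This is a minimal sketch, not the real scripts: `dump_scheme`, the scheme list and the temporary directory are assumptions, and the actual `mysqldump` call is stubbed.

```shell
#!/bin/sh
# Rough sketch of "one compressed dump file per database scheme".
# dump_scheme and the scheme list are hypothetical; the real scripts
# detect running services and use a configured backup directory.
backupdir=$(mktemp -d)

dump_scheme() {
  mkdir -p "$backupdir/mysql"
  # the real script would run something like: mysqldump "$1" | gzip > ...
  echo "-- dump of $1" | gzip > "$backupdir/mysql/$1__$(date +%Y%m%d-%H%M).sql.gz"
}

for scheme in mydatabase otherdb; do
  dump_scheme "$scheme"
done

# show the written files
ls "$backupdir/mysql"
```

Each scheme ends up as its own timestamped `.sql.gz` file below a per-service directory, which matches the naming shown later in this document.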
### backup tools ###
DUPLICITY
* Incremental and full backups
* encrypted backups using GPG
* set size of backup volumes
* delete old backups by a given time limit
* several backup targets (we currently use scp:// rsync:// and file://)
RESTIC
* creates an initial full backup - afterwards all backups are incremental
* encrypts data
* deduplicates files
* deletes backups by rules to keep a count of hourly, daily, weekly, monthly, yearly backups
* several backup targets (we currently use sftp://, https:// and file://)
### control simultaneous backups ###
As an optional feature you can limit the number of simultaneously written backups.
This requires additional effort next to the client installation.
## Installation ##
- Uncompress / clone the client to a local directory
- go to jobs directory to copy the *.job.dist files to *.job
- configure *.job files
- manual test run
- create a cronjob
### Uncompress client ###
Put all files into a directory, e.g.
/opt/imlbackup/client
then use the **root** user and follow these steps:
```
# Create the directory level above
mkdir -p /opt/imlbackup/
# download
cd /opt/imlbackup/
wget https://git-repo.iml.unibe.ch/iml-open-source/iml-backup/-/archive/master/iml-backup-master.tar.gz
# extract
tar -xzf iml-backup-master.tar.gz
mv iml-backup-master client
# remove downloaded file
rm -f iml-backup-master.tar.gz
# to set pwd to /opt/imlbackup/client:
cd client
```
### database backup: set local backup target ###
Create a **jobs/dirs.job** (copy the delivered *.dist file):
```
cd jobs
cp backup.job.dist backup.job
```
There are 2 defaults:
```
dir-localdumps = /var/iml-backup
keep-days = 7
```
**dir-localdumps**
{string}
The target directory for local dumps. It is used by
* the database dump scripts
* the transfer script to store the client backups
* the restore script
Below it, a directory per service will be generated; inside that, the database dumps are stored with scheme name and timestamp, e.g.
```
/var/iml-backup/mysql/mydatabase__20190827-2300.sql.gz
```
**keep-days**
{integer}
The number of days to keep dumps locally.
Remark:
To restore a database, its dump must be located in this directory. To restore an older database you need to restore its dump from duplicity first.
If you have a local MySQL or PostgreSQL daemon you can test it:
```
# dump all databases
sudo ./localdump.sh
# show written files
find /var/iml-backup
```
### Define local directories to backup ###
Edit **jobs/dirs.job** again.
There are a few include definitions:
```
# ----------------------------------------------------------------------
# directory list to transfer
# without ending "/"
# missing directories on a system will be ignored
# ----------------------------------------------------------------------
include = /etc
include = /var/log
include = /home
```
... and excludes:
```
# ----------------------------------------------------------------------
# excludes
# see duplicity ... added as -exclude-regex parameter
# ----------------------------------------------------------------------
# exclude = .*\.(swp|tmp)
# mac file
# exclude = \.DS_Store
# all subdirs containing "cache/", i.e. any/path/zend-cache/[file]
# exclude = cache/.*
```
**include**
{string}
Multiple entries are allowed. Each defines a starting directory that is backed up recursively.
Do not use a trailing slash "/".
Each include line creates its own backup volume on the backup target: one duplicity backup command will be started per include.
An include for the database dumps is not needed - it will be added automatically.
Missing directories on a system are ignored and do NOT throw an error, so you can write a single "general" config and deploy it to all servers.
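The per-include behavior described above can be sketched like this. `do_backup` is a placeholder, not a function of IML Backup, and the demo directories are created only for illustration:

```shell
#!/bin/sh
# Sketch: one backup run per include line; missing directories are
# skipped silently. Directories are created here only for the demo.
base=$(mktemp -d)
mkdir -p "$base/etc" "$base/var/log"

includes="$base/etc $base/var/log $base/does-not-exist"

do_backup() { echo "backing up $1"; }   # placeholder for the real transfer

for dir in $includes; do
  [ -d "$dir" ] || continue             # missing dirs: ignored, no error
  do_backup "$dir"
done
```

Only the two existing directories are backed up; the missing one produces neither output nor an error.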
**exclude**
{string}
Multiple entries are allowed. Each defines a regex that is applied to all include items. This can have unwanted side effects, so I suggest defining an exclude only when it is really needed, e.g. because of huge wasted space.
TODO: advanced stuff ... There is a possibility for directory based include and exclude rules.
### Setup the target ###
Edit **jobs/transfer.job**. This file handles the transfer of local directories
to a backup target. You find comments in the config.
By default the backup tool "restic" is activated (and recommended). You can switch to duplicity
if you feel familiar with it.
`bin = restic`
Create a repository base directory with the wanted protocol. This step has to be done
once for all systems you want to backup. The IML Backup will create a subdirectory
with the hostname for its backups. Set your target in storage:
`storage = sftp://backup@storage.example.com//netshare/backup`
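Assuming the layout described above (a hostname subdirectory below the storage base; the exact derivation is an assumption, not the script's code), the per-host repository URL would look like this:

```shell
#!/bin/sh
# Sketch: derive the per-host repository below the configured storage base.
# The concatenation is an assumption about the layout, not the real code.
storage="sftp://backup@storage.example.com//netshare/backup"
repo="${storage}/$(uname -n)"
echo "$repo"
# The repository then has to be initialized once per host, e.g.:
#   restic -r "$repo" init
```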
## Production usage ##
Edit **jobs/transfer.job**.
Set a password to encrypt local data. Each system should have its own password.
Use a long password, e.g. 128 characters.
Save your password list - if you lose it you cannot restore data anymore.
`passphrase = EnterYourSecretHere`
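One way to generate such a long random passphrase (a suggestion, not part of IML Backup):

```shell
#!/bin/sh
# Generate a 128-character alphanumeric passphrase from /dev/urandom.
passphrase=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 128)
echo "passphrase = $passphrase"
```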
Change the restore path if needed. A restore does not overwrite the current files.
`restore-path = /restore`
Then have a look at the section with variables that have the prefix of bin = ... (restic_... or duplicity_...).
The index files are stored in $HOME. Because the root partition could be too small on
systems with many files (e.g. a file server) you can put the index somewhere else:
`*_cachedir = ...`
With this loglevel you get a list of new or changed files in the log:
`restic_verbose = 2`
You can set a tag that is applied to all backups made by the script.
`restic_tag = imlbackup`
The mount of backup sets is a restic feature. After mounting a backup there,
you can browse through backups and timestamps via the filesystem.
`restic_mountpoint = /mnt/restore`
Define how many backups you want to keep. After the backup of a directory
the cleanup strategy will be applied.
```
# prune
restic_keep-hourly = 100
restic_keep-daily = 90
restic_keep-weekly = 12
restic_keep-monthly = 12
restic_keep-yearly = 10
```
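The keep-rules above roughly correspond to a restic `forget` call like the following. This is a sketch: the script assembles the parameters itself, so here the command is only printed, not executed:

```shell
#!/bin/sh
# Build the prune arguments from the keep-rules above and print the
# resulting restic command (printed only, not executed here).
keep_args="--keep-hourly 100 --keep-daily 90 --keep-weekly 12 --keep-monthly 12 --keep-yearly 10"
echo "restic forget --prune $keep_args"
```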
* Duplicity: <https://duplicity.gitlab.io/duplicity-web/>
### setup backup times ###
# Why #
* **No central backup server**: We don't want to configure a backup set on a "central backup server" for each new node. Each new node pushes its own backup data to a given backup target.
* We want to **push data from a private network** to a target; a central backup server would not reach some clients without satellite systems.
* No agent needed.
* **Automatic backup of databases**: A set of database backup scripts detects existing locally running database services and puts a compressed dump file per database scheme into a local backup directory.
* We want to use a **local encryption** of all data to backup.
### Create a cronjob ###
We roll out our Linux systems automatically. We try not to configure third-party systems
for backup, monitoring and other general services.
### Monitoring ###
### Restore files ###
---
### Restore databases ###
See the [docs](./docs/) folder for more details.
```diff
@@ -12,6 +12,7 @@
 # 2022-02-09 ah v0.3 show diff to last backup; update pruning
 # 2022-02-09 ah v0.3 update pruning; more keep-params
 # 2022-03-07 ah v0.4 add verify in post task
+# 2022-05-10 ah v0.5 fix handling with nocache flag (use globally as default param - not in backup only)
 # ================================================================================
 # --------------------------------------------------------------------------------
@@ -58,6 +59,12 @@
     # verbose to see more details
     echo -n --verbose=$( _j_getvar ${STORAGEFILE} "${CFGPREFIX}verbose" )
+
+    # no cache ... to create no local cache dirs, what saves space but backup + verify is much slower
+    _nocacheFlag=$( _j_getvar ${STORAGEFILE} "${CFGPREFIX}nocache" )
+    if [ "$_nocacheFlag" != "" ] && [ "$_nocacheFlag" != "0" ] && [ "$_nocacheFlag" != "false" ]; then
+        echo -n "--no-cache "
+    fi
 }
 # return a string with backup parameters that will be added to defaults
 function t_getParamBackup(){
@@ -70,11 +77,6 @@
         echo -n "--tag $_tag "
     fi
-    # no cache ... to create smaller local cache dirs, but backup 3 times slower
-    _nocacheFlag=$( _j_getvar ${STORAGEFILE} "${CFGPREFIX}nocache" )
-    if [ "$_nocacheFlag" != "" ] && [ "$_nocacheFlag" != "0" ] && [ "$_nocacheFlag" != "false" ]; then
-        echo -n "--no-cache "
-    fi
 }
 # return a cli parameter for a single exlude directory
@@ -185,7 +187,7 @@
     echo "--- VERIFY"
     # param --read-data takes a long time. Maybe use an extra job with it.
     # _mycmd="time restic check ${ARGS_DEFAULT} --with-cache --read-data"
-    _mycmd="restic check ${ARGS_DEFAULT} --with-cache"
+    _mycmd="restic check ${ARGS_DEFAULT}"
     echo $_mycmd
     sleep 3
     color cmd
```