## About this manual

This manual can be used as a guide to lead you through the features of Duplicati, but it can also be used as a reference guide.

Some parts are marked with an icon. Very important information is marked with a small triangle containing an exclamation mark. Example:

*****

>  This is very important information.

*****

Additional information about a subject is marked with a small circle containing an i character. Example:

*****

>  This is some additional information.

*****

This manual tries to cover all types of installations, but focuses on Windows installations. Most procedures are identical for installations on different operating systems, because many operations are controlled by a built-in web interface.

If a procedure is different for an operating system other than Windows, a small section will be added to explain that procedure for Linux and/or OS X. OS-specific procedures are marked with these symbols:

*****

 This is an example of a Windows-specific procedure.

*****

*****

 This is an example of a Linux-specific procedure.

*****

*****

 This is an example of an OS X-specific procedure.

*****

## Overview

Duplicati is a backup client that securely stores encrypted, incremental, compressed backups on local storage, cloud storage services and remote file servers. The Duplicati project was inspired by Duplicity and had similar functionality until 2008. In that year the storage model was redesigned completely and the program was rebuilt from scratch. This manual describes Duplicati 2, the version based on the new storage model.

Duplicati can be installed on a variety of operating systems. The most common platforms are Windows, Linux and OS X.

Duplicati is not:

* **A file synchronization program.**

Duplicati is a block-based backup solution. Files are split up into small chunks of data, (optionally) encrypted and compressed before they are sent to the backup destination. The block-based backup engine enables great features like versioning and deduplication, but disallows uploading of plain files. If you need to be able to access your files directly from the target location, you will need file synchronization software, not block-based backup software like Duplicati.

* **A hard disk imaging program.**

Duplicati can make a backup of selected files and folders. Hard disk imaging software can create an image file of a complete volume or hard disk, including the boot sector. If you want to be able to restore a complete volume or hard disk, including the boot sector and operating system, you need a hard disk imaging solution.

* **Software that can make a backup of files that are stored in the cloud.**

Duplicati needs to be installed on the host where the backup source files are stored. Optionally, files and folders at locations in the local network can be selected for backup using UNC paths. Duplicati cannot log into cloud services over the internet to make a backup of remotely stored files.

## Features

Duplicati has a lot of advanced features that can otherwise only be found in high-end enterprise backup solutions. Duplicati offers these features for free:

* **Strong encryption**

Duplicati uses strong AES-256 encryption to protect your backups. It is designed following the TNO principle: Trust No One. For instance, all data is encrypted locally before it is transferred to the remote storage system. The password/key to your backup never leaves your computer. Instead of AES-256 you can use a local GPG instance to encrypt your backup.

* **Incremental backups**

Duplicati performs a full backup initially. Afterwards, Duplicati updates the initial backup by adding the changed data only. That means that if only tiny parts of a huge file have changed, only those tiny parts are added to the backup. This saves time and space and the backup size usually grows slowly.

* **Compression**

All backup data is compressed before it is encrypted and uploaded. Duplicati supports Zip/Deflate or 7z/LZMA2 compression. For performance reasons, Duplicati detects files that are already compressed and adds them as-is to the Zip or 7z archives. For example, media files such as mp3, jpeg or mkv files already contain well-compressed data.

* **Online backup verification**

Duplicati is built to work with simple storage systems. Many providers offer compatible storage, often at low prices. As a downside of this, some storage systems might store corrupt data. Most people only notice that when they need their backup to restore files they have lost and the restore fails. To avoid this, Duplicati regularly downloads a random set of backup files, restores their content and checks their integrity. That way you can detect problems with your online storage before you run into trouble.

* **Deduplication**

Duplicati analyzes the content of files and stores data blocks. Because of that, Duplicati will find duplicate files and similar content and store them only once in the backup. As Duplicati analyzes the content of files, it handles moved or renamed files and folders very well: as the content does not change, the next backup will be tiny.

* **Fail-safe design**

Duplicati is designed to handle various kinds of issues: network hiccups, interrupted backups, unavailable or corrupt storage systems. Even if a backup run was interrupted, it can be continued at a later time. Duplicati will then back up everything that was missed in the last backup. And even if remote files get corrupted, Duplicati can try to repair them if local data is still present, or restore as much as possible.

* **Web interface**

Duplicati comes with a web interface. It can be used to configure and run backups on your local machine, but it also allows you to configure and run backups remotely on headless machines like a Network Attached Storage (NAS). Just install Duplicati on your NAS and configure and run it through its web interface.

* **Command Line interface**

We did not forget about system admins! Duplicati offers all functions and features via Duplicati.CommandLine.exe. This allows you to add backup features to your scripts or run backups in a terminal window.

* **Metadata**

Duplicati also stores the metadata of files in the backup. When backup files are restored, the timestamps (last modified, created) will also be restored, as well as the system's access permissions. To avoid inaccessible files, e.g. when the system's users have changed, restoring access permissions is optional.

* **Scheduler**

The built-in scheduler runs your backups automatically at the times and intervals you define. One backup every day, at the weekend, every hour or even at 3pm every 3rd Monday is possible. And even if a scheduled date is missed, Duplicati will run the job as soon as possible.

* **Auto-updater**

Duplicati comes with a built-in updater that downloads and installs the latest available version for you. That way you can easily keep Duplicati up-to-date.

* **Backup open files**

When a file is in use by a process or application, it usually cannot be read by another process, making it impossible to back up that file. On Windows systems, Duplicati can use Volume Shadow Copy Services (VSS). For Linux-based devices Duplicati can use Logical Volume Management (LVM). VSS and LVM offer the possibility to create an application-consistent snapshot of a volume, which is used by Duplicati to make a reliable backup of these open files.

## License

Duplicati is licensed under LGPL and available for Windows and Linux. The software is free to use, even commercially. More information about the LGPL licensing model can be found in [APPENDIX G License Agreement](#_APPENDIX_G_License).

## Supported backends

Duplicati can make backups to a large number of targets. For local backups, any device can be used that is attached locally or reachable through a UNC path, like:

* External USB hard disk drive
* USB thumb drive
* Shared folder on another computer in the same network
* Network-attached Storage (NAS)

Backups to targets that use the following standard network protocols are supported:

* FTP
* FTP (Alternative)
* OpenStack Object Storage / Swift
* S3 Compatible
* SFTP (SSH)
* WebDAV

The following Cloud Storage Providers are supported natively by Duplicati:

* Amazon Cloud Drive
* Amazon S3
* Azure blob
* B2 Cloud Storage
* Box.com
* Dropbox
* Google Cloud Storage
* Google Drive
* HubiC
* Jottacloud
* Mega.nz
* Microsoft OneDrive
* Microsoft OneDrive for Business
* Microsoft SharePoint
* OpenStack Simple Storage
* Rackspace CloudFiles
* Sia Decentralized Cloud

Other supported targets:

* Tahoe-LAFS

## System requirements

Duplicati must be installed on a device with a supported operating system. Currently, these operating systems are supported:

* Windows Vista and higher (both 32 and 64 bit versions)
* Windows Server 2008 and higher (both 32 and 64 bit versions)
* Linux
* Apple Mac OSX

Because many devices run on an operating system based on Linux, Duplicati can be installed on some devices that are not personal computers, like a NAS or Raspberry Pi.

Windows-based devices should have .NET Framework 3.5 or higher installed. For Linux and OSX, a recent version of Mono is a requirement.

Duplicati can make backups of files that are opened by other processes. On Windows, a snapshot of the file system is created using Volume Shadow Copy Services (VSS); on Linux systems, LVM is used. To be able to create a VSS snapshot, Duplicati needs the C++ run-time components for Visual Studio 2015 to be installed.

Duplicati is resource-friendly by design. There are no specific requirements for internal memory or processor performance.

Duplicati needs about 40 MB of free hard disk space for installation. However, additional space is required for execution:

* Duplicati creates a small database that contains program settings and all backup configurations.
* For each backup configuration, a local database is created, enabling Duplicati to retrieve information about files at the remote location without actually uploading or downloading files. This database makes Duplicati a lot faster, because a query is done against the local database instead of downloading remote files.

The size of these local databases varies, depending on the number of source files selected for backup, the total amount of data and the chosen block size. In most situations, a local database consumes 10 MB to a few GB of local storage capacity.

* Duplicati creates temporary files while doing backup or restore operations. The amount of storage needed depends on the chosen upload volume (DBLOCK) size. The default size is 50 MB, but this value can be modified for each backup job. A small number (1 to 5) of temporary DBLOCK files are stored locally before they are uploaded to the backup target. After a successful upload, these temporary files will be deleted automatically.

## The backup process explained

Traditional backup software makes a full backup at regular intervals (for example once a week). All other backups are incremental. These incremental backups send all new and changed files to the backup target. The drawback is that if a folder needs to be restored from the most recent backup, the latest full backup has to be restored first, followed by all incremental backups that were made after the latest full backup. This is a cumbersome and error-prone procedure.

Making a full backup every day results in reliable backups, but is very time consuming and resource-unfriendly. All source data has to be sent to and stored at the backup target every time the backup task is executed.

Duplicati combines the best of both worlds. When a backup is made, only changed parts of files are sent to the destination. From this point of view, Duplicati behaves like it is making an incremental backup. When one or more files (or all files and folders) need to be restored from the most recent backup, this backup (and all other ones) looks like a full backup: all data can be restored with a single operation, without replaying a set of incremental backups.

Duplicati 2.0 introduced a new, revolutionary storage format for backups. The storage format is block-based. This means it does not store the files, but chops all files into tiny blocks. Here is a simple explanation.

Imagine your local files consist of many small bricks in different shapes and colors. Duplicati takes your files, breaks them down into single bricks and stores these bricks in small bags. Whenever a bag is full, it is stored in a huge box (which is your online storage). When something changes, Duplicati puts new bricks into a new bag and puts it into the box. When a local file needs to be restored, Duplicati knows what bricks it needs and in which bags these are. So, it grabs the required bags, takes out the bricks and rebuilds your file. If the file is still on your computer (in a version you do not want anymore), Duplicati can just replace the wrong bricks, thus updating the existing file.

From time to time, Duplicati will notice that there are a few bags that contain bricks it does not need anymore. It grabs those bags and sorts the bricks: it throws away the bricks that are not needed anymore, puts the required bricks into new bags and puts them back into the box. Duplicati will also notice if there is a large number of bags that only contain a very small number of bricks. Duplicati grabs all those bags, takes out the bricks, puts them into a small number of new bags and puts these into the box.

And to repeat the good news: there is no need to upload full backups regularly. This makes Duplicati a perfect choice for incremental backups of large media libraries.

## Duplicati components

Before installing Duplicati, you should know how the different components relate to each other and how they can be configured. This makes it easier to decide how the software can be installed in the way that matches your needs. The main components are:

* **The Server**

When the Duplicati server is started, Duplicati can perform tasks in the background, like making backups, performing restore operations and performing maintenance tasks.

The server part has a built-in scheduler to start backup jobs automatically at regular intervals.

For configuring new or modifying existing backup jobs, changing settings and monitoring running backups, a web server is included in the server component.

When the server component starts, the web service listens on the loopback interface, making it reachable from the local host only. The server tries to start listening on TCP port 8200. If this port is unavailable (because of another running Duplicati server instance or another process that uses port 8200), port 8300 will be tried, increasing until an unused port is found. Port and interface can be customized by specifying some command line parameters.

* **The Command Line tools**

Duplicati can make backups without loading the server component, using the command line tools. To schedule backups without using the server component, almost any task scheduler can be used, for example the Windows Task Scheduler (Windows) or Cron (Linux).

Other command line tools can help with restore operations, recovering files from corrupted backups, installing the server component as a service, analyzing communication with backends or updating the software.

* **The Tray Icon**

When started, the Duplicati Tray Icon tool creates a small icon in the System Tray for easy access to the Duplicati Web Interface. The server component is included in the Tray Icon tool. After a default installation, the Tray Icon tool will be automatically started after a user logs on, making it unnecessary to configure the server component in an everyday use case.

* **The Service**

Basically this is the same as the Duplicati Server component, but running as a Windows service. If Duplicati is registered as a Windows service, a small agent starts the server component and pings the server to verify it is running. If the server component is unreachable, the agent will restart it.

When a backup is made, Duplicati has the same permissions to the file system as the user context it is running in. If Duplicati is started with the Tray Icon tool, or if a user starts the Server component, or the command line tools are used, a backup can be made of all files that the user has read access to. Personal documents of other users that log on to the same computer probably cannot be backed up by this instance.

However, other users can run their own Duplicati server instance (using different port numbers for the web server), which will give them access to their own personal files. The settings and backup configurations are stored at separate locations, so all users will see their own Duplicati environment.

If you want to be able to access all files on your computer and back up files from multiple users with the same Duplicati instance, the best option is to register Duplicati as a Windows service. Services are started by default with the Local System account. This account has NTFS permissions to the complete file system. Note that this will give the backup operator access to the files of all users that log on to the computer by using the source file picker in the Duplicati Web interface. It is highly recommended to secure the web interface with a strong password in this case.

Registering Duplicati as a service could also be a solution if you want to make use of Volume Shadow Copy Services (VSS), but your user account does not have administrative privileges.

*****

>  Running Duplicati using the Tray Icon causes the Duplicati Server component to start after the user logs on. If you want Duplicati to be able to run backups after a restart, before a user logs on, consider registering Duplicati as a service.

*****

## Prerequisites

*****

 Duplicati depends on other software. For Windows, Microsoft .NET Framework 4.5 or higher needs to be installed. Linux and Mac OS X require Mono to be installed.

If your system has no or an outdated version of the .NET Framework, download the latest version from [https://www.microsoft.com/net/download/framework](https://www.microsoft.com/net/download/framework) and install it.

To be able to back up files that are in use by another process, Duplicati uses AlphaVSS. AlphaVSS needs the Visual C++ run-time components for Visual Studio 2015. Download and install the binaries from [https://www.microsoft.com/en-us/download/details.aspx?id=48145](https://www.microsoft.com/en-us/download/details.aspx?id=48145).

<img align="right" src="/icon_windows_end.png"></img></br>

*****

*****

 Follow this procedure to install Mono on your Linux-based system.

**Ubuntu 16.04:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/ubuntu xenial main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Ubuntu 14.04:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/ubuntu trusty main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Ubuntu 12.04:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/ubuntu precise main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Debian 9:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian stretch main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Debian 8:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian jessie main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Debian 7:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian wheezy main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Raspbian 9:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian raspbianstretch main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**Raspbian 8:**

```nohighlight
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF
echo "deb http://download.mono-project.com/repo/debian raspbianjessie main" | sudo tee /etc/apt/sources.list.d/mono-official.list
sudo apt-get update
sudo apt-get install mono-devel
```

**CentOS 7:**

```nohighlight
yum install yum-utils
rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF"
yum-config-manager --add-repo http://download.mono-project.com/repo/centos7/
yum install mono-devel
```

**CentOS 6:**

```nohighlight
yum install yum-utils
rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x3FA7E0328081BFF6A14DA29AA6A19B38D3D831EF"
yum-config-manager --add-repo http://download.mono-project.com/repo/centos6/
yum install mono-devel
```

<img align="right" src="/icon_linux_end.png"></img></br>

*****

*****

 Download the latest Mono version from [http://www.mono-project.com/download/](http://www.mono-project.com/download/). Run the .pkg file and accept the terms of the license.

<img align="right" src="/icon_apple_end.png"></img></br>

*****

## Downloading Duplicati

Duplicati can be downloaded from [https://www.duplicati.com/download](https://www.duplicati.com/download). Choose the version that matches your operating system. The Zip file version contains the binaries without an OS-specific installer. Use the Zip file version for a portable installation or if you want to use the Command Line tools only. This version can be used for all supported operating systems.

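As an illustration only (not part of the official installation steps), the portable Zip version can typically be unpacked and started with Mono on Linux or OS X. The archive name and the extraction path below are placeholders, and the server component is used as an example entry point:

```nohighlight
unzip duplicati.zip -d ~/duplicati
cd ~/duplicati
mono Duplicati.Server.exe
```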
## Installing Duplicati on Windows

*****

 The installation procedure for Windows systems is pretty straightforward. Download the Windows MSI installer package from [https://www.duplicati.com/download](https://www.duplicati.com/download). If you have a 32 bit version of Windows, download the Windows 32 bit installer package.

The first step is the Welcome screen. Click Next to proceed.

Read the license agreement. If you agree, select _I accept the terms in the License Agreement_ and click Next.

Select which components you want to install. Pay special attention to _Launch Duplicati at startup_. If it is selected, the Duplicati Tray Icon tool will be started automatically after logging on to Windows. The Duplicati Server component is included in the Tray Icon tool. If you want to start the server component another way (i.e. by registering the server component as a Windows service), you have 2 options:

* Disable _Launch Duplicati at startup_.

The Tray Icon tool will be installed, but not automatically started. You can start it manually by executing Duplicati.GUI.TrayIcon.exe.

* Keep _Launch Duplicati at startup_ enabled, but modify the properties of the shortcut after the installation wizard has been completed.

This will ease access to the Duplicati Web interface, but if you don't manually deactivate the internal server component, you will end up with multiple Duplicati instances, which is probably undesirable.

Click Next to proceed.

Click the Install button to start the installation.

Wait for the installation procedure to complete.

If you don't want to start the Duplicati Tray Icon tool now (i.e. if you want to register it as a service), deselect _Launch Duplicati now_.

Click Finish to complete the installation wizard.

The first time the server component starts, Windows Firewall (or another third party firewall application) may show an alert. Allow Duplicati to communicate over the network.

## Configuring the Duplicati Tray Icon in Windows

If you have chosen _Launch Duplicati at startup_, but don't want to use the internal server component of the Tray Icon tool, you have to edit the properties of the Duplicati shortcut in the Windows Startup folder. You can find this shortcut in `C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp`.

Browse to this folder and edit the properties of the Duplicati 2 shortcut. Add `--no-hosted-server` to the Target field.

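For illustration, the resulting Target field could look like the line below. The installation path is an assumption (the default installation folder); adjust it to your own installation:

```nohighlight
"C:\Program Files\Duplicati 2\Duplicati.GUI.TrayIcon.exe" --no-hosted-server
```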

A Duplicati icon will be shown in the system tray after logging in to Windows, but the Duplicati server component needs to be started separately.

*****

>  If you disabled Launch Duplicati at startup in the installation wizard and want to start the Duplicati Server component at boot time, you have to register Duplicati Server as a Windows service. See [Duplicati.WindowsService.exe](#duplicati-windowsservice-exe) for more information.

*****

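As a minimal sketch of that registration, the service tool is normally invoked from an elevated command prompt in the Duplicati installation folder; see the section referenced above for the full details and options:

```nohighlight
Duplicati.WindowsService.exe install
```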

<img align="right" src="/icon_windows_end.png"></img></br>

*****

## Introduction to the Graphical User Interface

The most convenient way to configure and control Duplicati is using the Graphical User Interface. Duplicati provides an internal web server that allows the user to configure and schedule backup jobs, perform restore operations and apply settings. This web interface is available when Duplicati.Server.exe and/or Duplicati.GUI.TrayIcon.exe is running. The first instance of the web server listens on TCP port 8200. Additional instances listen on port 8300 and higher.

After a standard installation, the web interface can be started by a click on the tray icon:

Click _Open_ in the popup menu that shows up:

Note that the tray icon can have different colors:

 Duplicati is inactive.

 Duplicati is active. A backup job is running.

 There is an error message that is not yet acknowledged.

If you don't use the tray icon (for example if you disabled _Launch Duplicati at startup_ in the installation wizard), or if you want to call another Duplicati instance than the default one, open your web browser and enter the URL and port number in the address bar of your browser.

The default URL is http://localhost:8200

This can be changed by providing command line options to Duplicati.Server.exe or Duplicati.GUI.TrayIcon.exe. See [Duplicati.GUI.TrayIcon.exe](#_duplicati.gui.trayicon.exe), [Duplicati.Server.exe](#_duplicati.server.exe) and [Duplicati.WindowsService.exe](#_duplicati.winowsservice.exe) for more information.

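As a hedged sketch of what such options can look like, the server's port and listening interface are usually controlled with the options below; treat the referenced sections as the authoritative list:

```nohighlight
Duplicati.Server.exe --webservice-port=8300 --webservice-interface=any
```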

The first time you start the Duplicati Web interface, this message is presented:

Pay special attention to this message. Everyone who has access to your computer (or even another computer in your network) could potentially have access to your personal files by using the Duplicati Web interface. If Duplicati is installed as a service, even personal files from all users on the computer could be accessible.

For this and other security reasons, it is strongly recommended to set a password for the Web interface by clicking the _Yes_ button.

On the Settings page, which is displayed, you can set a password for the interface and optionally allow remote access to the web server. If you grant remote access, note that you also need to open the appropriate TCP port in your firewall.

Click the _OK_ button to save your changes. After supplying a password, you are logged out from the web interface and need to log on again with your new password.

The Duplicati main window is displayed. The responsive design makes Duplicati easy to use on screens of all sizes, including mobile devices.

On larger screens, the main page looks like this:

On screens with a lower resolution, Duplicati looks like this:

In this layout, a click on the Menu icon shows the main menu:

## Components of the Graphical User Interface

At first run, the Duplicati screen is mostly empty. After one or more backup jobs have been configured, this space will be used to present these backup jobs and some status information, giving you a quick impression of scheduling, the space used at the backend and how many versions are available. You can also start certain operations for a specific backup job here.

At the top of the page, you see the header, which consists of the Duplicati logo, the status bar, a pause button, a throttle button and some donation buttons.

The Duplicati logo tells you which Update Channel you use.

If your initial Duplicati installation was a Beta version, the default Update Channel will be Beta. This can be changed in the Settings page.

The Status Bar shows information about the currently running Backup or Restore job. If no operation is active, the next scheduled backup job is shown here. If there are no scheduled backup jobs, the Status Bar shows "No scheduled tasks".

The Pause and Throttle buttons can be used to keep control over the bandwidth used by Duplicati.

With the Pause button you can temporarily stop Duplicati from uploading and downloading any files to and from the backend. With the Throttle button you can limit the bandwidth Duplicati uses by specifying a maximum upload and download speed.

If you like Duplicati, consider making a donation.

You can use these buttons to donate using PayPal or Bitcoin. Displaying these buttons can be disabled in the Settings menu.

The main menu can be found at the left side on high resolution screens or under the Menu button in the upper right corner when using a lower resolution, for example on mobile devices.

A short description of the menu items:

| Menu item | Description |
|------------------------------|------------------------------------------------------------|
|  | Leave the current submenu and return to the home screen. |
|  | Add a new backup configuration. |
|  | Restore files from an already configured backup job, directly from the backend, or from an imported configuration file. |
|  | Change general program settings and define default settings for all backup jobs. |
|  | Show the Duplicati log files and view events in real time. |
|  | Show information about the current Duplicati version and system information. |
|  | Log out from the Duplicati Graphical User Interface. This item is omitted if no password is set for the User Interface. |

## Creating a new backup job

New backup jobs can be configured and scheduled by clicking _Add backup_ in the main menu. Before the actual wizard starts, you can choose between _Configure a new backup_ and _Import from a file_. With _Import from a file_ you can import a configuration file that you exported earlier from the same computer or another computer running Duplicati. Because there is no configuration file available and we want to specify all options, we choose the first option and click _Next_.

The wizard consists of 5 steps. In step 1 you can give the backup job a descriptive name and define the encryption settings.

*****

>  Losing your encryption key will render your backup files useless and make restore operations impossible. Always store your encryption key in a safe place, separated from your backup files and not on the computer that contains the Duplicati source files.

*****

Enter a descriptive name, select the encryption type and specify a strong encryption key. Duplicati gives an indication of the strength of the key you entered. Optionally, Duplicati can generate a strong encryption key for you.

*****

>  Encryption can be disabled, but this is strongly discouraged, especially if you upload your backup files over the internet to a public cloud storage solution. Click Next to continue.

*****

In step 2 you can specify the Storage Type you want to use for your backups and enter the URL, path and credentials. In this example, FTP is used, because it is an industry standard protocol that is easy to set up.

*****

>  Each Storage Type has its own requirements that you need to fill in. For an S3 compatible backend, you need to specify a Bucket name, region and storage class. For other backends, like Google Drive or Microsoft OneDrive, you need to create an AuthID token to grant permission to Duplicati to get access to that backend.

*****

If all required fields are filled in, you can optionally click the _Test connection_ button. Duplicati will try to connect to the backend using the provided information. If Duplicati can connect to the backend, but the specified folder does not exist, Duplicati can create it for you.

Under _Advanced options_ you can specify a number of settings that are specific to the storage type you selected. Pick a setting from the list to add it to the Advanced Options and change the setting as needed. Click _Next_ to proceed to step 3.

In step 3 you can select the files and folders you want to include in the backup. This can be done by selecting files and folders in the file picker. Only local files and folders can be selected using the file picker. If you want to include shared folders in your local network, you have to specify the path in the text box beneath the file picker.

*****

>  If you want to include one or more libraries, like My Documents or My Pictures, keep in mind that the file picker shows these locations in the context of the user account that is used to start the server component. If you run Duplicati using the integrated server component in the System Tray tool (this is the default setup), then these libraries point to your personal folders. However, if you registered Duplicati as a service, these libraries point to the personal folders of the SYSTEM account, which are probably empty. To select your personal libraries, don't use the My Documents/Music/Pictures/Videos/Desktop items, but drill down through the file system, probably `C:\Users\<Username>\Documents` etc.

*****

*****

>  Clicking an item in the file picker will add that item and all child items to the source selection list. This is indicated with a green check mark. Clicking it a second time changes the check mark to a red cross. This excludes that item and all child items from the backup.

*****

There's a small button in the upper right corner of the file picker:

Clicking it gives access to the advanced editor. In the advanced editor you can enter the files and folders you want to include in your backup instead of browsing to them.

*****

>  You can review your selections under Source data in the file picker.

*****

Under _Filters_ you can specify what you want to be excluded from the backup. If you deselected one or more files or folders in the file picker (marked with a red cross), they show up in the list under _Filters_. You can specify more exclusions based on file or folder name, on specific files or folders, or even using a Regular Expression (a sketch of such exclusions is shown below).

There are default exclusion lists for Windows, Linux and OS X. Selecting the appropriate operating system excludes all files and folders that are known to be unneeded or impossible to back up (like temporary files, the paging file or the hibernation file). For Filters, an advanced editor is available too, using the button in the upper right corner.

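As a hedged illustration of what such exclusions can look like when a job is run from the command line, the sketch below uses the `--exclude` option with wildcard patterns. The target URL and source path are placeholders, and the exact filter syntax is described in the Duplicati filter documentation:

```nohighlight
Duplicati.CommandLine.exe backup <target-url> "C:\Users\User\Documents" --exclude="*.tmp" --exclude="*\Temp\*"
```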

Another way to avoid backing up unneeded files is excluding files with a specific attribute or files that exceed a predefined file size. Select what you want to exclude under the _Exclude_ item. Click _Next_ for step 4.

In step 4 you can schedule your backups. Deselecting _Automatically run backups_ disables scheduling for this backup job. Once disabled, you can start the backup job manually whenever you want it to run. If you keep this enabled, you can specify how frequently and at which time the backup should be started. You can also exclude one or more weekdays.

If a backup job misses the defined schedule, for example because the computer was powered off, the backup job will start as soon as possible after the specified time. Click Next to proceed to the final step.

In step 5 you can set the Upload Volume size and how many backups should be available for restore operations.

An Upload Volume is an encrypted, compressed file that contains a part of your backup. For normal backup operations you can keep this value unchanged, but in some scenarios, for example very large backups, you can increase this size to reduce the number of files at the remote storage location. The default size for an Upload Volume is 50 MB. Increase this value if needed.

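This setting corresponds to the `dblock-size` option, which also appears in the exported configuration file shown at the end of this chapter. As a sketch, a larger volume size could be requested like this:

```nohighlight
--dblock-size=100MB
```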

*****

>  The Upload volume size is not the maximum capacity that is offered by your storage provider. It is the size of each chunk of data that is uploaded to the backend during a backup operation. Increasing the size of an upload volume will reduce the number of files at the backend, but will require downloading more data when performing restore operations. See APPENDIX C Choosing sizes in Duplicati for more information about block and volume sizes.

*****

The retention can be set in 3 ways:

* **Unlimited:**

Backups will never be deleted. This is the safest option, but the remote storage capacity used will keep increasing.

* **Until they are older than:**

Backups older than a specified number of days, weeks, months or years will be deleted.

* **A specific number:**

The specified number of backup versions will be kept; all older backups will be deleted.

Under _Advanced options_ there is an extensive list of options to fine-tune your backup job. Click _Pick an option_ and select which option you want to set. This option is added to a list where you can change the value of that option.

*****

>  This is only for advanced users. Don't use this, unless you know exactly what you're doing. Choosing incorrect values may cause unusable backups.

*****

Click the _Save_ button. Your first backup job should show up in the main window.

If you have a previously exported configuration file, you can import it by selecting _Import from a file_ in the _Add backup_ menu. In the next step you can browse to the location where your configuration file is stored, enter a passphrase if the configuration file is encrypted and click _Import_. If _Save immediately_ was deselected before the _Import_ button was clicked, you can review all 5 steps and make changes if desirable. Click the _Save_ button in step 5 to save your backup job configuration.

>  When importing a backup job from a configuration file, a new database will be created, using a random filename. If the configuration file contains the name of a local database, this name will be ignored. This will prevent problems caused by multiple backup jobs using the same local database.

If you want to re-use an existing database, open the backup configuration's Database menu after the job is imported. Enter the path and filename of the existing local database in the Local database path field.

## Running a backup job

There are 3 ways to start a backup using the Graphical User Interface:

* If it is a scheduled backup, just wait for the next scheduled time. The backup will start automatically.
* Click Run now, just under the backup name.
* Click on the backup name. Then click Run now under Operations.

The progress bar indicates that the backup job starts:

The first time a backup is executed, all data has to be divided into blocks, compressed, packaged into archive files, encrypted and uploaded to the backend. This can take a long time, depending on the amount of source data to be processed, the system performance and the network bandwidth to the backend. After the initial (full) backup, only new and changed data will be processed and uploaded, making successive backups much faster.

You can follow the progress in the progress bar, where the number of files and the amount of data to be processed are shown. The current upload speed is also displayed.

After all files have been processed, some additional operations are performed.

If there are files still uploading in the background, Duplicati will wait for them to complete.

After all files are uploaded, Duplicati will randomly choose a few upload volumes from the backend, download them and verify that the contents are what Duplicati expects them to be.

After the backup has finished, the status bar will show when the next backup will run, if at least one backup job is scheduled.

If there were any warnings or errors during the backup, they will be displayed at the bottom of the main screen, including links to the log files and a button to dismiss the alert.

After the first backup is completed, the Duplicati main screen will display some additional information about the backup:

## Restoring files from a backup

If you want to restore one or more files from a backup, you can start the restore wizard:

* By clicking the backup name and clicking Restore files ... under Operations.
* By clicking Restore in the main menu, selecting the backup you want to restore from and clicking Next.

The Restore wizard consists of two steps. In step 1 you can specify what you want to restore and from which restore point you want to restore these files. In step 2 you can choose to what location you want to restore the files and supply some options for the restore operation.

In the first step, select the restore point from which you want to restore some files by selecting a date and time behind _Restore from_. Each restore point will list all files and folders included in the backup exactly as they were at the listed timestamp.

In the file browser, select all files and folders you want to restore. Selections will be marked with a green check mark. Clicking a folder will select that folder and all underlying files and folders. You can exclude files and folders inside a selected folder by clicking them. The preceding check marks will be removed from the clicked objects. Folders that are partially selected are marked with a green square.

You can find files easily by typing a part of the filename in the _Search for files_ text box. Filenames containing your search query will be highlighted in the file browser.

*****

>  Highlighting does not actually select the files you type in. Only files with a green check mark will be restored.

*****

If you have selected all files and folders you want to restore, click _Continue_ to proceed to step 2.

The second step allows you to specify a location to restore the selected files to. Choose _Original location_ to restore the files to their original location. Choosing _Pick location_ allows you to select an alternative location to restore your files to. You can do this by typing the folder path or selecting the root folder with the Browse button.

If you chose to restore to the original location, you can specify what Duplicati should do with files that already exist: overwrite them, or restore to a new file with a timestamp in the file name.

You can also restore file access permissions. This is disabled by default, because doing this might prevent access to the files that you just restored.

Clicking the Restore button will start the actual restore operation. The restore operation starts with scanning local files for blocks that are already available. This can reduce the amount of data downloaded from the backend significantly.

The next part of the restore process is downloading the required upload volumes from the backend to assemble the selected files and folders to restore.

After the operation has been completed, Duplicati will notify you and encourage you to make a donation. The donation information can be disabled in the _Settings_ menu. Warnings or errors, if any, will be shown in the bottom part of the Duplicati main screen. Click OK to return to the main screen.

## Restoring files if your Duplicati installation is lost

If you want to restore your files without being able to use your Duplicati installation (for example on another computer, or after a system crash), you have to restore your files directly from the backup destination.

If Duplicati isn't installed on the computer you want to restore to, download and install Duplicati first. See [Installation](#_Installation) for more information.

To start a restore operation without a configured backup job, click Restore in the main menu. You have 2 options:

* If you have exported the backup configuration earlier to a file and still have access to this file, you can import it and start restoring. This is the easiest option.
* If you don't have an exported configuration file, you need to know the backend URL, credentials and the backup passphrase. Once you have entered all needed information, you can start restoring your files.

If you don't have a configuration file, you have to supply all needed information yourself. Select _Direct restore from backup files ..._ and click _Next_.

In step 1 (Backup location), you have to select the correct Storage Type and fill in the required information to connect to the remote storage.

Click the _Test connection_ button to verify that the connection works. You should get a message indicating that the connection works.

Optionally, supply one or more advanced options for the selected backend. Click Next to proceed to step 2.

In step 2 (Encryption), specify the backup passphrase and optionally supply one or more advanced options.

Click the _Connect_ button to retrieve backup information from the backend.

If you have a configuration file, select _Restore from configuration ..._ and click _Next_.

Browse to the location where the configuration file is stored. If this file was encrypted during the export, enter the passphrase in the text field. Click Import to continue.

The restore wizard is shown. In step 1 (Backup location), all information to connect to the remote storage is filled in with the information from the configuration file. Click the _Test connection_ button to check if the connection works.

In step 2, the passphrase is already filled in. Click the _Connect_ button to proceed to step 3.

Duplicati connects to the remote storage and retrieves a list of available backups.

Then file information is retrieved.

After the Fetching path information task has been completed, the restore process is exactly the same as described in [Restoring files from a backup](#_Restoring_files_from).

## Editing an existing backup

Sometimes changes need to be made to a backup configuration. If you create a new folder and want to add this folder as a backup source in your configuration, you have to edit the backup job. Other examples are changed credentials for the backend, defining another schedule and setting or changing some advanced options for your backup job.

To modify a backup job configuration, click the name of the backup job and click _Edit ..._ under _Configuration_.

You return to the same wizard that was shown when adding the backup. The difference is that all 5 steps are already filled in with the settings you chose in the Add backup wizard.

You can walk through the wizard, optionally changing some settings, and click _Next_ until you reach the last step of the wizard. You can also click the number of the step you want to edit in the selector at the top.

If all settings are correct, click the _Save_ button in step 5.

*****

>  Many settings can be modified, but some settings cannot be changed after the initial backup is made. For obvious reasons, the passphrase and the block size need to stay the same once the initial backup is completed.

*****

*****

>  Adding or modifying advanced options may have unwanted effects. Never modify settings in a backup configuration, unless you are sure what the consequences of the change are.

*****

## Exporting a backup job configuration
|
||||||
|
|
||||||
|
Backup job configurations can be exported in 2 ways:
|
||||||
|
|
||||||
|
* **As Command-line**
|
||||||
|
If you don't want to use the Graphical User Interface to manage your backups and/or you want to use another task scheduler instead of the scheduler that is integrated in Duplicati, you can use the Command-line export to generate a command that you can use to perform the current backup job with the `Duplicati.CommandLine.exe` tool.
|
||||||
|
* **To File**
|
||||||
|
When exporting to a file, a standard JSON file is generated that contains all settings of the selected backup job configuration. This file can be imported later into a new Duplicati installation, for example if your computer is lost in a disaster.
|
||||||
|
|
||||||
|
To export a backup job configuration, click its name and click _Export ..._ under _Configuration_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
To export the current configuration as a ready-to-use command, select _As Command-line_ and click the _Export_ button.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The result is a Duplicati backup command that you can use with a scheduler of your choice.
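As a hedged sketch of how such an exported command could be scheduled on Windows, the command below assumes you pasted the generated backup command into a batch file; the task name, start time and batch file path are hypothetical placeholders:

```nohighlight
schtasks /Create /SC DAILY /ST 03:00 /TN "Duplicati Pictures backup" /TR "C:\Scripts\duplicati-pictures-backup.bat"
```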
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
If you want to export to a file, select _To file_ and click the _Export_ button.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
If you select _Encrypt file_, you can enter a passphrase to scramble your configuration file, making it unreadable for others.
|
||||||
|
|
||||||
|
 Configuration files contain sensitive information, like your backup passphrase and credentials to authenticate to your backend. This information is stored as plain text in unencrypted configuration files. If you choose not to encrypt the configuration file, be sure to store it somewhere nobody else has access to.
|
||||||
|
|
||||||
|
 Losing the passphrase will make the configuration file useless. Without the passphrase it is impossible to extract information from the configuration file. Store the passphrase in a safe place.
|
||||||
|
|
||||||
|
 Never store the configuration file and, if applicable, the passphrase on the computer running Duplicati. You will likely need them when that computer is lost. Make sure you can still access the file and the passphrase if you can't use your computer anymore.
|
||||||
|
|
||||||
|
The contents of the file (in unencrypted form) could look something like this:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
{
|
||||||
|
"CreatedByVersion": "2.0.2.12",
|
||||||
|
"Schedule": null,
|
||||||
|
"Backup": {
|
||||||
|
"ID": "1",
|
||||||
|
"Name": "Pictures Collection",
|
||||||
|
"Tags": [],
|
||||||
|
"TargetURL": "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup",
|
||||||
|
"DBPath": "C:\\Users\\User\\DuplicatiCanary\\data\\NTWRLRVPKH.sqlite",
|
||||||
|
"Sources": [
|
||||||
|
"%MY_PICTURES%"
|
||||||
|
],
|
||||||
|
"Settings": [
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "encryption-module",
|
||||||
|
"Value": "aes",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "compression-module",
|
||||||
|
"Value": "zip",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "dblock-size",
|
||||||
|
"Value": "50mb",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "keep-time",
|
||||||
|
"Value": "3M",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "passphrase",
|
||||||
|
"Value": "%@/%78kUPKlZtz",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "--skip-files-larger-than",
|
||||||
|
"Value": "2GB",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "--default-filters",
|
||||||
|
"Value": "Windows",
|
||||||
|
"Argument": null
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"Filter": "",
|
||||||
|
"Name": "--exclude-files-attributes",
|
||||||
|
"Value": "temporary",
|
||||||
|
"Argument": null
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Filters": [
|
||||||
|
{
|
||||||
|
"Order": 0,
|
||||||
|
"Include": false,
|
||||||
|
"Expression": "desktop.ini"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"Metadata": {
|
||||||
|
"LastDuration": "00:00:05.3465555",
|
||||||
|
"LastStarted": "20171030T171703Z",
|
||||||
|
"LastFinished": "20171030T171708Z",
|
||||||
|
"LastBackupDate": "20171030T163454Z",
|
||||||
|
"BackupListCount": "4",
|
||||||
|
"TotalQuotaSpace": "0",
|
||||||
|
"FreeQuotaSpace": "0",
|
||||||
|
"AssignedQuotaSpace": "-1",
|
||||||
|
"TargetFilesSize": "454306034",
|
||||||
|
"TargetFilesCount": "26",
|
||||||
|
"TargetSizeString": "433.26 MB",
|
||||||
|
"SourceFilesSize": "216463728",
|
||||||
|
"SourceFilesCount": "79",
|
||||||
|
"SourceSizeString": "206.44 MB",
|
||||||
|
"LastBackupStarted": "20171030T163547Z",
|
||||||
|
"LastBackupFinished": "20171030T163549Z"
|
||||||
|
},
|
||||||
|
"IsTemporary": false
|
||||||
|
},
|
||||||
|
"DisplayNames": {
|
||||||
|
"%MY_PICTURES%": "My Pictures"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Deleting a backup job configuration
|
||||||
|
|
||||||
|
You can delete a backup job if you no longer need to backup the files included in that backup job, or if the source files no longer exist. To delete a backup job, click on the backup name and click _Delete ..._ under _Configuration_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
In the _Delete backup_ screen, you can choose to keep or delete the local database associated with the selected backup job. The default setting is to delete the local database, because it is no longer needed once the backup job no longer exists. In addition, the database can be rebuilt from the files at the backend.
|
||||||
|
|
||||||
|
Before deleting a backup job, it is recommended to export the backup job settings to a file. The _Export configuration_ button is a quick link to this function. More about exporting backup configuration can be found in [Exporting a backup job configuration](#_Exporting_a_backup).
|
||||||
|
|
||||||
|
If you no longer need the backup files themselves, Duplicati can delete these files from the backend, freeing up remote storage space.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  This is an irreversible process. If your storage provider does not support previous versions or something similar, restoring files from this backup set will be impossible.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
If you really want to delete the backup files as well, select _Delete remote files_.
|
||||||
|
|
||||||
|
You can start the deletion with the _Delete backup_ button. If you chose to delete the remote files, you first have to fill in a captcha for security reasons.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Confirm your choices by clicking _Yes_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The backup job is removed from your Duplicati installation.
|
||||||
|
|
||||||
|
|
||||||
|
## Database management
|
||||||
|
|
||||||
|
Duplicati makes use of a local database for each backup job that contains information about what is stored at the backend. Main reasons for storing this information locally are performance and reduction of bandwidth usage. Without this database, Duplicati would need to download a fair amount of data from the backend for any operation.
|
||||||
|
|
||||||
|
 If a local database is available, it will be used during restore operations. However, the database is not required. In disaster recovery scenarios where the computer holding the source files is lost, a local database is not available. Requiring this database would make the backup files useless, so Duplicati is designed such that the local database can be rebuilt if it isn't available.
|
||||||
|
|
||||||
|
If something happens to the local database, some maintenance tasks can be performed. During a backup operation, Duplicati may find inconsistencies in the database and request a database repair. You can perform maintenance tasks on the database by clicking the backup name and then _Database ..._ under _Advanced_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
You have several options in the screen that appears.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Clicking the _Repair_ button will search the database for inconsistencies and repair them automatically for you.
|
||||||
|
|
||||||
|
If a database is heavily corrupted, or if repairing doesn't help, you can delete it. During the next backup run, the database will be rebuilt.
|
||||||
|
|
||||||
|
If you want to rebuild the database immediately, click the _Recreate_ button, which also deletes the database but starts the rebuild right away.
|
||||||
|
|
||||||
|
If you want to make changes to the location where the local database is stored, you can use the _Reset_, _Save_, _Save and repair_ and _Move existing database_ buttons.
|
||||||
|
|
||||||
|
First type in a new path and/or filename. Clicking _Reset_ will undo changes you made to the _Local database path_. _Save_ will store the new database location (you have to copy the database manually to that location). _Save and repair_ will do the same, but additionally initiate a repair operation on the database. _Move existing database_ will move the database from the current location to the location specified in the _Local database path_ text field.
|
||||||
|
|
||||||
|
## Verifying backend files
|
||||||
|
|
||||||
|
At the end of each backup job, Duplicati checks the integrity by downloading a few files from the backend. The contents of these files are checked against what Duplicati expects them to be. This procedure increases the reliability of the backup files, but backups take a bit longer to complete and use some download bandwidth.
|
||||||
|
|
||||||
|
Automatic verification after backup completion can be disabled by setting an advanced option. However, checking the integrity of the backup files is very important. If you disabled automatic verification, or if you just want to perform an additional verification, click the backup name and click _Verify files_ under _Advanced_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The Verify operation starts immediately after clicking _Verify files_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
If any errors or warnings occur, they will be displayed in a popup at the bottom of the Duplicati screen.
|
||||||
|
|
||||||
|
## Compacting files at the backend
|
||||||
|
|
||||||
|
After each backup operation, old backups are marked for removal. Which backups are considered old can be configured in the Add backup wizard in Step 5\. Under _General options_ you can specify how many backups you want to keep, or after how many days backups may be deleted.
|
||||||
|
|
||||||
|
Upload volumes (files at the backend) often contain blocks that belong to old backups only, as well as blocks that are still used by newer backups. Because the contents of these volumes are partly needed, they cannot simply be deleted, resulting in unnecessarily allocated storage capacity.
|
||||||
|
|
||||||
|
The compacting process takes care of this. When a predefined percentage of a volume is used by obsolete backups, the volume is downloaded, old blocks are removed and blocks that are still in use are recompressed and re-encrypted. The smaller volume without obsolete contents is uploaded and the original volume is deleted, freeing up storage capacity at the backend.
|
||||||
|
|
||||||
|
Compacting can result in a lot of small volumes at the backend. If enough small files exist that can be combined into one or more volumes of the defined volume size (default 50 MB), these small volumes are downloaded, repackaged and uploaded to the backend, replacing the small files.
|
||||||
|
|
||||||
|
The compacting procedure is triggered after each backup, but can be disabled with an advanced option. If you want to perform a compacting operation manually, click on the backup name and click _Compact now_ under _Advanced_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The compacting procedure starts immediately.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Depending on the amount of obsolete data, this can take some time. If no unused blocks are found, the task stops almost instantly.
|
||||||
|
|
||||||
|
## Using the Command line tools from within the Graphical User Interface
|
||||||
|
|
||||||
|
Some tasks you can perform from the command line are not yet implemented in the Graphical user interface, for example retrieving a list of backed up files, deleting one or more backups, purging files from all backup sets or comparing 2 backups and listing the differences.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  If you want to use the command line tools, some basic knowledge of how these tools work is required. Improper use of the Commandline tools may damage or delete your backup files.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
To open a command line screen, click the backup name and click _Commandline ..._ under _Advanced_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
In the Commandline screen you can specify a number of options in a few sections:
|
||||||
|
|
||||||
|
In the _Command_ section, you can choose which command you actually want to run. You can choose from _send-mail_, _systeminfo_, _vacuum_, _affected_, _test-filters_, _verify_, _test_, _compare_, _create-report_, _compact_, _purge-broken-files_, _list-broken-files_, _purge_, _repair_, _restore_, _backup_, _delete_, _list_, _find_, _examples_ and _help_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The Target URL section is already filled in with the URL and credentials that are used by the currently selected backup job. If you want to make changes to it, you can type them in the text box or click the _Target URL_ link.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Most commands need one or more commandline arguments. For example, if you want to delete a specific backup, you have to supply a version number to the _delete_ command. The default value for this field is the list of source folders selected for backup, but in most situations you have to change this.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  All commandline arguments must be entered on separate lines.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
All other options that are set for the current backup job are listed under _Advanced options_. Options that do not affect the current backup command can be ignored. Conflicting options in the list can be deleted by clicking the blue X at the right side of each advanced option. The _Edit as text_ link lists all advanced options in a text box, making it easy to delete or modify multiple options.
|
||||||
|
|
||||||
|
**Example 1**: Retrieving a list of all files that can be restored from the latest backup.
|
||||||
|
|
||||||
|
You need the FIND command to list files in a particular backup. The FIND command expects a file mask to filter the list of found files as an argument. If you want a complete list, replace the contents of the _Commandline arguments_ text box with an asterisk (*). Keep the Target URL unchanged. The upper part of the Commandline screen should look something like this:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Click the _Run "find" command now_ button. The results are listed in the Duplicati main screen.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
**Example 2**: Find the differences between 2 backups.
|
||||||
|
|
||||||
|
The COMPARE command lists differences between 2 backups. Choose _compare_ in the _Command_ pull-down menu. Leave the Target URL unchanged and enter the base version number and the version number to compare on separate lines.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Additional options can be typed in the Commandline arguments text box, or added to the Advanced options list. When using the Advanced option list, open the _Add advanced option_ pull-down menu and choose the option you want to add, for example _verbose_. To enable this option, place a checkmark behind it in the list or remove the checkmark to disable verbose mode. Alternatively, you can add `--verbose=true` to a new line in the _Commandline arguments_ text box.
|
||||||
|
|
||||||
|
The `--verbose` option will list all new, modified and deleted files and folders. Without this option, only the totals and the first 10 files will be listed. The result (without the `--verbose` option) looks like this:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
## Viewing the log files of a backup job
|
||||||
|
|
||||||
|
You can view all messages and results related to backup job operations. To view these log entries, click the backup name and click _Show log ..._ under _Reporting_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The log data for the selected backup job is displayed. You can choose to view the general events or the events that are specific to backend operations, by clicking the _General_ or _Remote_ button.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
In the General log, a list is presented with timestamps, followed by _Result_ or _Message_. Click on a line to expand it and view the details of the result or message.
|
||||||
|
|
||||||
|
_Result_ entries contain statistics and any warning or error messages that occurred. _Message_ lines contain individual messages that were generated during a particular operation.
|
||||||
|
|
||||||
|
## Creating a bug report
|
||||||
|
|
||||||
|
In case you need technical support, the Duplicati development team may ask for a bug report. A bug report contains information about the system Duplicati is installed on, some information about the Duplicati installation itself and an obfuscated version of the local databases, without the original folder and file names of your local storage.
|
||||||
|
|
||||||
|
To create a bug report, click the backup name and click _Create bug report ..._ under _Reporting_.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
The creation of the bug report starts immediately.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
When completed, a message is displayed and you can download the generated report.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Click the Download button and send the report to the Duplicati development team for further investigation.
|
||||||
|
|
||||||
|
## Settings in Duplicati
|
||||||
|
|
||||||
|
In the _Settings_ menu you can specify different types of settings. In the first place there are some general program settings. These settings influence the look and feel of the user interface and determine the way that the software is started and updated.
|
||||||
|
|
||||||
|
Additionally, you can define a list of default settings that apply to all backup jobs that don't have those settings explicitly defined.
|
||||||
|
|
||||||
|
**Access to the user interface**
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Check the _Password_ checkbox to secure the web interface with a password. Type a strong password in the text field and click the _OK_ button to confirm your setting. The next time you access the Duplicati web interface, you have to enter this password.
|
||||||
|
|
||||||
|
When enabled, _Allow remote access_ allows access to the web interface from other hosts in the network.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  If you are using a firewall, don't forget to create an access rule that allows incoming traffic on the port the Duplicati server listens on (the default is TCP port 8200).
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  When allowing remote access, setting a password to the user interface is highly recommended. Anonymous access to Duplicati will give anybody access to your personal files.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  If Duplicati is registered as a Windows service, setting a password on the user interface is highly recommended. If the Duplicati service is started using the SYSTEM account, anyone with access to the user interface will have access to the complete file system of the local host. Don't allow remote access when running Duplicati as a service, unless strictly needed.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
You can set a delay for Duplicati to become active after startup or hibernation. When Duplicati is started, no tasks will be performed until the specified time has elapsed.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
With the User Interface settings you can change the interface language and choose a color scheme.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
You can disable donation messages, for example if you already made a donation. Toggle donation messages by clicking on the link.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
If you have installed a beta version of Duplicati, your installation is classified in the Beta Update Channel, which means that you will get an update notification when a new beta version is available. You can change the update channel by selecting your preferred type of Duplicati builds.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
Duplicati can send anonymous reports containing information about how you use Duplicati. These reports can be viewed at [https://usage-reporter.duplicati.com/](https://usage-reporter.duplicati.com/).
|
||||||
|
|
||||||
|
You can set the level of reports to be sent (Information, Warning, Error or None). Setting this to _None_ disables usage reporting entirely.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
In the Settings screen, you can generate a list of advanced options. These options will be applied to all backup jobs, unless the same options with another value are specified in a particular backup job configuration. This avoids having to set the same settings for each backup job you create. For example, if you want to send an email after each backup operation, you can set this, including mail server settings and credentials, in the _Default options_ list.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
To add a default option, open the pull-down menu by clicking _pick an option_. A long list is displayed. Find the option you want to set as a default option, for example `send-mail-level`. The option is added to the list and you can set the value. Type _all_ to send an email regardless of the result (successful, warning, error or fatal). In the same way, you can add an email recipient by selecting the `send-mail-to` option from the list. Type the email address you want to send the email to. After the options are added, you should have this list:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
All backup jobs will now send an email to the specified address after completion. If a backup job should send its email to another address, you only have to specify the alternative address using the advanced option `send-mail-to` in that backup job's configuration. The advanced option `send-mail-level` is already configured in the _Default options_.
|
||||||
|
|
||||||
|
## Viewing the Duplicati Server Logs
|
||||||
|
|
||||||
|
All operations that are performed by the Duplicati server component are stored in the internal log. To view it, click _Show log_ in the main menu. All stored events are listed, including date and time. Clicking on an event shows detailed information about it.
|
||||||
|
|
||||||
|
If you want to see what is happening in the background in real time, click the _Live_ button. This is disabled by default to preserve system resources. Choose one of the levels _Error_, _Warning_, _Information_ or _Profiling_. _Error_ will only display events indicating that something went wrong; _Profiling_ lists nearly every single event that occurs. Clicking on an event also reveals detailed information about that event.
|
||||||
|
|
||||||
|
## Getting information about your setup
|
||||||
|
|
||||||
|
If you want more information about the Duplicati version that is installed on your system, or about the system Duplicati is running on, click _About_ in the main menu. The _About_ screen consists of four overviews, each of which can be displayed using one of the four buttons.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
_General_ will show information about Duplicati and the version you work with. You can check for updates using the _Check for updates now_ link in this overview. If a new version for your update channel is available, a message will be displayed with the new version number. Use the _Download_ and _Activate_ buttons to update your Duplicati installation to the latest version.
|
||||||
|
|
||||||
|
In the _Changelog_ overview you can see what's changed in the software up to the installed version.
|
||||||
|
|
||||||
|
The _Libraries_ button shows all third party components that were used in the Duplicati software package, including a link to the website and a link to licensing information for that component.
|
||||||
|
|
||||||
|
_System info_ displays information about the system Duplicati is installed on.
|
||||||
|
|
||||||
|
## Updating Duplicati
|
||||||
|
|
||||||
|
Duplicati checks for new updates regularly. When a new update in your update channel is available, Duplicati will notify you about this update by displaying a message at the bottom of the main screen. You can check for updates immediately by clicking _Check for updates now_ in the _General_ overview of the _About_ screen.
|
||||||
|
|
||||||
|
Installing updates is as simple as downloading and activating the new version by using the buttons in the message.
|
||||||
|
|
docs/04-using-duplicati-from-the-command-line.md
|
|||||||
|
|
||||||
|
|
||||||
|
## Introduction to the Duplicati Command Line tool
|
||||||
|
|
||||||
|
The integrated webserver in Duplicati offers a convenient way to schedule and run backup jobs. However, if you can't or don't want to use the graphical user interface and/or the built-in scheduler, you can use the Duplicati Commandline tool. The filename is `Duplicati.CommandLine.exe`; the tool can be found in the Duplicati program folder.
|
||||||
|
|
||||||
|
With the Commandline tool you can perform all operations that are available in the Graphical User Interface, and more. Some specific operations, like deleting a particular backup or comparing 2 backups, are not available in the Graphical User Interface, but are supported by the Commandline tool.
|
||||||
|
|
||||||
|
## How to use the Duplicati Command Line tool
|
||||||
|
|
||||||
|
The Commandline tool can be used by typing `Duplicati.CommandLine.exe` followed by a number of arguments in a command prompt window. Linux and Mac OS X users should type `mono Duplicati.CommandLine.exe` or `duplicati-cli`, which is a wrapper for running `mono Duplicati.CommandLine.exe`.
|
||||||
|
|
||||||
|
Which arguments you need to specify depend on the command you run with the Commandline tool. Generally, these arguments need to be supplied:
|
||||||
|
|
||||||
|
* **The command to execute**
|
||||||
|
This tells the Commandline tool what to do. Supported commands will be described one by one. The Commandline tool supports these commands:
|
||||||
|
`backup`, `find`, `restore`, `delete`, `compact`, `test`, `compare`, `purge`, `vacuum`, `repair`, `affected`, `list-broken-files`, `purge-broken-files`
|
||||||
|
* **Target URL**
|
||||||
|
If the command needs access to the files at the backend, you need to specify the protocol, URL and credentials as the first argument.
|
||||||
|
Example: to access host.myftpserver.com/backup with username User and password Pass, using the FTP protocol, the target URL will be:
|
||||||
|
`ftp://User:Pass@host.myftpserver.com/backup`
|
||||||
|
Each storage provider has its own set of required and optional parameters. See [Storage Providers](#_Storage_Providers) for more info about specific backends.
|
||||||
|
* **Command arguments**
|
||||||
|
Some commands need additional information. For example, if you want to compare 2 backups, you have to specify which 2 backups from the available list Duplicati should compare.
|
||||||
|
* **Advanced options**
|
||||||
|
Duplicati offers a wide range of advanced options. With advanced options you give Duplicati additional information, like the location of the local database, where to store temporary files or information to fine-tune the command you want to execute. There are general advanced options and advanced options for specific storage providers. See [Storage Providers](#_Storage_Providers) and [Advanced options](#_Advanced_options) for more information.
|
||||||
|
|
||||||
|
Generally, each operation from the command line has the following format:
|
||||||
|
|
||||||
|
**For Windows:**
|
||||||
|
`Duplicati.CommandLine.exe <command> [storage-URL] [arguments] [advanced-options]`
|
||||||
|
|
||||||
|
**For Linux and Mac OS X:**
|
||||||
|
`duplicati-cli <command> [storage-URL] [arguments] [advanced-options]`
|
||||||
|
|
||||||
|
Storage-URL, arguments and advanced-options may or may not be mandatory, depending on the command you execute.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  From this point, only `Duplicati.CommandLine.exe` will be used to refer to the Commandline tool. Linux and Mac OS X users should replace this with `duplicati-cli` or `mono Duplicati.CommandLine.exe`.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
## Getting help from the Command Line Tools
|
||||||
|
|
||||||
|
The Commandline tool provides online help with the special `help` command. To get started, type `Duplicati.CommandLine.exe help`.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
To get help about a specific topic, add it to the help command. So if you need help about the `find` command, type `Duplicati.CommandLine.exe help find`, which will return the following result:
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
*****
|
||||||
|
>  The special help topic `example` will show a few examples of how the Commandline tool should be used to perform some simple operations.
|
||||||
|
To show the examples, type `Duplicati.CommandLine.exe help example`.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
|
||||||
|
## The BACKUP command
|
||||||
|
|
||||||
|
This is probably the most important command; after all, Duplicati is a backup program. You can run a backup with the backup command using the following format:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe backup <storage-URL> "<source-path>" [<options>]`
|
||||||
|
|
||||||
|
The storage-URL should be specified in this format:
|
||||||
|
|
||||||
|
`protocol://username:password@hostname:port/path?backend_option1=value1&backend_option2=value2`
|
||||||
|
|
||||||
|
Multiple source paths can be specified if they are separated by a space.
|
||||||
|
|
||||||
|
`username` must not contain `:` and `password` must not contain `@`. If they do, specify the username and password using `--auth-username` and `--auth-password`, or url-encode them.
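For illustration, the FTP destination used in the export example earlier in this manual can be written either with the credentials embedded in the URL or passed as URL options:

```nohighlight
ftp://Duplicati:backup@myftpserver.com/Backup/Pictures
ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup
```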
|
||||||
|
|
||||||
|
Add as many advanced options as needed, like `--passphrase` and `--dblock-size`.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  Instead of composing the complete backup command yourself, including all advanced options, you can create a backup job in the Graphical User Interface, without scheduling it. Once completed, you can export the backup job to the command line, resulting in a Duplicati.CommandLine.exe backup command with all settings that you specified in the wizard. You can paste this generated command in your favorite task scheduler. For more information about creating a backup job in the Graphical User Interface, see [Creating a new backup job](#_Creating-a-new-backup-job). For more information about exporting the backup job to the command line, see [Exporting a backup job configuration](#_exporting-a-backup-job-configuration).
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
The Commandline equivalent of the backup job described in [Creating a new backup job](#_Creating_a_new) is:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.exe backup "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" "C:\Users\User\Pictures" --backup-name="Pictures Collection" --dbpath="C:\Users\User\DuplicatiCanary\data\LFYXSFKFFN.sqlite" --encryption-module="aes" --compression-module="zip" --dblock-size="50mb" --keep-time="3M" --passphrase="%@/%78kUPKlZtz" --skip-files-larger-than="2GB" --default-filters="Windows" --exclude-files-attributes="temporary" --disable-module="console-password-input" --exclude="desktop.ini"
|
||||||
|
```
|
||||||
|
|
||||||
|
## The RESTORE command
|
||||||
|
|
||||||
|
The `restore` command can restore files from a specific restore point to the local system. Use the following format:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe restore <storage-URL> ["<filename>"] [<options>]`
|
||||||
|
|
||||||
|
This will restore `<filename>` to its original location. If the specified filename exists already, a timestamp will be added to the filename. If you want to restore all files, use "*" or leave the filename empty.
|
||||||
|
|
||||||
|
Some advanced options frequently used with restore operations are:
|
||||||
|
|
||||||
|
* `--overwrite=<boolean>`
|
||||||
|
Overwrites existing files.
|
||||||
|
* `--restore-path=<string>`
|
||||||
|
Restores files to <restore-path> instead of their original destination. Top folders are removed if possible.
|
||||||
|
* `--time=<time>`
|
||||||
|
Restore files that are older than the specified time.
|
||||||
|
* `--version=<int>`
|
||||||
|
Restore files from a specific backup.
|
||||||
|
|
||||||
|
## The FIND command
|
||||||
|
|
||||||
|
This command is used to check what's in your backup. It can show a list of all backups, or list occurrences of a specific file (or files when using wildcards or a regular expression).
|
||||||
|
|
||||||
|
Usage format for the `find` command is:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe find <storage-URL> ["<filename>"] [<options>]`
|
||||||
|
|
||||||
|
If `<filename>` is specified, all occurrences of `<filename>` in the backup are listed. `<filename>` can contain `*` and `?` as wildcards. File names enclosed in brackets `[...]` are interpreted as regular expressions. The latest backup is searched by default. If an entire path is specified, all available versions of the file are listed. If no `<filename>` is specified, a list of all available backups is shown.
|
||||||
|
|
||||||
|
Useful advanced options are:
|
||||||
|
|
||||||
|
* `--time=<time>`
|
||||||
|
Shows what the files looked like at a specific time. Absolute and relative times can be specified.
|
||||||
|
* `--version=<int>`
|
||||||
|
Shows what the files looked like in a specific backup. If no version is specified, the latest backup (version=0) will be used. If nothing is found, older backups will be searched automatically.
|
||||||
|
* `--include=<string>`
|
||||||
|
Reduces the list of files in a backup to those that match the provided string. This is applied before the search is executed.
|
||||||
|
* `--exclude=<string>`
|
||||||
|
Removes matching files from the list of files in a backup. This is applied before the search is executed.
|
||||||
|
* `--all-versions=<boolean>`
|
||||||
|
Searches in all backup sets, instead of just searching the latest.
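As a hedged example, the command below searches all backup versions for JPEG files; the destination URL follows the FTP example used earlier and the passphrase is a placeholder:

```nohighlight
Duplicati.CommandLine.exe find "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" "*.jpg" --passphrase="<passphrase>" --all-versions=true
```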
|
||||||
|
|
||||||
|
## The COMPARE command
|
||||||
|
|
||||||
|
This is a useful command that shows the differences between two backup versions. The versions do not need to be subsequent, you can compare any two backup versions. If no versions are given, changes are shown between the two latest backups. The versions can either be timestamps or backup version numbers. If only one version is given, the most recent backup is compared to that version. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe compare <storage-URL> [<base-version>] [<compare-to>] [<options>]`
|
||||||
|
|
||||||
|
Some advanced options that can be used with the `compare` command:
|
||||||
|
|
||||||
|
* `--verbose`
|
||||||
|
Shows names of files
|
||||||
|
* `--include=<filter>`
|
||||||
|
Adds an include filter (for verbose output)
|
||||||
|
* `--exclude=<filter>`
|
||||||
|
Adds an exclude filter (for verbose output)
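For example, a hedged sketch comparing backup version 1 (base) with the latest backup (version 0) and listing the affected file names; URL and passphrase are placeholders following the earlier FTP example:

```nohighlight
Duplicati.CommandLine.exe compare "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" 1 0 --passphrase="<passphrase>" --verbose
```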
|
||||||
|
|
||||||
|
## The TEST command
|
||||||
|
|
||||||
|
After a backup operation, some backup files are verified by downloading them and checking whether their contents are what Duplicati expects. This automatic verification after each backup operation can be disabled. To check the integrity of the backup files manually, use the `test` command. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe test <storage-URL> <samples> [<options>]`
|
||||||
|
|
||||||
|
Verifies the integrity of a backup. A random sample of dlist, dindex and dblock files is downloaded and decrypted, and the contents are checked against recorded size values and data hashes. `<samples>` specifies the number of samples to be tested. If "all" is specified, all files in the backup will be tested. This is a rolling check: when executed again, different samples are verified than in the first run. A sample consists of 1 dlist, 1 dindex and 1 dblock file.
|
||||||
|
|
||||||
|
Suggested advanced options:
|
||||||
|
|
||||||
|
* `--time=<time>`
|
||||||
|
Checks samples from a specific time.
|
||||||
|
* `--version=<int>`
|
||||||
|
Checks samples from specific versions. Delimiters are `,` and `-`.
|
||||||
|
* `--full-remote-verification`
|
||||||
|
Checks the internal structure of each file instead of just verifying the file hash.
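A hedged example that verifies every file in the backup, including the internal structure check; the destination URL and passphrase are placeholders following the earlier FTP example:

```nohighlight
Duplicati.CommandLine.exe test "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" all --passphrase="<passphrase>" --full-remote-verification
```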
|
||||||
|
|
||||||
|
## The COMPACT command
|
||||||
|
|
||||||
|
Old data is not deleted immediately, as in most cases only small parts of a dblock file consist of old data. When the amount of old data in a dblock file grows, it might be worth replacing it. This is especially the case when the number of dblock files, and thus the required storage space, can be reduced. When backups are made frequently and only a few files have changed, the uploaded dblock files are small. At some point it might make sense to replace a large number of small files with one large file. This is what compacting does. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe compact <storage-URL> [<options>]`
|
||||||
|
|
||||||
|
A few advanced options to use with the `compact` command:
|
||||||
|
|
||||||
|
* `--small-file-max-count=<int>`
|
||||||
|
The maximum allowed number of small files.
|
||||||
|
* `--small-file-size=<int>`
|
||||||
|
Files smaller than this size are considered to be small and will be compacted with other small files as soon as there are `<small-file-max-count>` of them. `--small-file-size=20` means 20% of `<dblock-size>`.
|
||||||
|
* `--threshold=<percent_value>`
|
||||||
|
The amount of old data that a dblock file can contain before it is considered to be replaced.
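As a hedged sketch, the command below compacts volumes once 25% of their contents is obsolete and combines up to 20 small files; the threshold, file count, URL and passphrase are illustrative placeholders:

```nohighlight
Duplicati.CommandLine.exe compact "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" --passphrase="<passphrase>" --threshold=25 --small-file-max-count=20
```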
|
||||||
|
|
||||||
|
## The DELETE command
|
||||||
|
|
||||||
|
Marks old data as deleted and removes outdated dlist files. A backup is deleted when it is older than `<keep-time>` or when there are more than `<keep-versions>` newer versions. Data is considered old when it is not required by any existing backup anymore. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe delete <storage-URL> [<options>]`
|
||||||
|
|
||||||
|
Some advanced options:
|
||||||
|
|
||||||
|
* `--keep-time=<time>`
|
||||||
|
Marks data outdated that is older than <time>.
|
||||||
|
* `--keep-versions=<int>`
|
||||||
|
Marks data outdated that is older than <int> versions.
|
||||||
|
* `--version=<int>`
|
||||||
|
Deletes all files that belong to the specified version(s).
|
||||||
|
* `--allow-full-removal`
|
||||||
|
Disables the protection against removing the final fileset.
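A hedged example that keeps only the 10 most recent backup versions and marks everything older as deleted; the destination URL follows the earlier FTP example:

```nohighlight
Duplicati.CommandLine.exe delete "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" --keep-versions=10
```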
|
||||||
|
|
||||||
|
## The PURGE command
|
||||||
|
|
||||||
|
Purges (removes) files from remote backup data. This command can either take a list of filenames or use the filters to choose which files to purge. The purge process creates new filesets on the remote destination with the purged files removed, and will start the compacting process after a purge. By default, the matching files are purged in all versions, but this can be limited by choosing one or more versions. To test what will happen, use the `--dry-run` flag. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe purge <storage-URL> <filenames> [<options>]`
|
||||||
|
|
||||||
|
Useful advanced options:
|
||||||
|
|
||||||
|
* `--dry-run`
|
||||||
|
Performs the operation, but does not write changes to the local database or the remote storage.
|
||||||
|
* `--version=<int>`
|
||||||
|
Selects specific versions to purge from, multiple versions can be specified with commas.
|
||||||
|
* `--time=<time>`
|
||||||
|
Selects a specific version to purge from.
|
||||||
|
* `--no-auto-compact`
|
||||||
|
Prevents the automatic compacting process from running after the purge.
|
||||||
|
* `--include=<filter>`
|
||||||
|
Selects files to purge, using filter syntax
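For example, a hedged dry run that would purge all `.tmp` files from every backup version without actually changing anything; the file pattern, URL and passphrase are placeholders:

```nohighlight
Duplicati.CommandLine.exe purge "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" "*.tmp" --passphrase="<passphrase>" --dry-run
```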
|
||||||
|
|
||||||
|
## The REPAIR command
|
||||||
|
|
||||||
|
Tries to repair the backup. If no local database is found or the database is empty, the database is re-created with data from the storage. If the database is in place but the remote storage is corrupt, the remote storage gets repaired with local data (if available). Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe repair <storage-URL> [<options>]`
|
||||||
|
|
||||||
|
## The AFFECTED command
|
||||||
|
|
||||||
|
Returns a report explaining what backup sets and files are affected by a remote file. You can use this option to see what source files are affected if one or more remote files are damaged or deleted. The advanced option `dbpath` is required to specify the location of the local database. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe affected <storage-URL> <remote-filename> [<remote-filenames>] [<options>]`
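A hedged example is shown below; the remote volume name is purely illustrative, and the database path is taken from the exported configuration example earlier in this manual:

```nohighlight
Duplicati.CommandLine.exe affected "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" duplicati-20171030T163454Z.dlist.zip.aes --dbpath="C:\Users\User\DuplicatiCanary\data\NTWRLRVPKH.sqlite"
```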
|
||||||
|
|
||||||
|
## The LIST-BROKEN-FILES command
|
||||||
|
|
||||||
|
Checks the database for missing data that causes files to no longer be restorable. Files can become unrestorable if remote data files are defective or missing. Use the `list-broken-files` command to see what the `purge-broken-files` command will remove. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe list-broken-files <storage-URL> [<options>]`
|
||||||
|
|
||||||
|
## The PURGE-BROKEN-FILES command
|
||||||
|
|
||||||
|
Removes all files from the database and remote storage that are no longer restorable. Use this operation with caution, and only if you cannot recover the missing remote files but want to continue making backups. Even with missing remote files, it may be possible to restore parts of the files that will be removed with this command. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe purge-broken-files <storage-URL> [<options>]`
|
||||||
|
|
||||||
|
Recommended advanced option before actually purging the files:
|
||||||
|
|
||||||
|
* `--dry-run`
|
||||||
|
Performs the operation, but does not write changes to the local database or the remote storage.
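A hedged example of such a dry run, using the FTP destination from the earlier example with a placeholder passphrase:

```nohighlight
Duplicati.CommandLine.exe purge-broken-files "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" --passphrase="<passphrase>" --dry-run
```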
|
||||||
|
|
||||||
|
## The CREATE-REPORT command
|
||||||
|
|
||||||
|
Analyses the backup and prepares a report with anonymous information. This report can be sent to the developers for a better analysis in case something went wrong. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe create-report <storage-URL> <output-file> [<options>]`
|
||||||
|
|
||||||
|
## The TEST-FILTERS command
|
||||||
|
|
||||||
|
Scans the source files and tests them against the specified filters; the console output shows which files and folders are examined and the result. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe test-filters <source-path> [<options>]`
|
||||||
|
|
||||||
|
## The SYSTEM-INFO command
|
||||||
|
|
||||||
|
Issue the following command to see a variety of system information relevant to Duplicati. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe system-info`
|
||||||
|
|
||||||
|
## The SEND-MAIL command
|
||||||
|
|
||||||
|
Duplicati can send email notifications after each operation. Use the send-mail command to test this. Usage:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.exe send-mail --send-mail-to=<email-address> [<options>]`
|
||||||
|
|
||||||
|
Advanced options you can use with this command:
|
||||||
|
|
||||||
|
* `--send-mail-to=<email-address>`
|
||||||
|
Send an email to <email-address> after a backup. Valid formats are `Name <test@example.com>`, `Other <test2@example.com>`, `test3@example.com`. Multiple addresses must be separated with a comma.
|
||||||
|
* `--send-mail-from=<email-address>`
|
||||||
|
This is the sender address of the email that is sent.
|
||||||
|
* `--send-mail-subject=<subject>`
|
||||||
|
This is the subject line of the email that is sent. E.g. this can be `"Duplicati %OPERATIONNAME% Report"`
|
||||||
|
* `--send-mail-body=<body>`
|
||||||
|
The content of the email message. This should contain `"%RESULT%"`.
|
||||||
|
* `--send-mail-url=<smtp-url>`
|
||||||
|
A URL to connect to an SMTP server to send out an email. Example: `"tls://smtp.example.com:587"`, `"smtps://smtp.example.com:465"` or `"smtp://smtp.example.com:25"`.
|
||||||
|
* `--send-mail-username=<username>`
|
||||||
|
Required username to authenticate with SMTP server.
|
||||||
|
* `--send-mail-password=<password>`
|
||||||
|
Required password to authenticate with SMTP server.
|
||||||
|
* `--send-mail-level=<level>`
|
||||||
|
Specifies when email messages are sent. Possible values are `Success`, `Warning` and `Error`.
|
||||||
|
* `--send-mail-any-operation=true`
|
||||||
|
Also send emails after other operations like restore etc.
|
||||||
|
|
||||||
|
Allowed placeholders are:
|
||||||
|
* `%PARSEDRESULT%`
|
||||||
|
The parsed result of the operation: `Success`, `Warning`, `Error`
|
||||||
|
* `%RESULT%`
|
||||||
|
When used in the body, this is the result/log of the backup,
|
||||||
|
When used in the subject line, this is the same as `%PARSEDRESULT%`
|
||||||
|
* `%OPERATIONNAME%`
|
||||||
|
The name of the operation, usually `backup`, but could also be `restore` etc.
|
||||||
|
* `%REMOTEURL%`
|
||||||
|
The backend url.
|
||||||
|
* `%LOCALPATH%`
|
||||||
|
The path to the local folders involved (i.e. the folders being backed up).
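A hedged example that combines these options into a single test message; the addresses, SMTP server and credentials below are placeholders based on the example values listed above:

```nohighlight
Duplicati.CommandLine.exe send-mail --send-mail-to="test@example.com" --send-mail-from="duplicati@example.com" --send-mail-url="smtp://smtp.example.com:25" --send-mail-username="duplicati" --send-mail-password="<password>" --send-mail-subject="Duplicati %OPERATIONNAME% Report" --send-mail-body="%RESULT%"
```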
|
||||||
|
|
||||||
|
|
||||||
|
|
docs/05-storage-providers.md
|
|||||||
|
|
||||||
|
|
||||||
|
Duplicati supports many storage providers to use as backend for your backups. Both standard protocols and a wide range of proprietary cloud storage solutions are supported. Each storage provider has its own set of options that you can specify. Some options are mandatory, other options are optional. Use this list of providers as a reference to compose a valid command for communication with the storage provider of your choice.
|
||||||
|
|
||||||
|
## Local folder or drive
|
||||||
|
|
||||||
|
Duplicati can use the local file system to store backups. The following target URL formats can be used:
|
||||||
|
|
||||||
|
`file://hostname/folder%20for%20backup`
|
||||||
|
`file://\\server\folder%20for%20backup (UNC path)`
|
||||||
|
`"C:\folder for backup"`
|
||||||
|
`file://c:\folder%20for%20backup (Windows)`
|
||||||
|
`file:///usr/pub/folder%20for%20backup (Linux)`
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--auth-password`
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username`
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--alternate-destination-marker`
|
||||||
|
This option only works when the `--alternate-target-paths` option is also specified. If there are alternate paths specified, this option indicates the name of a marker file that must be present in the folder. This can be used to handle situations where an external drive changes drive letter or mount point. By ensuring that a certain file exists, it is possible to prevent writing data to an unwanted external drive. The contents of the file are never examined, only file existence.
|
||||||
|
* `--alternate-target-paths`
|
||||||
|
This option allows multiple targets to be specified. The primary target path is placed before the list of paths supplied with this option. Before starting the backup, each folder in the list is checked for existence and optionally the presence of the marker file supplied by `--alternate-destination-marker`. The first existing path that optionally contains the marker file is then used as the destination. Multiple destinations are separated with a "`;`". On Windows, the path may be a UNC path, and the drive letter may be substituted with an asterisk (`*`), eg.: " *:\backup ", which will examine all drive letters. If a username and password is supplied, the same credentials are used for all destinations.
|
||||||
|
* `--use-move-for-put`
|
||||||
|
When storing the file, the standard operation is to copy the file and delete the original. This sequence ensures that the operation can be retried if something goes wrong. Activating this option may cause the retry operation to fail. This option has no effect unless the `--disable-streaming-transfers` option is activated.
|
||||||
|
* `--force-smb-authentication`
|
||||||
|
If this option is set, any existing authentication against the remote share is dropped before attempting to authenticate.
|
||||||
|
|
||||||
|
## FTP
|
||||||
|
|
||||||
|
Duplicati can use FTP servers to store backups. The following target URL formats can be used:
|
||||||
|
|
||||||
|
`ftp://hostname/folder`
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--ftp-passive = false`
|
||||||
|
If this flag is set, the FTP connection is made in passive mode, which works better with some firewalls. If the `ftp-regular` flag is also set, this flag is ignored.
|
||||||
|
* `--ftp-regular = true`
|
||||||
|
If this flag is set, the FTP connection is made in active mode. Even if the `ftp-passive` flag is also set, the connection will be made in active mode.
|
||||||
|
* `--auth-password`
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username`
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--use-ssl`
|
||||||
|
Use this flag to communicate using Secure Socket Layer (SSL) over ftp (ftps).
|
||||||
|
* `--disable-upload-verify`
|
||||||
|
To protect against network failures, an attempt is made to verify every upload. Use this option to disable this verification, making uploads faster but less reliable.
|
||||||
|
|
||||||
|
## FTP (Alternative)
|
||||||
|
|
||||||
|
This backend can read and write data to an FTP based backend using an alternative FTP client. Allowed formats are
|
||||||
|
|
||||||
|
`aftp://hostname/folder`
|
||||||
|
|
||||||
|
or
|
||||||
|
|
||||||
|
`aftp://username:password@hostname/folder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--disable-upload-verify (Boolean)`
|
||||||
|
Disable upload verification.
|
||||||
|
To protect against network or server failures, an attempt is made to verify every upload. Use this option to disable this verification, making uploads faster but less reliable.
|
||||||
|
* `--aftp-data-connection-type (Enumeration)`
|
||||||
|
Configure the FTP data connection type.
|
||||||
|
If this flag is set, the FTP data connection type will be changed to the selected option.
|
||||||
|
Values: `AutoPassive`, `PASV`, `PASVEX`,`EPSV`, `AutoActive`, `PORT`,`EPRT`
|
||||||
|
Default value: `AutoPassive`
|
||||||
|
* `--aftp-encryption-mode (Enumeration)`
|
||||||
|
Configure the FTP encryption mode.
|
||||||
|
If this flag is set, the FTP encryption mode will be changed to the selected option.
|
||||||
|
Values: `None`, `Implicit`, `Explicit`
|
||||||
|
Default value: `None`
|
||||||
|
* `--aftp-ssl-protocols (Flags)`
|
||||||
|
Configure the SSL policy to use when encryption is enabled.
|
||||||
|
This flag controls the SSL policy to use when encryption is enabled.
|
||||||
|
Values: `None`, `Ssl2`, `Ssl3`, `Tls`, `Default`, `Tls11`, `Tls12`
|
||||||
|
Default value: `Default`
|
||||||
|
|
||||||
|
## OpenStack Object Storage / Swift
|
||||||
|
|
||||||
|
This backend can read and write data to Swift (OpenStack Object Storage). Supported format is
|
||||||
|
|
||||||
|
`openstack://container/folder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`. If the password is supplied,`--openstack-tenant-name` must also be set.
|
||||||
|
* `--openstack-tenant-name (String)`
|
||||||
|
Supplies the Tenant Name used to connect to the server.
|
||||||
|
The Tenant Name is commonly the paying user account name. This option must be supplied when authenticating with a password, but is not required when using an API key.
|
||||||
|
* `--openstack-apikey (Password)`
|
||||||
|
Supplies the API key used to connect to the server.
|
||||||
|
The API key can be used to connect without supplying a password and tenant ID with some providers.
|
||||||
|
* `--openstack-authuri (String)`
|
||||||
|
Supplies the authentication URL.
|
||||||
|
The authentication URL is used to authenticate the user and find the storage service. The URL commonly ends with "/v2.0". Known providers are:
|
||||||
|
* Rackspace US: [https://identity.api.rackspacecloud.com/v2.0](https://identity.api.rackspacecloud.com/v2.0)
|
||||||
|
* Rackspace UK: [https://lon.identity.api.rackspacecloud.com/v2.0](https://lon.identity.api.rackspacecloud.com/v2.0)
|
||||||
|
* OVH Cloud Storage: [https://auth.cloud.ovh.net/v2.0](https://auth.cloud.ovh.net/v2.0)
|
||||||
|
* Selectel Cloud Storage: [https://auth.selcdn.ru](https://auth.selcdn.ru)
|
||||||
|
* `--openstack-region (String)`
|
||||||
|
Supplies the region used for creating a container.
|
||||||
|
This option is only used when creating a container, and is used to indicate where the container should be placed.
|
||||||
|
Consult your provider for a list of valid regions, or leave empty for the default region.
|
||||||
|
|
||||||
|
## S3 Compatible
|
||||||
|
|
||||||
|
Duplicati can use S3-compatible servers to store backups. The following target URL format is used:
|
||||||
|
|
||||||
|
`s3://bucketname/prefix`
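As a hedged sketch of a backup to an S3-compatible destination, the command below uses some of the options listed further down; the bucket name, access keys and passphrase are placeholders:

```nohighlight
Duplicati.CommandLine.exe backup "s3://mybucket/duplicati" "C:\Users\User\Documents" --aws_access_key_id="<access-key-id>" --aws_secret_access_key="<secret-access-key>" --s3-server-name="s3.amazonaws.com" --use-ssl --passphrase="<passphrase>"
```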
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--aws_secret_access_key`
|
||||||
|
The AWS "Secret Access Key" can be obtained after logging into your AWS account, this can also be supplied through the `auth-password` property.
|
||||||
|
* `--aws_access_key_id`
|
||||||
|
The AWS "Access Key ID" can be obtained after logging into your AWS account, this can also be supplied through the `auth-username` property.
|
||||||
|
* `--s3-european-buckets = false`
|
||||||
|
This flag is only used when creating new buckets. If the flag is set, the bucket is created on a European server.
|
||||||
|
This flag forces the `s3-use-new-style` flag. Amazon charges slightly more for European buckets.
|
||||||
|
* `--s3-use-rrs = false`
|
||||||
|
This flag toggles the use of the special RRS header. Files stored using RRS are more likely to disappear than those stored normally, but also cost less to store. See the full description here:
|
||||||
|
[http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/](http://aws.amazon.com/about-aws/whats-new/2010/05/19/announcing-amazon-s3-reduced-redundancy-storage/)
|
||||||
|
* `--s3-storage-class`
|
||||||
|
Use this option to specify a storage class. If this option is not used, the server will choose a default storage class.
|
||||||
|
* `--s3-server-name = s3.amazonaws.com`
|
||||||
|
Companies other than Amazon are now supporting the S3 API, meaning that this backend can read and write data to those providers as well. Use this option to set the hostname. Currently known providers are:
|
||||||
|
* Amazon S3: s3.amazonaws.com
|
||||||
|
* Hosteurope: cs.hosteurope.de
|
||||||
|
* Dunkel: dcs.dunkel.de
|
||||||
|
* DreamHost: objects.dreamhost.com
|
||||||
|
* dinCloud - Chicago: d3-ord.dincloud.com
|
||||||
|
* dinCloud - Los Angeles: d3-lax.dincloud.com
|
||||||
|
* IBM COS (S3) Public US: s3-api.us-geo.objectstorage.softlayer.net
|
||||||
|
* Wasabi Hot Storage: s3.wasabisys.com
|
||||||
|
* `--s3-location-constraint`
|
||||||
|
This option is only used when creating new buckets. Use this option to change what region the data is stored in.
|
||||||
|
Amazon charges slightly more for non-US buckets. Known bucket locations:
|
||||||
|
* (default):
|
||||||
|
* Europe (EU): EU
|
||||||
|
* Europe (EU, Frankfurt): eu-central-1
|
||||||
|
* Europe (EU, Ireland): eu-west-1
|
||||||
|
* Europe (EU, London): eu-west-2
|
||||||
|
* US East (Northern Virginia): us-east-1
|
||||||
|
* US East (Ohio): us-east-2
|
||||||
|
* US West (Northern California): us-west-1
|
||||||
|
* US West (Oregon): us-west-2
|
||||||
|
* Canada (Central): ca-central-1
|
||||||
|
* Asia Pacific (Mumbai): ap-south-1
|
||||||
|
* Asia Pacific (Singapore): ap-southeast-1
|
||||||
|
* Asia Pacific (Sydney): ap-southeast-2
|
||||||
|
* Asia Pacific (Tokyo): ap-northeast-1
|
||||||
|
* Asia Pacific (Seoul): ap-northeast-2
|
||||||
|
* South America (São Paulo): sa-east-1
|
||||||
|
* `--use-ssl`
|
||||||
|
Use this flag to communicate using Secure Socket Layer (SSL) over http (https). Note that bucket names containing a period can cause problems with SSL connections.
|
||||||
|
* `--auth-password`
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username`
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--s3-ext-forcepathstyle = False`
|
||||||
|
Extended option ForcePathStyle
|
||||||
|
* `--s3-ext-useaccelerateendpoint = False`
|
||||||
|
Extended option UseAccelerateEndpoint
|
||||||
|
* `--s3-ext-signaturemethod = HmacSHA256`
|
||||||
|
Extended option SignatureMethod
|
||||||
|
* `--s3-ext-signatureversion = 4`
|
||||||
|
Extended option SignatureVersion
|
||||||
|
* `--s3-ext-serviceurl`
|
||||||
|
Extended option ServiceURL
|
||||||
|
* `--s3-ext-usehttp = False`
|
||||||
|
Extended option UseHttp
|
||||||
|
* `--s3-ext-authenticationregion`
|
||||||
|
Extended option AuthenticationRegion
|
||||||
|
* `--s3-ext-authenticationservicename = s3`
|
||||||
|
Extended option AuthenticationServiceName
|
||||||
|
* `--s3-ext-maxerrorretry = 4`
|
||||||
|
Extended option MaxErrorRetry
|
||||||
|
* `--s3-ext-logresponse = False`
|
||||||
|
Extended option LogResponse
|
||||||
|
* `--s3-ext-readentireresponse = False`
|
||||||
|
Extended option ReadEntireResponse
|
||||||
|
* `--s3-ext-buffersize = 8192`
|
||||||
|
Extended option BufferSize
|
||||||
|
* `--s3-ext-progressupdateinterval = 102400`
|
||||||
|
Extended option ProgressUpdateInterval
|
||||||
|
* `--s3-ext-resignretries = False`
|
||||||
|
Extended option ResignRetries
|
||||||
|
* `--s3-ext-allowautoredirect = False`
|
||||||
|
Extended option AllowAutoRedirect
|
||||||
|
* `--s3-ext-logmetrics = False`
|
||||||
|
Extended option LogMetrics
|
||||||
|
* `--s3-ext-disablelogging = False`
|
||||||
|
Extended option DisableLogging
|
||||||
|
* `--s3-ext-usedualstackendpoint = False`
|
||||||
|
Extended option UseDualstackEndpoint
|
||||||
|
* `--s3-ext-throttleretries = True`
|
||||||
|
Extended option ThrottleRetries
|
||||||
|
* `--s3-ext-proxyhost`
|
||||||
|
Extended option ProxyHost
|
||||||
|
* `--s3-ext-proxyport = 0`
|
||||||
|
Extended option ProxyPort
|
||||||
|
* `--s3-ext-proxybypassonlocal = False`
|
||||||
|
Extended option ProxyBypassOnLocal
|
||||||
|
* `--s3-ext-maxidletime = 50000`
|
||||||
|
Extended option MaxIdleTime
|
||||||
|
* `--s3-ext-connectionlimit = 50`
|
||||||
|
Extended option ConnectionLimit
|
||||||
|
* `--s3-ext-usenaglealgorithm = False`
|
||||||
|
Extended option UseNagleAlgorithm
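As a sketch, with placeholder bucket, prefix and credentials, a backup to an S3-compatible provider such as Wasabi could combine the options above like this:

`Duplicati.CommandLine.exe backup "s3://mybucket/duplicati" "C:\Data" --aws_access_key_id=MYACCESSKEY --aws_secret_access_key=MYSECRETKEY --s3-server-name=s3.wasabisys.com --use-ssl`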
|
||||||
|
|
||||||
|
## SFTP (SSH)
|
||||||
|
|
||||||
|
Duplicati can use SSH servers to store backups. The following target URL formats can be used:
|
||||||
|
|
||||||
|
`ssh://hostname/folder`
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--auth-password`
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username`
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--ssh-fingerprint`
|
||||||
|
The server fingerprint used for validation of the server identity. The format is e.g. `ssh-rsa 4096 11:22:33:44:55:66:77:88:99:00:11:22:33:44:55:66`. A usage example is shown at the end of this section.
|
||||||
|
* `--ssh-accept-any-fingerprints`
|
||||||
|
To guard against man-in-the-middle attacks, the server fingerprint is verified on connection. Use this option to disable host-key fingerprint verification. You should only use this option for testing.
|
||||||
|
* `--ssh-keyfile`
|
||||||
|
Points to a valid OpenSSH keyfile. If the file is encrypted, the password supplied is used to decrypt the keyfile.
|
||||||
|
If this option is supplied, the password is not used to authenticate. This option only works when using the managed SSH client.
|
||||||
|
* `--ssh-key`
|
||||||
|
An url-encoded SSH private key. The private key must be prefixed with `sshkey://`. If the file is encrypted, the password supplied is used to decrypt the keyfile. If this option is supplied, the password is not used to authenticate. This option only works when using the managed SSH client.
|
||||||
|
* `--ssh-operation-timeout = 0`
|
||||||
|
Use this option to manage the internal timeout for SSH operations. If this option is set to zero, the operations will not time out.
|
||||||
|
* `--ssh-keepalive = 0`
|
||||||
|
This option can be used to enable the keep-alive interval for the SSH connection. If the connection is idle, aggressive firewalls might close the connection. Using keep-alive will keep the connection open in this scenario. If this value is set to zero, the keep-alive is disabled.
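A minimal sketch of a key-based SFTP backup, with hostname, paths and fingerprint as placeholders:

`Duplicati.CommandLine.exe backup "ssh://backup.example.com/duplicati" "C:\Data" --auth-username=backupuser --ssh-keyfile="C:\Users\User\.ssh\id_rsa" --ssh-fingerprint="ssh-rsa 2048 11:22:33:44:55:66:77:88:99:00:11:22:33:44:55:66"`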
|
||||||
|
|
||||||
|
## Amazon Cloud Drive
|
||||||
|
|
||||||
|
This backend can read and write data to Amazon Cloud Drive. Supported format is
|
||||||
|
|
||||||
|
`amzcd://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from https://duplicati-oauth-handler.appspot.com?type=amzcd
|
||||||
|
* `--amzcd-labels (String)`
|
||||||
|
The labels to set.
|
||||||
|
Use this option to set labels on the files and folders created
|
||||||
|
Default value: `duplicati,backup`
|
||||||
|
* `--amzcd-consistency-delay (Timespan)`
|
||||||
|
The consistency delay.
|
||||||
|
Amazon Cloud Drive needs a small delay for results to stay consistent.
|
||||||
|
Default value: `15s`
|
||||||
|
|
||||||
|
## Azure blob
|
||||||
|
|
||||||
|
This backend can read and write data to Azure blob storage. Allowed format:
|
||||||
|
|
||||||
|
`azure://bucketname`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--azure_account_name (String)`
|
||||||
|
The storage account name.
|
||||||
|
The Azure storage account name which can be obtained by clicking the "Manage Access Keys" button on the storage account dashboard.
|
||||||
|
* `--azure_access_key (Password)`
|
||||||
|
The access key.
|
||||||
|
The Azure access key which can be obtained by clicking the "Manage Access Keys" button on the storage account dashboard.
|
||||||
|
* `--azure_blob_container_name (String)`
|
||||||
|
The name of the storage container.
|
||||||
|
All files will be written to the container specified.
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
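A minimal sketch, using placeholder account, key and container names:

`Duplicati.CommandLine.exe backup "azure://mycontainer" "C:\Data" --azure_account_name=mystorageaccount --azure_access_key=MYACCESSKEY`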
|
||||||
|
|
||||||
|
## B2 Cloud Storage
|
||||||
|
|
||||||
|
This backend can read and write data to the Backblaze B2 Cloud Storage. Allowed format:
|
||||||
|
|
||||||
|
`b2://bucketname/prefix`
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--b2-accountid (String)`
|
||||||
|
The "B2 Cloud Storage Account ID".
|
||||||
|
The "B2 Cloud Storage Account ID" can be obtained after logging into your Backblaze account. It can also be supplied through the `auth-username` property.
|
||||||
|
Aliases: `--auth-username`
|
||||||
|
* `--b2-applicationkey (Password)`
|
||||||
|
The "B2 Cloud Storage Application Key".
|
||||||
|
The "B2 Cloud Storage Application Key" can be obtained after logging into your Backblaze account. It can also be supplied through the `auth-password` property.
|
||||||
|
Aliases: `--auth-password`
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--b2-create-bucket-type (String)`
|
||||||
|
The bucket type used when creating a bucket.
|
||||||
|
By default, a private bucket is created. Use this option to set the bucket type. Refer to the B2 documentation for allowed types.
|
||||||
|
Default value: `allPrivate`
|
||||||
|
* `--b2-page-size (Integer)`
|
||||||
|
The size of file-listing pages.
|
||||||
|
Use this option to set the page size for listing contents of B2 buckets. A lower number means less data, but can increase the number of Class C transactions on B2. Suggested values are between `100` and `1000`.
|
||||||
|
Default value: `500`
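A minimal sketch, using placeholder bucket name and credentials:

`Duplicati.CommandLine.exe backup "b2://mybucket/duplicati" "C:\Data" --b2-accountid=MYACCOUNTID --b2-applicationkey=MYAPPLICATIONKEY`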
|
||||||
|
|
||||||
|
## Box.com
|
||||||
|
|
||||||
|
This backend can read and write data to Box.com. Supported format is
|
||||||
|
|
||||||
|
`box://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from https://duplicati-oauth-handler.appspot.com?type=box.com
|
||||||
|
* `--box-delete-from-trash (Boolean)`
|
||||||
|
Force delete files.
|
||||||
|
After deleting a file, it may end up in the trash folder, where it will be deleted after a grace period. Use this option to force immediate removal of deleted files.
|
||||||
|
|
||||||
|
## Dropbox
|
||||||
|
|
||||||
|
This backend can read and write data to Dropbox. Supported format is
|
||||||
|
|
||||||
|
`dropbox://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from [https://duplicati-oauth-handler.appspot.com?type=dropbox](https://duplicati-oauth-handler.appspot.com?type=dropbox)
|
||||||
|
|
||||||
|
## Google Cloud Storage
|
||||||
|
|
||||||
|
This backend can read and write data to Google Cloud Storage. Supported format is
|
||||||
|
|
||||||
|
`googlecloudstore://bucket/folder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--gcs-location (String)`
|
||||||
|
Specifies location option for creating a bucket.
|
||||||
|
This option is only used when creating new buckets. Use this option to change what region the data is stored in.
|
||||||
|
Charges vary with bucket location. Known bucket locations:
|
||||||
|
* (default):
|
||||||
|
* Europe: EU
|
||||||
|
* United States: US
|
||||||
|
* Asia: ASIA
|
||||||
|
* Eastern Asia-Pacific: ASIA-EAST1
|
||||||
|
* Central United States 1: US-CENTRAL1
|
||||||
|
* Central United States 2: US-CENTRAL2
|
||||||
|
* Eastern United States 1: US-EAST1
|
||||||
|
* Eastern United States 2: US-EAST2
|
||||||
|
* Eastern United States 3: US-EAST3
|
||||||
|
* Western United States: US-WEST1
|
||||||
|
* `--gcs-storage-class (String)`
|
||||||
|
Specifies storage class for creating a bucket.
|
||||||
|
This option is only used when creating new buckets. Use this option to change what storage type the bucket has.
|
||||||
|
Charges and functionality vary with bucket storage class. Known storage classes:
|
||||||
|
* (default):
* Standard: STANDARD
* Durable Reduced Availability (DRA): DURABLE_REDUCED_AVAILABILITY
* Nearline: NEARLINE
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from https://duplicati-oauth-handler.appspot.com?type=gcs
|
||||||
|
* `--gcs-project (String)`
|
||||||
|
Specifies project for creating a bucket.
|
||||||
|
This option is only used when creating new buckets. Use this option to supply the project ID that the bucket is attached to. The project determines where usage charges are applied.
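A minimal sketch, using placeholder bucket, project and AuthID values:

`Duplicati.CommandLine.exe backup "googlecloudstore://mybucket/backup" "C:\Data" --authid=MYAUTHID --gcs-project=my-project-id --gcs-location=EU`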
|
||||||
|
|
||||||
|
## Google Drive
|
||||||
|
|
||||||
|
This backend can read and write data to Google Drive. Supported format is
|
||||||
|
|
||||||
|
`googledrive://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from [https://duplicati-oauth-handler.appspot.com?type=googledrive](https://duplicati-oauth-handler.appspot.com?type=googledrive)
|
||||||
|
|
||||||
|
## HubiC
|
||||||
|
|
||||||
|
This backend can read and write data to HubiC. Supported format is
|
||||||
|
|
||||||
|
`hubic://container/folder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--authid (Password)`
|
||||||
|
The authorization code.
|
||||||
|
The authorization token retrieved from [https://duplicati-oauth-handler.appspot.com?type=hubic](https://duplicati-oauth-handler.appspot.com?type=hubic)
|
||||||
|
|
||||||
|
## Jottacloud
|
||||||
|
|
||||||
|
This backend can read and write data to Jottacloud using its REST protocol. Allowed format is
|
||||||
|
|
||||||
|
`jottacloud://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--jottacloud-device (String)`
|
||||||
|
Supplies the backup device to use.
|
||||||
|
The backup device to use. It will be created if it does not already exist. You can manage your devices from the backup panel in the Jottacloud web interface. When you specify a custom device, you should also specify the mount point to use on this device with the `jottacloud-mountpoint` option.
|
||||||
|
* `--jottacloud-mountpoint (String)`
|
||||||
|
Supplies the mount point to use on the server.
|
||||||
|
The mount point to use on the server. The default is `Archive`, which uses the built-in archive mount point. Set this option to `Sync` to use the built-in synchronization mount point instead. If you have specified a custom device with the option `jottacloud-device`, you are free to name the mount point as you like.
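For example, with placeholder credentials, device, mount point and folder names, a custom device and mount point could be combined like this:

`Duplicati.CommandLine.exe backup "jottacloud://backups/mypc" "C:\Data" --auth-username=myuser --auth-password=mypassword --jottacloud-device=MyPC --jottacloud-mountpoint=Duplicati`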
|
||||||
|
|
||||||
|
## Mega.nz
|
||||||
|
|
||||||
|
This backend can read and write data to Mega.co.nz. Allowed format:
|
||||||
|
|
||||||
|
`mega://folder/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
|
||||||
|
## Microsoft OneDrive
|
||||||
|
|
||||||
|
Duplicati can use Microsoft OneDrive to store backups. The following target URL format is used:
|
||||||
|
|
||||||
|
`onedrive://folder/subfolder`
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--authid`
|
||||||
|
The authorization token retrieved from [https://duplicati-oauth-handler.appspot.com?type=onedrive](https://duplicati-oauth-handler.appspot.com?type=onedrive)
|
||||||
|
|
||||||
|
## Microsoft OneDrive for Business
|
||||||
|
|
||||||
|
Supports connections to Microsoft OneDrive for Business. Allowed formats are
|
||||||
|
|
||||||
|
`od4b://tenant.sharepoint.com/personal/username_domain/Documents/subfolder`
|
||||||
|
|
||||||
|
or
|
||||||
|
|
||||||
|
`od4b://username:password@tenant.sharepoint.com/personal/username_domain/Documents/folder`
|
||||||
|
|
||||||
|
You can use a double slash '//' in the path to denote the base path from the documents folder.
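For example, the following hypothetical URL uses the double slash to mark where the documents library starts, storing the backup in a `Backup/Work` subfolder:

`od4b://tenant.sharepoint.com/personal/username_domain//Documents/Backup/Work`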
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--integrated-authentication (Boolean)`
|
||||||
|
Use windows integrated authentication to connect to the server.
|
||||||
|
If the server and client both support integrated authentication, this option enables that authentication method.
|
||||||
|
This is likely only available with windows servers and clients.
|
||||||
|
* `--delete-to-recycler (Boolean)`
|
||||||
|
Move deleted files to the recycle bin.
|
||||||
|
Use this option to have files moved to the recycle bin folder instead of removing them permanently when compacting or deleting backups.
|
||||||
|
* `--binary-direct-mode (Boolean)`
|
||||||
|
Upload files using binary direct mode.
|
||||||
|
Use this option to upload files to SharePoint as a whole with BinaryDirect mode. This is the most efficient way of uploading, but can cause non-recoverable timeouts under certain conditions. Use this option only with very fast and stable internet connections.
|
||||||
|
Default value: `false`
|
||||||
|
* `--web-timeout (Timespan)`
|
||||||
|
Set timeout for SharePoint web operations.
|
||||||
|
Use this option to specify a custom value for timeouts of web operation when communicating with SharePoint Server.
|
||||||
|
Recommended value is `180s`.
|
||||||
|
* `--chunk-size (Size)`
|
||||||
|
Set block size for chunked uploads to SharePoint.
|
||||||
|
Use this option to specify the size of each chunk when uploading to SharePoint Server. Recommended value is 4MB.
|
||||||
|
Default value: `4mb`
|
||||||
|
|
||||||
|
## Microsoft SharePoint
|
||||||
|
|
||||||
|
Supports connections to a SharePoint server (including OneDrive for Business). Allowed formats are
|
||||||
|
|
||||||
|
`mssp://tenant.sharepoint.com/PathToWeb//BaseDocLibrary/subfolder`
|
||||||
|
|
||||||
|
or
|
||||||
|
|
||||||
|
`mssp://username:password@tenant.sharepoint.com/PathToWeb//BaseDocLibrary/subfolder`
|
||||||
|
|
||||||
|
Use a double slash '//' in the path to denote the web from the documents library.
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--integrated-authentication (Boolean)`
|
||||||
|
Use windows integrated authentication to connect to the server.
|
||||||
|
If the server and client both support integrated authentication, this option enables that authentication method.
|
||||||
|
This is likely only available with windows servers and clients.
|
||||||
|
* `--delete-to-recycler (Boolean)`
|
||||||
|
Move deleted files to the recycle bin.
|
||||||
|
Use this option to have files moved to the recycle bin folder instead of removing them permanently when compacting or deleting backups.
|
||||||
|
* `--binary-direct-mode (Boolean)`
|
||||||
|
Upload files using binary direct mode.
|
||||||
|
Use this option to upload files to SharePoint as a whole with BinaryDirect mode. This is the most efficient way of uploading, but can cause non-recoverable timeouts under certain conditions. Use this option only with very fast and stable internet connections.
|
||||||
|
Default value: `false`
|
||||||
|
* `--web-timeout (Timespan)`
|
||||||
|
Set timeout for SharePoint web operations.
|
||||||
|
Use this option to specify a custom value for timeouts of web operation when communicating with SharePoint Server.
|
||||||
|
Recommended value is `180s`.
|
||||||
|
* `--chunk-size (Size)`
|
||||||
|
Set block size for chunked uploads to SharePoint.
|
||||||
|
Use this option to specify the size of each chunk when uploading to SharePoint Server. Recommended value is `4MB`.
|
||||||
|
Default value: `4mb`
|
||||||
|
|
||||||
|
## Rackspace Cloudfiles
|
||||||
|
|
||||||
|
Supports connections to the CloudFiles backend. Allowed format is
|
||||||
|
|
||||||
|
`cloudfiles://container/folder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--auth-password (Password)`
|
||||||
|
Supplies the password used to connect to the server.
|
||||||
|
The password used to connect to the server. This may also be supplied as the environment variable `AUTH_PASSWORD`.
|
||||||
|
* `--auth-username (String)`
|
||||||
|
Supplies the username used to connect to the server.
|
||||||
|
The username used to connect to the server. This may also be supplied as the environment variable `AUTH_USERNAME`.
|
||||||
|
* `--cloudfiles-username (String)`
|
||||||
|
Supplies the username used to authenticate with CloudFiles.
|
||||||
|
Aliases: `--auth-username`
|
||||||
|
* `--cloudfiles-accesskey (Password)`
|
||||||
|
Supplies the access key used to connect to the server.
|
||||||
|
Supplies the API Access Key used to authenticate with CloudFiles.
|
||||||
|
Aliases: `--auth-password`
|
||||||
|
* `--cloudfiles-uk-account (Boolean)`
|
||||||
|
Use a UK account.
|
||||||
|
Duplicati will assume that the credentials given are for a US account, use this option if the account is a UK based account. Note that this is equivalent to setting `--cloudfiles-authentication-url=https://lon.auth.api.rackspacecloud.com/v1.0`.
|
||||||
|
* `--cloudfiles-authentication-url (String)`
|
||||||
|
Provide another authentication URL.
|
||||||
|
CloudFiles use different servers for authentication based on where the account resides, use this option to set an alternate authentication URL. This option overrides `--cloudfiles-uk-account`.
|
||||||
|
Default value: `https://identity.api.rackspacecloud.com/auth`
|
||||||
|
|
||||||
|
## Sia Decentralized Cloud
|
||||||
|
|
||||||
|
This backend can read and write data to Sia. Allowed format:
|
||||||
|
|
||||||
|
`sia://server/folder/BaseDocLibrary/subfolder`
|
||||||
|
|
||||||
|
Supported options:
|
||||||
|
|
||||||
|
* `--sia-targetpath (String)`
|
||||||
|
Backup path.
|
||||||
|
Target path, e.g. `/backup`.
|
||||||
|
Default value: `/backup`
|
||||||
|
* `--sia-password (Password)`
|
||||||
|
Sia password
|
||||||
|
* `--sia-redundancy (String)`
|
||||||
|
Minimum value is `3`.
|
||||||
|
Default value: `1.5`
|
||||||
|
|
||||||
|
## Tahoe-LAFS
|
||||||
|
|
||||||
|
Duplicati can use Tahoe-LAFS to store backups. The following target URL format is used:
|
||||||
|
|
||||||
|
`tahoe://hostname:port/uri/$DIRCAP`
|
||||||
|
|
||||||
|
Options:
|
||||||
|
|
||||||
|
* `--use-ssl`
|
||||||
|
Use this flag to communicate using Secure Socket Layer (SSL) over http (https).
|
||||||
|
|
||||||
|
|
||||||
|
|
docs/06-advanced-options.md
|
|||||||
|
|
||||||
|
For each Duplicati command you have to specify a number of arguments. When performing a backup, you must supply the location and credentials for the backup files and one or more source folders. Optionally, you can specify one or more advanced options. These options give you more control over how the command is executed. Additional features, such as reporting, can also be configured by supplying advanced options.
|
||||||
|
|
||||||
|
These additional options should be used with care; for normal operation, none of them should be required. Use this alphabetical list as a reference to find the advanced options that fit your needs.
|
||||||
|
|
||||||
|
## Core options
|
||||||
|
|
||||||
|
These options can be used to influence the behavior of the Duplicati backup engine.
|
||||||
|
|
||||||
|
### allow-full-removal
|
||||||
|
`--allow-full-removal = false`
|
||||||
|
By default, the last fileset cannot be removed. This is a safeguard to make sure that all remote data is not deleted by a configuration mistake. Use this flag to disable that protection, such that all filesets can be deleted.
|
||||||
|
|
||||||
|
### allow-missing-source
|
||||||
|
`--allow-missing-source = false`
|
||||||
|
Use this option to continue even if some source entries are missing.
|
||||||
|
|
||||||
|
### allow-passphrase
|
||||||
|
`--allow-passphrase-change = false`
|
||||||
|
Use this option to allow the passphrase to change, note that this option is not permitted for a backup or repair operation.
|
||||||
|
|
||||||
|
### allow-sleep
|
||||||
|
`--allow-sleep = false`
|
||||||
|
Allow system to enter sleep power modes for inactivity during backup/restore operations (Windows/OSX only).
|
||||||
|
|
||||||
|
### all-versions
|
||||||
|
`--all-versions = false`
|
||||||
|
When searching for files, only the most recent backup is searched. Use this option to show all previous versions too.
|
||||||
|
|
||||||
|
### asynchronous-upload-folder
|
||||||
|
`--asynchronous-upload-folder = C:\Users\User\AppData\Local\Temp\`
|
||||||
|
The pre-generated volumes will be placed into the temporary folder by default. This option can set a different folder for placing the temporary volumes. Despite the name, this also works for synchronous runs.
|
||||||
|
|
||||||
|
### asynchronous-upload-limit
|
||||||
|
`--asynchronous-upload-limit = 4`
|
||||||
|
When performing asynchronous uploads, Duplicati will create volumes that can be uploaded. To prevent Duplicati from generating too many volumes, this option limits the number of pending uploads. Set it to zero to disable the limit.
|
||||||
|
|
||||||
|
### auto-cleanup
|
||||||
|
`--auto-cleanup = false`
|
||||||
|
If a backup is interrupted there will likely be partial files present on the backend. Using this flag, Duplicati will automatically remove such files when encountered.
|
||||||
|
|
||||||
|
### auto-update
|
||||||
|
`--auto-update = false`
|
||||||
|
Set this option if you prefer to have the command-line version update itself automatically.
|
||||||
|
|
||||||
|
### auto-vacuum
|
||||||
|
`--auto-vacuum = false`
|
||||||
|
Some operations that manipulate the local database leave unused entries behind. These entries are not deleted from a hard drive until a VACUUM operation is run. This operation saves disk space in the long run but needs to temporarily create a copy of all valid entries in the database. Setting this to true will allow Duplicati to perform VACUUM operations at its discretion.
|
||||||
|
|
||||||
|
### backup-name
|
||||||
|
`--backup-name = Duplicati.CommandLine`
|
||||||
|
A display name that is attached to this backup. Can be used to identify the backup when sending mail or running scripts.
|
||||||
|
|
||||||
|
### backup-test-samples
|
||||||
|
`--backup-test-samples = 1`
|
||||||
|
After a backup is completed, some files are selected for verification on the remote backend. Use this option to change how many files are verified. If this value is set to `0`, or the option `--no-backend-verification` is set, no remote files are verified.
|
||||||
|
|
||||||
|
### block-hash-algorithm
|
||||||
|
`--block-hash-algorithm = SHA256`
|
||||||
|
This is a very advanced option! This option can be used to select a block hash algorithm with smaller or larger hash size, for performance or storage space reasons.
|
||||||
|
|
||||||
|
### blocksize
|
||||||
|
`--blocksize = 100kb`
|
||||||
|
The block size determines how files are fragmented. Choosing a large value will cause a larger overhead on file changes, choosing a small value will cause a large overhead on storage of file lists. Note that the value cannot be changed after remote files are created.
|
||||||
|
|
||||||
|
### changed-files
|
||||||
|
`--changed-files`
|
||||||
|
This option can be used to limit the scan to only files that are known to have changed. This is usually only activated in combination with a filesystem watcher that keeps track of file changes.
|
||||||
|
|
||||||
|
### check-filetime-only
|
||||||
|
`--check-filetime-only = false`
|
||||||
|
This flag instructs Duplicati to not look at metadata or filesize when deciding to scan a file for changes. Use this option if you have a large number of files and notice that the scanning takes a long time with unmodified files.
|
||||||
|
|
||||||
|
### compression-extension-file
|
||||||
|
`--compression-extension-file = C:\Program Files\Duplicati 2\default_compressed_extensions.txt`
|
||||||
|
This property can be used to point to a text file where each line contains a file extension that indicates a non-compressible file. Files that have an extension found in the file will not be compressed, but simply stored in the archive. The file format ignores any lines that do not start with a period, and considers a space to indicate the end of the extension. A default file is supplied, that also serves as an example. The default file is placed in `C:\Program Files\Duplicati 2\default_compressed_extensions.txt`.
|
||||||
|
|
||||||
|
### compression-module
|
||||||
|
`--compression-module = zip`
|
||||||
|
Duplicati supports pluggable compression modules. Use this option to select a module to use for compression. This is only applied when creating new volumes; when reading an existing file, the filename is used to select the compression module.
|
||||||
|
|
||||||
|
### control-files
|
||||||
|
`--control-files = false`
|
||||||
|
Use control files.
|
||||||
|
|
||||||
|
### dblock-size
|
||||||
|
`--dblock-size = 50mb`
|
||||||
|
This option can change the maximum size of dblock files. Changing the size can be useful if the backend has a limit on the size of each individual file.
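For example, assuming a destination that rejects files larger than 100MB, a smaller volume size such as `--dblock-size=90mb` could be used, while a fast local destination might benefit from a larger value such as `--dblock-size=200mb`, which results in fewer remote files.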
|
||||||
|
|
||||||
|
### dbpath
|
||||||
|
`--dbpath`
|
||||||
|
Path to the file containing the local cache of the remote file database.
|
||||||
|
|
||||||
|
### debug-output
|
||||||
|
`--debug-output = false`
|
||||||
|
Activating this option will make some error messages more verbose, which may help you track down a particular issue.
|
||||||
|
|
||||||
|
### debug-retry-errors
|
||||||
|
`--debug-retry-errors = false`
|
||||||
|
When an error occurs, Duplicati will silently retry, and only report the number of retries. Enable this option to have the error messages displayed when a retry is performed.
|
||||||
|
|
||||||
|
### default-filters
|
||||||
|
`--default-filters`
|
||||||
|
Exclude files that match the given filter sets. This option specifies which default filter sets should be used. Valid sets are `Windows`, `OSX`, `Linux` and `All`. If this parameter is set with no value, the set for the current operating system will be used.
|
||||||
|
|
||||||
|
### deleted-files
|
||||||
|
`--deleted-files`
|
||||||
|
This option can be used to supply a list of deleted files. This option will be ignored unless the option `--changed-files` is also set.
|
||||||
|
|
||||||
|
### disable-autocreate-folder
|
||||||
|
`--disable-autocreate-folder = false`
|
||||||
|
If Duplicati detects that the target folder is missing, it will create it automatically. Activate this option to prevent automatic folder creation.
|
||||||
|
|
||||||
|
### disable-filepath-cache
|
||||||
|
`--disable-filepath-cache = true`
|
||||||
|
This option can be used to reduce the memory footprint by not keeping paths and modification timestamps in memory.
|
||||||
|
|
||||||
|
### disable-filetime-check
|
||||||
|
`--disable-filetime-check = false`
|
||||||
|
The operating system keeps track of the last time a file was written. Using this information, Duplicati can quickly determine if the file has been modified. If some application deliberately modifies this information, Duplicati won't work correctly unless this flag is set.
|
||||||
|
|
||||||
|
### disable-module
|
||||||
|
`--disable-module`
|
||||||
|
Supply one or more module names, separated by commas to unload them.
|
||||||
|
|
||||||
|
### disable-piped-streaming
|
||||||
|
`--disable-piped-streaming = false`
|
||||||
|
Use this option to disable multithreaded handling of uploads and downloads. This handling can significantly speed up backend operations, depending on the hardware you're running on and the transfer rate of your backend.
|
||||||
|
|
||||||
|
### disable-streaming-transfers
|
||||||
|
`--disable-streaming-transfers = false`
|
||||||
|
Enabling this option will disallow usage of the streaming interface, which means that transfer progress bars will not show, and bandwidth throttle settings will be ignored.
|
||||||
|
|
||||||
|
### disable-synthetic-filelist
|
||||||
|
`--disable-synthetic-filelist = false`
|
||||||
|
If Duplicati detects that the previous backup did not complete, it will generate a filelist that is a merge of the last completed backup and the contents that were uploaded in the incomplete backup session.
|
||||||
|
|
||||||
|
### disable-time-tolerance
|
||||||
|
`--disable-time-tolerance = false`
|
||||||
|
When matching timestamps, Duplicati will adjust the times by a small fraction to ensure that minor time differences do not cause unexpected updates. If the option `--keep-time` is set to keep a week of backups, and the backup is made the same time each week, it is possible that the clock drifts slightly, such that full week has just passed, causing Duplicati to delete the older backup earlier than expected. To avoid this, Duplicati inserts a 1% tolerance (max 1 hour). Use this option to disable the tolerance, and use strict time checking.
|
||||||
|
|
||||||
|
### dont-compress-restore-paths
|
||||||
|
`--dont-compress-restore-paths = false`
|
||||||
|
When restoring a subset of a backup into a new folder, the shortest possible path is used to avoid generating deep paths with empty folders. Use this flag to skip this compression, such that the entire original folder structure is preserved, including upper-level empty folders.
|
||||||
|
|
||||||
|
### dont-read-manifests
|
||||||
|
`--dont-read-manifests = false`
|
||||||
|
This option will make sure the contents of the manifest file are not read. This also implies that file hashes are not checked either. Use only for disaster recovery.
|
||||||
|
|
||||||
|
### dry-run
|
||||||
|
`--dry-run = false`
|
||||||
|
This option can be used to experiment with different settings and observe the outcome without changing actual files.
|
||||||
|
|
||||||
|
### enable-module
|
||||||
|
`--enable-module`
|
||||||
|
Supply one or more module names, separated by commas to load them.
|
||||||
|
|
||||||
|
### encryption-module
|
||||||
|
`--encryption-module = aes`
|
||||||
|
Duplicati supports pluggable encryption modules. Use this option to select a module to use for encryption. This is only applied when creating new volumes; when reading an existing file, the filename is used to select the encryption module.
|
||||||
|
|
||||||
|
### exclude
|
||||||
|
`--exclude`
|
||||||
|
Exclude files that match this filter. The special character `*` means any number of characters, and the special character `?` means any single character; use `*.txt` to exclude all files with a txt extension. Regular expressions are also supported and can be supplied by using hard braces, i.e. `[.*\.txt]`.
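As a sketch with placeholder paths, the following command excludes temporary files and a cache folder using both a wildcard and a regular expression filter:

`Duplicati.CommandLine.exe backup "file://D:\Backup" "C:\Data" --exclude="*.tmp" --exclude="[.*\\Cache\\.*]"`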
|
||||||
|
|
||||||
|
### exclude-files-attributes
|
||||||
|
`--exclude-files-attributes`
|
||||||
|
Use this option to exclude files with certain attributes. Use a comma separated list of attribute names to specify more than one. Possible values are: `ReadOnly`, `Hidden`, `System`, `Directory`, `Archive`, `Device`, `Normal`, `Temporary`, `SparseFile`, `ReparsePoint`, `Compressed`, `Offline`, `NotContentIndexed`, `Encrypted`, `IntegrityStream`, `NoScrubData`.
|
||||||
|
|
||||||
|
### file-hash-algorithm
|
||||||
|
`--file-hash-algorithm = SHA256`
|
||||||
|
This is a very advanced option! This option can be used to select a file hash algorithm with smaller or larger hash size, for performance or storage space reasons.
|
||||||
|
|
||||||
|
### file-read-buffer-size
|
||||||
|
`--file-read-buffer-size = 0kb`
|
||||||
|
Use this option to control how many bytes are read from a file before processing.
|
||||||
|
|
||||||
|
### force-locale
|
||||||
|
`--force-locale`
|
||||||
|
By default, your system locale and culture settings will be used. In some cases you may prefer to run with another locale, for example to get messages in another language. This option can be used to set the locale. Supply a blank string to choose the "Invariant Culture".
|
||||||
|
|
||||||
|
### full-block-verification
|
||||||
|
`--full-block-verification = false`
|
||||||
|
Use this option to increase verification by checking the hash of blocks read from a volume before patching restored files with the data.
|
||||||
|
|
||||||
|
### full-remote-verification
|
||||||
|
`--full-remote-verification = false`
|
||||||
|
After a backup is completed, some files are selected for verification on the remote backend. Use this option to turn on full verification, which will decrypt the files and examine the contents of each volume, instead of simply verifying the external hash. If the option `--no-backend-verification` is set, no remote files are verified. This option is automatically set when the verification is performed directly.
|
||||||
|
|
||||||
|
### full-result
|
||||||
|
`--full-result = false`
|
||||||
|
Use this option to increase the amount of output generated as the result of the operation, including all filenames.
|
||||||
|
|
||||||
|
### hardlink-policy
|
||||||
|
`--hardlink-policy = All`
|
||||||
|
Use this option to handle hardlinks (only works on Linux/OSX). The `first` option will record a hardlink ID for each hardlink to avoid storing hardlinked paths multiple times. The option `all` will ignore hardlink information, and treat each hardlink as a unique path. The option `none` will ignore all hardlinks with more than one link.
|
||||||
|
|
||||||
|
### include
|
||||||
|
`--include`
|
||||||
|
Include files that match this filter. The special character `*` means any number of characters, and the special character `?` means any single character; use `*.txt` to include all files with a txt extension. Regular expressions are also supported and can be supplied by using hard braces, i.e. `[.*\.txt]`.
|
||||||
|
|
||||||
|
### index-file-policy
|
||||||
|
`--index-file-policy = Full`
|
||||||
|
The index files are used to limit the need for downloading dblock files when there is no local database present.
|
||||||
|
The more information is recorded in the index files, the faster operations can proceed without the database. The tradeoff is that larger index files take up more remote space, and they may never be used.
|
||||||
|
|
||||||
|
### keep-time
|
||||||
|
`--keep-time`
|
||||||
|
Use this option to set the timespan in which backups are kept.
|
||||||
|
|
||||||
|
### keep-versions
|
||||||
|
`--keep-versions = 0`
|
||||||
|
Use this option to set number of versions to keep, supply `-1` to keep all versions.
|
||||||
|
|
||||||
|
### list-folder-contents
|
||||||
|
`--list-folder-contents = false`
|
||||||
|
When searching for files, all matching files are returned. Use this option to return only the entries found in the folder specified as filter.
|
||||||
|
|
||||||
|
### list-prefix-only
|
||||||
|
`--list-prefix-only = false`
|
||||||
|
When searching for files, all matching files are returned. Use this option to return only the largest common prefix path.
|
||||||
|
|
||||||
|
### list-sets-only
|
||||||
|
`--list-sets-only = false`
|
||||||
|
Use this option to only list filesets and avoid traversing file names and other metadata which slows down the process.
|
||||||
|
|
||||||
|
### list-verify-uploads
|
||||||
|
`--list-verify-uploads = false`
|
||||||
|
Verify uploads by listing contents.
|
||||||
|
|
||||||
|
### log-file
|
||||||
|
`--log-file`
|
||||||
|
Log internal information.
|
||||||
|
|
||||||
|
### log-level
|
||||||
|
`--log-level = Warning`
|
||||||
|
Specifies the amount of log information to write into the file specified by `--log-file`.
|
||||||
|
|
||||||
|
### log-retention
|
||||||
|
`--log-retention = 30D`
|
||||||
|
Set the time after which log data will be purged from the database.
|
||||||
|
|
||||||
|
### no-auto-compact
|
||||||
|
`--no-auto-compact = false`
|
||||||
|
If a large number of small files are detected during a backup, or wasted space is found after deleting backups, the remote data will be compacted. Use this option to disable such automatic compacting and only compact when running the compact command.
|
||||||
|
|
||||||
|
### no-backend-verification
|
||||||
|
`--no-backend-verification = false`
|
||||||
|
If this flag is set, the local database is not compared to the remote filelist on startup. The intended usage for this option is to work correctly in cases where the filelisting is broken or unavailable.
|
||||||
|
|
||||||
|
### no-connection-reuse
|
||||||
|
`--no-connection-reuse = false`
|
||||||
|
Duplicati will attempt to perform multiple operations on a single connection, as this avoids repeated login attempts, and thus speeds up the process. This option can be used to ensure that each operation is performed on a separate connection.
|
||||||
|
|
||||||
|
### no-encryption
|
||||||
|
`--no-encryption = false`
|
||||||
|
If you store the backups on a local disk, and prefer that they are kept unencrypted, you can turn off encryption completely by using this switch.
|
||||||
|
|
||||||
|
### no-local-blocks
|
||||||
|
`--no-local-blocks = false`
|
||||||
|
Duplicati will attempt to use data from source files to minimize the amount of downloaded data. Use this option to skip this optimization and only use remote data.
|
||||||
|
|
||||||
|
### no-local-db
|
||||||
|
`--no-local-db = false`
|
||||||
|
When listing contents or when restoring files, the local database can be skipped. This is usually slower, but can be used to verify the actual contents of the remote store.
|
||||||
|
|
||||||
|
### number-of-retries
|
||||||
|
`--number-of-retries = 5`
|
||||||
|
If an upload or download fails, Duplicati will retry a number of times before failing. Use this to handle unstable network connections better.
|
||||||
|
|
||||||
|
### overwrite
|
||||||
|
`--overwrite = false`
|
||||||
|
Use this option to overwrite target files when restoring, if this option is not set the files will be restored with a timestamp and a number appended.
|
||||||
|
|
||||||
|
### parameters-file
|
||||||
|
`--parameters-file`
|
||||||
|
This option can be used to store some or all of the options given to the commandline client. The file must be a plain text file; UTF-8 encoding is preferred. Each line in the file should be of the format `--option=value`. The special options `--source` and `--target` can be used to override the local path and the remote destination URI, respectively. The options in this file take precedence over the options provided on the commandline. You cannot specify filters in both the file and on the commandline. Instead, you can use the special `--replace-filter`, `--append-filter`, or `--prepend-filter` options to specify filters inside the parameter file. Each filter must be prefixed with either a `+` or a `-`, and multiple filters must be joined with `;`.
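A minimal sketch of such a parameters file, with all values as placeholders, which could then be supplied with `--parameters-file=C:\duplicati\options.txt`:

`--dblock-size=100mb`

`--compression-module=zip`

`--append-filter=-*.tmp`

`--source=C:\Data`

`--target=file://D:\Backup`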
|
||||||
|
|
||||||
|
### passphrase
|
||||||
|
`--passphrase`
|
||||||
|
Supply a passphrase that Duplicati will use to encrypt the backup volumes, making them unreadable without the passphrase. This variable can also be supplied through the environment variable `PASSPHRASE`.
|
||||||
|
|
||||||
|
### patch-with-local-blocks
|
||||||
|
`--patch-with-local-blocks = false`
|
||||||
|
Enable this option to look into other files on this machine to find existing blocks. This is a fairly slow operation but can limit the size of downloads.
|
||||||
|
|
||||||
|
### prefix
|
||||||
|
`--prefix = duplicati`
|
||||||
|
A string used to prefix the filenames of the remote volumes; it can be used to store multiple backups in the same remote folder. The prefix cannot contain a hyphen (`-`), but can contain all other characters allowed by the remote storage.
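For example, two backup jobs could share one remote folder by giving each a distinct prefix, such as `--prefix=workpc` for one job and `--prefix=homepc` for the other; the resulting remote files can then be told apart by their filename prefix.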
|
||||||
|
|
||||||
|
### quiet-console
|
||||||
|
`--quiet-console = false`
|
||||||
|
If this option is set, progress reports and other messages that would normally go to the console will be redirected to the log.
|
||||||
|
|
||||||
|
### quota-size
|
||||||
|
`--quota-size`
|
||||||
|
This value can be used to set a known upper limit on the amount of space a backend has. If the backend reports the size itself, this value is ignored.
|
||||||
|
|
||||||
|
### repair-only-paths
|
||||||
|
`--repair-only-paths = false`
|
||||||
|
Use this option to build a searchable local database which only contains path information. This option is usable for quickly building a database to locate certain content without needing to reconstruct all information. The resulting database can be searched, but cannot be used to restore data with.
|
||||||
|
|
||||||
|
### restore-path
|
||||||
|
`--restore-path`
|
||||||
|
By default, files will be restored in the source folders, use this option to restore to another folder.
|
||||||
|
|
||||||
|
### restore-permissions
|
||||||
|
`--restore-permissions = false`
|
||||||
|
By default permissions are not restored as they might prevent you from accessing your files. Use this option to restore the permissions as well.
|
||||||
|
|
||||||
|
### retention-policy
|
||||||
|
`--retention-policy`
|
||||||
|
Use this option to reduce the number of versions that are kept with increasing version age, by deleting most of the old backups. The expected format is a comma-separated list of colon-separated time frame and interval pairs. For example, the value `7D:0s,3M:1D,10Y:2M` means "For 7 days keep all backups, for 3 months keep one backup per day, and for 10 years keep one backup every 2nd month".
|
||||||
|
|
||||||
|
### retry-delay
|
||||||
|
`--retry-delay = 10s`
|
||||||
|
After a failed transmission, Duplicati will wait a short period before attempting again. This is useful if the network drops out occasionally during transmissions.
|
||||||
|
|
||||||
|
### skip-file-hash-checks
|
||||||
|
`--skip-file-hash-checks = false`
|
||||||
|
If the hash for the volume does not match, Duplicati will refuse to use the backup. Supply this flag to allow Duplicati to proceed anyway.
|
||||||
|
|
||||||
|
### skip-files-larger-than
|
||||||
|
`--skip-files-larger-than`
|
||||||
|
This option allows you to exclude files that are larger than the given value. Use this to prevent backups becoming extremely large.
|
||||||
|
|
||||||
|
### skip-metadata
|
||||||
|
`--skip-metadata = false`
|
||||||
|
Use this option to disable the storage of metadata, such as file timestamps. Disabling metadata storage will speed up the backup and restore operations, but does not affect file size much.
|
||||||
|
|
||||||
|
### skip-restore-verification
|
||||||
|
`--skip-restore-verification = false`
|
||||||
|
After restoring files, the file hashes of all restored files are checked to verify that the restore was successful.
|
||||||
|
Use this option to disable the check and avoid waiting for the verification.
|
||||||
|
|
||||||
|
### small-file-max-count
|
||||||
|
`--small-file-max-count = 20`
|
||||||
|
To avoid filling the remote storage with small files, this value can force grouping small files. The small volumes will always be combined when they can fill an entire volume.
|
||||||
|
|
||||||
|
### small-file-size
|
||||||
|
`--small-file-size`
|
||||||
|
When examining the size of a volume in consideration for compacting, a small tolerance value is used, by default 20 percent of the volume size. This ensures that large volumes which may have a few bytes wasted space are not downloaded and rewritten.
|
||||||
|
|
||||||
|
### snapshot-policy
|
||||||
|
`--snapshot-policy = off`
|
||||||
|
This setting controls the usage of snapshots, which allows Duplicati to backup files that are locked by other programs. If this is set to `off`, Duplicati will not attempt to create a disk snapshot. Setting this to `auto` makes Duplicati attempt to create a snapshot, and fail silently if that was not allowed or supported. A setting of `on` will also make Duplicati attempt to create a snapshot, but will produce a warning message in the log if it fails. Setting it to `required` will make Duplicati abort the backup if the snapshot creation fails. On Windows this uses the Volume Shadow Copy Services (VSS) and requires administrative privileges. On Linux this uses Logical Volume Management (LVM) and requires root privileges.
|
||||||
|
|
||||||
|
### store-metadata
|
||||||
|
`--store-metadata = true`
|
||||||
|
Stores metadata, such as file timestamps and attributes. This increases the required storage space as well as the processing time.
|
||||||
|
|
||||||
|
### symlink-policy
|
||||||
|
`--symlink-policy = Store`
|
||||||
|
Use this option to handle symlinks differently. The `store` option will simply record a symlink with its name and destination, and a restore will recreate the symlink as a link. Use the option `ignore` to ignore all symlinks and not store any information about them. Previous versions of Duplicati used the setting `follow`, which will cause symlinked files to be included and restore as normal files.
|
||||||
|
|
||||||
|
### synchronous-upload
|
||||||
|
`--synchronous-upload = false`
|
||||||
|
Duplicati will upload files while scanning the disk and producing volumes, which usually makes the backup faster.
|
||||||
|
Use this flag to turn the behavior off, so that Duplicati will wait for each volume to complete.
|
||||||
|
|
||||||
|
### tempdir
|
||||||
|
`--tempdir = C:\Users\User\AppData\Local\Temp\`
|
||||||
|
Duplicati will use the system default temporary folder. This option can be used to supply an alternative folder for temporary storage. Note that SQLite will always put temporary files in the system default temporary folder.
|
||||||
|
Consider using the `TMPDIR` environment variable on Linux to set the temporary folder for both Duplicati and SQLite.
|
||||||
|
|
||||||
|
### thread-priority
|
||||||
|
`--thread-priority = normal`
|
||||||
|
Selects another thread priority for the process. Use this to set Duplicati to be more or less CPU intensive.
|
||||||
|
|
||||||
|
### threshold
|
||||||
|
`--threshold = 25`
|
||||||
|
As files are changed, some data stored at the remote destination may not be required. This option controls how much wasted space the destination can contain before being reclaimed. This value is a percentage used on each volume and the total storage.
|
||||||
|
|
||||||
|
### throttle-download
|
||||||
|
`--throttle-download = 0kb`
|
||||||
|
By setting this value you can limit how much bandwidth Duplicati consumes for downloads. Setting this limit can make the backups take longer, but will make Duplicati less intrusive.
|
||||||
|
|
||||||
|
### throttle-upload
|
||||||
|
`--throttle-upload = 0kb`
|
||||||
|
By setting this value you can limit how much bandwidth Duplicati consumes for uploads. Setting this limit can make the backups take longer, but will make Duplicati less intrusive.
|
||||||
|
|
||||||
|
### time
|
||||||
|
`--time = now`
|
||||||
|
By default, Duplicati will list and restore files from the most recent backup, use this option to select another item. You may use relative times, like "-2M" for a backup from two months ago.
|
||||||
|
|
||||||
|
### upload-unchanged-backups
|
||||||
|
`--upload-unchanged-backups = false`
|
||||||
|
If no files have changed, Duplicati will not upload a backup set. If the backup data is used to verify that a backup was executed, this option will make Duplicati upload a backup set even if it is empty.
|
||||||
|
|
||||||
|
### upload-verification-file
|
||||||
|
`--upload-verification-file = false`
|
||||||
|
Use this option to upload a verification file after changing the remote storage. The file is not encrypted and contains the size and SHA256 hashes of all the remote files and can be used to verify the integrity of the files.
|
||||||
|
|
||||||
|
### use-block-cache
|
||||||
|
`--use-block-cache = false`
|
||||||
|
Store an in-memory block cache.
|
||||||
|
|
||||||
|
### usn-policy
|
||||||
|
`--usn-policy = off`
|
||||||
|
This setting controls the usage of NTFS USN numbers, which allows Duplicati to obtain a list of files and folders much faster. If this is set to `off`, Duplicati will not attempt to use USN. Setting this to `auto` makes Duplicati attempt to use USN, and fail silently if that was not allowed or supported. A setting of `on` will also make Duplicati attempt to use USN, but will produce a warning message in the log if it fails. Setting it to `required` will make Duplicati abort the backup if the USN usage fails. This feature is only supported on Windows and requires administrative privileges.
|
||||||
|
|
||||||
|
### verbose
|
||||||
|
`--verbose = false`
|
||||||
|
Use this option to increase the amount of output generated when running an operation. Generally, this option will produce a line for each file processed.
|
||||||
|
|
||||||
|
### version
|
||||||
|
`--version`
|
||||||
|
By default, Duplicati will list and restore files from the most recent backup, use this option to select another item. You may enter multiple values separated with comma, and ranges using `-`, e.g. `0,2-4,7`.
|
||||||
|
|
||||||
|
### vss-exclude-writers
|
||||||
|
`--vss-exclude-writers`
|
||||||
|
Use this option to exclude faulty writers from a snapshot. This is equivalent to the -wx flag of the vshadow.exe tool, except that it only accepts writer class GUIDs, and not component names or instance GUIDs. Multiple GUIDs must be separated with a semicolon, and most forms of GUIDs are allowed, including with and without curly braces.
|
||||||
|
|
||||||
|
### vss-use-mapping
|
||||||
|
`--vss-use-mapping = false`
|
||||||
|
Activate this option to map VSS snapshots to a drive (similar to SUBST, using Win32 DefineDosDevice). This will create temporary drives that are then used to access the contents of a snapshot. This workaround can speed up file access on Windows XP.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## HTTP options
|
||||||
|
|
||||||
|
These options can be used to change the way HTTP requests are issued. The module that provides these options is loaded automatically; use `--disable-module` to prevent this.
|
||||||
|
|
||||||
|
### disable-expect100-continue
|
||||||
|
`--disable-expect100-continue (Boolean)`
|
||||||
|
Disable the expect header.
|
||||||
|
The default HTTP request has the header "Expect: 100-Continue" attached, which allows some optimizations when authenticating, but also breaks some web servers, causing them to report "417 - Expectation failed".
|
||||||
|
Default value: `false`
|
||||||
|
|
||||||
|
### disable-nagling
|
||||||
|
`--disable-nagling (Boolean)`
|
||||||
|
Disable nagling.
|
||||||
|
By default, HTTP requests use the RFC 896 Nagle algorithm to transfer small packets more efficiently.
|
||||||
|
Default value: `false`
|
||||||
|
|
||||||
|
### accept-specified-ssl-hash
|
||||||
|
`--accept-specified-ssl-hash (String)`
|
||||||
|
Optionally accept a known SSL certificate.
|
||||||
|
If your server certificate is reported as invalid (eg. with self-signed certificates), you can supply the certificate hash to approve it anyway. The hash value must be entered in hex format without spaces. You can enter multiple hashes separated by commas.
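As a sketch, approving a single self-signed certificate could look like this; the hash below is a placeholder for the actual certificate hash reported in the error message:

```nohighlight
--accept-specified-ssl-hash=7B10FB44E50B5FB792A1D2CAFC66D5F88BD4E1D5
```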
|
||||||
|
|
||||||
|
### accept-any-ssl-certificate
|
||||||
|
`--accept-any-ssl-certificate (Boolean)`
|
||||||
|
Accept any server certificate.
|
||||||
|
Use this option to accept any server certificate, regardless of what errors it may have. Please use `--accept-specified-ssl-hash` instead, whenever possible.
|
||||||
|
|
||||||
|
### oauth-url
|
||||||
|
`--oauth-url (String)`
|
||||||
|
Alternate OAuth URL.
|
||||||
|
Duplicati uses an external server to support the OAuth authentication flow. If you have set up your own Duplicati OAuth server, you can supply the refresh url.
|
||||||
|
Default value: `https://duplicati-oauth-handler.appspot.com/refresh`
|
||||||
|
|
||||||
|
### allowed-ssl-versions
|
||||||
|
`--allowed-ssl-versions (Flags)`
|
||||||
|
Sets allowed SSL versions.
|
||||||
|
This option changes the default SSL versions allowed. This is an advanced option and should only be used if you want to enhance security or work around an issue with a particular SSL protocol.
|
||||||
|
Values: `Ssl3`, `Tls`, `Tls11`, `Tls12`, `SystemDefault`
|
||||||
|
Default value: `SystemDefault,Ssl3,Tls`
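For example, to restrict connections to TLS 1.1 and TLS 1.2 only, you could supply:

```nohighlight
--allowed-ssl-versions=Tls11,Tls12
```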
|
||||||
|
|
||||||
|
### http-operation-timeout
|
||||||
|
`--http-operation-timeout (Timespan)`
|
||||||
|
Sets the default operation timeout.
|
||||||
|
This option changes the default timeout for any HTTP request; the time covers the entire operation from initial packet to shutdown.
|
||||||
|
|
||||||
|
### http-readwrite-timeout
|
||||||
|
`--http-readwrite-timeout (Timespan)`
|
||||||
|
Sets the read-write timeout.
|
||||||
|
This option changes the default read-write timeout. Read-write timeouts are used to detect stalled requests, and this option configures the maximum time between activity on a connection.
|
||||||
|
|
||||||
|
### http-enable-buffering
|
||||||
|
`--http-enable-buffering (Boolean)`
|
||||||
|
Sets HTTP buffering.
|
||||||
|
This option sets the HTTP buffering. Setting this to "true" can cause memory leaks, but can also improve performance in some cases.
|
||||||
|
Default value: `false`
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Scripting options
|
||||||
|
|
||||||
|
With these options you can execute a script before starting an operation, and again on completion. The module is loaded automatically; use `--disable-module` to prevent this.
|
||||||
|
|
||||||
|
### run-script-before
|
||||||
|
`--run-script-before (Path)`
|
||||||
|
Run a script on startup.
|
||||||
|
Executes a script before performing an operation. The operation will block until the script has completed or timed out.
|
||||||
|
|
||||||
|
### run-script-after
|
||||||
|
`--run-script-after (Path)`
|
||||||
|
Run a script on exit.
|
||||||
|
Executes a script after performing an operation. The script will receive the operation results written to stdout.
|
||||||
|
|
||||||
|
### run-script-before-required
|
||||||
|
`--run-script-before-required (Path)`
|
||||||
|
Run a required script on startup.
|
||||||
|
Executes a script before performing an operation. The operation will block until the script has completed or timed out. If the script returns a non-zero error code or times out, the operation will be aborted.
|
||||||
|
|
||||||
|
### run-script-timeout
|
||||||
|
`--run-script-timeout (Timespan)`
|
||||||
|
Sets the script timeout.
|
||||||
|
Sets the maximum time a script is allowed to execute. If the script has not completed within this time, it will continue to execute but the operation will continue too, and no script output will be processed.
|
||||||
|
Default value: `60s`
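A minimal sketch combining the scripting options in a command line backup; the storage URL, source folder and script paths are placeholders for your own environment:

```nohighlight
Duplicati.CommandLine.exe backup "ftp://myftpserver.com/Backup/Database?auth-username=duplicati&auth-password=backup" "C:\Databases" --run-script-before-required="C:\Scripts\stop-database.bat" --run-script-after="C:\Scripts\start-database.bat" --run-script-timeout=5m
```

Because `--run-script-before-required` is used here, the backup is aborted if the stop script fails or times out.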
|
||||||
|
|
||||||
|
## Reporting options
|
||||||
|
|
||||||
|
These options provide support for sending status reports via HTTP, email and XMPP messages. The modules sendhttp, sendmail and sendxmpp are loaded automatically; use `--disable-module` to prevent this.
|
||||||
|
|
||||||
|
### send-http-url
|
||||||
|
`--send-http-url (String)`
|
||||||
|
HTTP report url.
|
||||||
|
|
||||||
|
### send-http-message
|
||||||
|
`--send-http-message (String)`
|
||||||
|
The message template.
|
||||||
|
This value can be a filename. If the file exists, the file contents will be used as the message. In the message, certain tokens are replaced:
|
||||||
|
|
||||||
|
* `%OPERATIONNAME%`
|
||||||
|
The name of the operation, normally `Backup`.
|
||||||
|
* `%REMOTEURL%`
|
||||||
|
Remote server url.
|
||||||
|
* `%LOCALPATH%`
|
||||||
|
The path to the local files or folders involved in the operation (if any).
|
||||||
|
* `%PARSEDRESULT%`
|
||||||
|
The parsed result, if the operation is a backup. Possible values are: `Error`, `Warning`, `Success`.
|
||||||
|
|
||||||
|
All command line options are also reported within `%value%`, e.g. `%volsize%`. Any unknown/unset value is removed.
|
||||||
|
Default value: `Duplicati %OPERATIONNAME% report for %backup-name%`, followed on a new line by `%RESULT%`.
|
||||||
|
|
||||||
|
### send-http-message-parameter-name
|
||||||
|
`--send-http-message-parameter-name (String)`
|
||||||
|
The name of the parameter to send the message as.
|
||||||
|
Default value: `message`
|
||||||
|
|
||||||
|
### send-http-extra-parameters
|
||||||
|
`--send-http-extra-parameters (String)`
|
||||||
|
Extra parameters to add to the HTTP message, e.g. `parameter1=value1&parameter2=value2`.
|
||||||
|
|
||||||
|
### send-http-level
|
||||||
|
`--send-http-level (Enumeration)`
|
||||||
|
The messages to send.
|
||||||
|
You can specify one of `Success`, `Warning`, `Error`, `Fatal`. You can supply multiple options with a comma separator, e.g. `Success,Warning`. The special value `All` is a shorthand for `Success,Warning,Error,Fatal` and will cause all backup operations to send a message.
|
||||||
|
Values: `Unknown`, `Success`, `Warning`, `Error`, `Fatal`, `All`
|
||||||
|
Default value: `all`
|
||||||
|
|
||||||
|
### send-http-any-operation
|
||||||
|
`--send-http-any-operation (Boolean)`
|
||||||
|
Send messages for all operations.
|
||||||
|
By default, messages will only be sent after a Backup operation. Use this option to send messages for all operations.
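Putting the HTTP reporting options together, a sketch of a configuration that posts a report for problematic runs of any operation could look like this; the URL is a placeholder for your own reporting endpoint:

```nohighlight
--send-http-url=https://reports.example.com/duplicati --send-http-level=Warning,Error,Fatal --send-http-any-operation=true
```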
|
||||||
|
|
||||||
|
### send-mail-to
|
||||||
|
`--send-mail-to (String)`
|
||||||
|
Email recipient(s).
|
||||||
|
This setting is required if mail should be sent, all other settings have default values. You can supply multiple email addresses separated with commas, and you can use the normal address format as specified by RFC2822 section 3.4.
|
||||||
|
Example with 3 recipients: `Peter Sample <peter@example.com>, John Sample <john@example.com>, admin@example.com`
|
||||||
|
|
||||||
|
### send-mail-from
|
||||||
|
`--send-mail-from (String)`
|
||||||
|
Email sender.
|
||||||
|
Address of the email sender. If no host is supplied, the hostname of the first recipient is used. Examples of allowed formats:
|
||||||
|
|
||||||
|
* `sender`
|
||||||
|
* `sender@example.com`
|
||||||
|
* `Mail Sender <sender>`
|
||||||
|
* `Mail Sender <sender@example.com>`
|
||||||
|
|
||||||
|
Default value: `no-reply`
|
||||||
|
|
||||||
|
### send-mail-subject
|
||||||
|
`--send-mail-subject (String)`
|
||||||
|
The email subject.
|
||||||
|
This setting supplies the email subject. Values are replaced as described in the description for `--send-mail-body`.
|
||||||
|
Default value: `Duplicati %OPERATIONNAME% report for %backup-name%`
|
||||||
|
|
||||||
|
### send-mail-body
|
||||||
|
`--send-mail-body (String)`
|
||||||
|
The message body.
|
||||||
|
This value can be a filename. If the file exists, the file contents will be used as the message body.
|
||||||
|
In the message body, certain tokens are replaced:
|
||||||
|
|
||||||
|
* `%OPERATIONNAME%`
|
||||||
|
The name of the operation, normally `Backup`.
|
||||||
|
* `%REMOTEURL%`
|
||||||
|
Remote server url.
|
||||||
|
* `%LOCALPATH%`
|
||||||
|
The path to the local files or folders involved in the operation (if any).
|
||||||
|
* `%PARSEDRESULT%`
|
||||||
|
The parsed result, if the operation is a backup. Possible values are: `Error`, `Warning`, `Success`.
|
||||||
|
|
||||||
|
All command line options are also reported within `%value%`, e.g. `%volsize%`. Any unknown/unset value is removed.
|
||||||
|
Default value: `%RESULT%`
|
||||||
|
|
||||||
|
### send-mail-url
|
||||||
|
`--send-mail-url (String)`
|
||||||
|
SMTP Url.
|
||||||
|
A url for the SMTP server, e.g. `smtp://example.com:25`. Multiple servers can be supplied in a prioritized list, separated with semicolons. If a server fails, the next server in the list is tried until the message has been sent.
|
||||||
|
If no server is supplied, a DNS lookup is performed to find the first recipient's MX record, and all SMTP servers are tried in their priority order until the message is sent.
|
||||||
|
To enable SMTP over SSL, use the format `smtps://example.com`. To enable SMTP STARTTLS, use the format `smtp://example.com:25/?starttls=when-available` or `smtp://example.com:25/?starttls=always`. If no port is specified, port 25 is used for non-SSL connections and 465 for SSL connections. To disable STARTTLS, use `smtp://example.com:25/?starttls=never`.
|
||||||
|
|
||||||
|
### send-mail-username
|
||||||
|
`--send-mail-username (String)`
|
||||||
|
SMTP Username.
|
||||||
|
The username used to authenticate with the SMTP server if required.
|
||||||
|
|
||||||
|
### send-mail-password
|
||||||
|
`--send-mail-password (String)`
|
||||||
|
SMTP Password.
|
||||||
|
The password used to authenticate with the SMTP server if required.
|
||||||
|
|
||||||
|
### send-mail-level
|
||||||
|
`--send-mail-level (String)`
|
||||||
|
The messages to send.
|
||||||
|
You can specify one of `Success`, `Warning`, `Error`, `Fatal`. You can supply multiple options with a comma separator, e.g. `Success,Warning`. The special value `All` is a shorthand for `Success,Warning,Error,Fatal` and will cause all backup operations to send an email.
|
||||||
|
Values: `Unknown`, `Success`, `Warning`, `Error`, `Fatal`, `All`
|
||||||
|
Default value: `all`
|
||||||
|
|
||||||
|
### send-mail-any-operation
|
||||||
|
`--send-mail-any-operation (Boolean)`
|
||||||
|
Send email for all operations.
|
||||||
|
By default, mail will only be sent after a Backup operation. Use this option to send mail for all operations.
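As an illustration, the mail options could be combined as follows to report only failed backups; the server, credentials and addresses are placeholders:

```nohighlight
--send-mail-to="admin@example.com" --send-mail-from="Duplicati <duplicati@example.com>" --send-mail-url="smtps://mail.example.com:465" --send-mail-username="duplicati@example.com" --send-mail-password="secret" --send-mail-level=Error,Fatal
```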
|
||||||
|
|
||||||
|
### send-xmpp-to
|
||||||
|
`--send-xmpp-to (String)`
|
||||||
|
XMPP recipient email.
|
||||||
|
The users who should have the messages sent, specify multiple users separated with commas.
|
||||||
|
|
||||||
|
### send-xmpp-message
|
||||||
|
`--send-xmpp-message (String)`
|
||||||
|
The message template.
|
||||||
|
This value can be a filename. If the file exists, the file contents will be used as the message.
|
||||||
|
In the message, certain tokens are replaced:
|
||||||
|
|
||||||
|
* `%OPERATIONNAME%`
|
||||||
|
The name of the operation, normally `Backup`.
|
||||||
|
* `%REMOTEURL%`
|
||||||
|
Remote server url.
|
||||||
|
* `%LOCALPATH%`
|
||||||
|
The path to the local files or folders involved in the operation (if any).
|
||||||
|
* `%PARSEDRESULT%`
|
||||||
|
The parsed result, if the operation is a backup. Possible values are: `Error`, `Warning`, `Success`.
|
||||||
|
|
||||||
|
All command line options are also reported within `%value%`, e.g. `%volsize%`. Any unknown/unset value is removed.
|
||||||
|
Default value: `Duplicati %OPERATIONNAME% report for %backup-name%`, followed on a new line by `%RESULT%`.
|
||||||
|
|
||||||
|
### send-xmpp-username
|
||||||
|
`--send-xmpp-username (String)`
|
||||||
|
The XMPP username.
|
||||||
|
The username for the account that will send the message, including the hostname, e.g. `account@jabber.org/Home`.
|
||||||
|
|
||||||
|
### send-xmpp-password
|
||||||
|
`--send-xmpp-password (String)`
|
||||||
|
The XMPP password.
|
||||||
|
The password for the account that will send the message.
|
||||||
|
|
||||||
|
### send-xmpp-level
|
||||||
|
`--send-xmpp-level (Enumeration)`
|
||||||
|
The messages to send.
|
||||||
|
You can specify one of `Success`, `Warning`, `Error`, `Fatal`.
|
||||||
|
You can supply multiple options with a comma separator, e.g. `Success,Warning`. The special value `All` is a shorthand for `Success,Warning,Error,Fatal` and will cause all backup operations to send a message.
|
||||||
|
Values: `Unknown`, `Success`, `Warning`, `Error`, `Fatal`, `All`
|
||||||
|
Default value: `all`
|
||||||
|
|
||||||
|
### send-xmpp-any-operation
|
||||||
|
`--send-xmpp-any-operation (Boolean)`
|
||||||
|
Send messages for all operations.
|
||||||
|
By default, messages will only be sent after a Backup operation. Use this option to send messages for all operations.
|
||||||
|
|
||||||
|
|
|
|||||||
|
|
||||||
|
A number of Command Line Utilities are included in the Duplicati package. Each Command Line Utility serves a particular purpose. Using the Command Line Utilities, you can back up and restore files without using the Graphical User Interface (from the command prompt or by using your favorite task scheduler), launch a server instance, register Duplicati as a Windows service and perform disaster recovery tasks.
|
||||||
|
|
||||||
|
All Command Line Utilities can be found in the Duplicati program folder.
|
||||||
|
|
||||||
|
## Duplicati.GUI.TrayIcon.exe
|
||||||
|
|
||||||
|
This utility starts the Duplicati tray icon. Without additional parameters specified, the included webserver is activated. The webserver listens on TCP port 8200 by default. If port 8200 is unavailable, port 8300 is tried, increasing until a free port is found. You can disable the internal webserver if you are using a separate instance of the Duplicati Server component.
|
||||||
|
|
||||||
|
These command line options are supported:
|
||||||
|
|
||||||
|
* `--help`
|
||||||
|
Displays the integrated help text.
|
||||||
|
* `--toolkit:`
|
||||||
|
Choose the toolkit used to generate the TrayIcon; note that it will fail if the selected toolkit is not supported on this machine.
|
||||||
|
Supported toolkits: `winforms`.
|
||||||
|
* `--hosturl`
|
||||||
|
Supply the url that the TrayIcon will connect to and show status for.
|
||||||
|
* `--no-hosted-server`
|
||||||
|
Set this option to not spawn a local service; use it if the TrayIcon should connect to a running service.
|
||||||
|
* `--read-config-from-db`
|
||||||
|
Set this option to read the server connection info for a running service from its database (only together with `--no-hosted-server`).
|
||||||
|
* `--browser-command`
|
||||||
|
Set this option to override the default browser detection.
|
||||||
|
* `--detached-process`
|
||||||
|
This option runs the tray-icon in detached mode, meaning that the process will exit immediately and not send output to the console of the caller.
|
||||||
|
|
||||||
|
Additionally, these server options are also supported:
|
||||||
|
|
||||||
|
* `--unencrypted-database`
|
||||||
|
* `--portable-mode`
|
||||||
|
* `--log-file`
|
||||||
|
* `--log-level`
|
||||||
|
* `--webservice-webroot`
|
||||||
|
* `--webservice-port`
|
||||||
|
* `--webservice-sslcertificatefile`
|
||||||
|
* `--webservice-sslcertificatepassword`
|
||||||
|
* `--webservice-interface`
|
||||||
|
* `--ping-pong-keepalive`
|
||||||
|
* `--log-retention`
|
||||||
|
* `--server-datafolder`
|
||||||
|
* `--server-encryption-key`
|
||||||
|
|
||||||
|
See [Duplicati.Server.exe](#_Duplicati.Server.exe) for more information about these command line options.
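For example, to start only the tray icon and let it attach to a Duplicati server that is already running on the same machine, a sketch could look like this; the port must match the one the running server actually listens on:

```nohighlight
Duplicati.GUI.TrayIcon.exe --no-hosted-server --hosturl=http://localhost:8200
```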
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Duplicati.Server.exe
|
||||||
|
|
||||||
|
Once started, the Duplicati Server component can perform a number of tasks in the background. A scheduler keeps track of configured backup jobs and starts them at scheduled points in time. Backup jobs can be configured and monitored using the built-in webserver. If no port is specified, the webserver listens on TCP port 8200\. If this port is unavailable, the webserver will try to start listening on port 8300 and so on, until an available port is found.
|
||||||
|
|
||||||
|
The server component is completely included in Duplicati.GUI.TrayIcon.exe, so launching this utility will result in starting the Duplicati Server component also, unless disabled with the `--no-hosted-server` command line option.
|
||||||
|
|
||||||
|
The following command line options can be specified:
|
||||||
|
|
||||||
|
* `--help`
|
||||||
|
Displays the help text.
|
||||||
|
* `--unencrypted-database`
|
||||||
|
Disables database encryption.
|
||||||
|
* `--portable-mode`
|
||||||
|
Activates portable mode where the database is placed below the program executable.
|
||||||
|
* `--log-file`
|
||||||
|
Outputs log information to the file given.
|
||||||
|
* `--log-level`
|
||||||
|
Determines the amount of information written in the log file.
|
||||||
|
* `--webservice-webroot`
|
||||||
|
The path to the folder where the static files for the webserver are located. The folder must be located beneath the installation folder.
|
||||||
|
* `--webservice-port`
|
||||||
|
The port the webserver listens on. Multiple values may be supplied with a comma in between.
|
||||||
|
* `--webservice-sslcertificatefile`
|
||||||
|
The certificate and key file in PKCS #12 format that the webserver uses for SSL. Only RSA/DSA keys are supported.
|
||||||
|
* `--webservice-sslcertificatepassword`
|
||||||
|
The password used to decrypt the PKCS #12 certificate file.
|
||||||
|
* `--webservice-interface`
|
||||||
|
The interface the webserver listens on. The special values `*` and `any` mean any interface. The special value `loopback` means the loopback adapter.
|
||||||
|
* `--webservice-password`
|
||||||
|
The password required to access the webserver. This option is saved so you do not need to set it on each run. Setting an empty value disables the password.
|
||||||
|
* `--ping-pong-keepalive`
|
||||||
|
When running as a server, the service daemon must verify that the process is responding. If this option is enabled, the server reads stdin and writes a reply to each line read.
|
||||||
|
* `--log-retention`
|
||||||
|
Set the time after which log data will be purged from the database.
|
||||||
|
* `--server-datafolder`
|
||||||
|
Duplicati needs to store a small database with all settings. Use this option to choose where the settings are stored. This option can also be set with the environment variable `DUPLICATI_HOME`.
|
||||||
|
* `--server-encryption-key`
|
||||||
|
This option sets the encryption key used to scramble the local settings database. This option can also be set with the environment variable `DUPLICATI_DB_KEY`. Use the option `--unencrypted-database` to disable the database scrambling.
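A sketch of starting the server with a fixed port, loopback-only access and a log file; the port and the log file path are examples, not requirements:

```nohighlight
Duplicati.Server.exe --webservice-port=8300 --webservice-interface=loopback --log-file="C:\ProgramData\Duplicati\Duplicati-server.log"
```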
|
||||||
|
|
||||||
|
|
||||||
|
## Duplicati.WindowsService.exe
|
||||||
|
|
||||||
|
This command line tool can be used to register Duplicati.Server.exe as a Windows service. Windows services are started in the background at boot time, regardless of whether a user is logged on or not.
|
||||||
|
|
||||||
|
The service is started with the SYSTEM account by default. The SYSTEM account has full access to the complete local filesystem. As a result, all files on the local system can be accessed by the Duplicati Web interface. Therefore it is strongly recommended to secure the web interface with a password when using Duplicati as a Windows service.
|
||||||
|
|
||||||
|
To register Duplicati.Server.exe as a Windows service, open an elevated command prompt (Run as Administrator). `Duplicati.WindowsService.exe` accepts the following commands:
|
||||||
|
|
||||||
|
* `install`
|
||||||
|
Installs the service.
|
||||||
|
* `uninstall`
|
||||||
|
Uninstalls the service.
|
||||||
|
|
||||||
|
Supported options for the install command:
|
||||||
|
* `/localuser`
|
||||||
|
Installs the service as a local user.
|
||||||
|
|
||||||
|
It is possible to pass arguments to `Duplicati.Server.exe`; simply add them to the command line:
|
||||||
|
`Duplicati.WindowsService.exe install --webservice-interface=loopback --log-retention=3M`
|
||||||
|
|
||||||
|
See [Duplicati.Server.exe](#_Duplicati.Server.exe) for more information about supported command line options.
|
||||||
|
|
||||||
|
## Duplicati.CommandLine.BackendTester.exe
|
||||||
|
|
||||||
|
Before you start using a particular backend as a backup target, you can use the Backend Tester to get an indication of the integrity of that backend. The Backend Tester will perform the following actions:
|
||||||
|
|
||||||
|
* Generate a number of files with a random size and random filenames.
|
||||||
|
* Upload these files to the backend.
|
||||||
|
* Download the uploaded files.
|
||||||
|
* Check the hash of each downloaded file to verify its integrity.
|
||||||
|
* Repeat this procedure a number of times.
|
||||||
|
* The results of all individual actions are reported back.
|
||||||
|
|
||||||
|
You can specify how many files will be generated, what size they should have, which characters are allowed for the file names and how many times the test procedure should be repeated.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
`Duplicati.CommandLine.BackendTester.exe <protocol>://<username>:<password>@<path>`
|
||||||
|
|
||||||
|
Supported protocols are:
|
||||||
|
`aftp`, `amzcd`, `azure`, `b2`, `box`, `cloudfiles`, `dropbox`, `file`, `ftp`, `googledrive`, `gcs`, `hubic`, `jottacloud`, `mega`, `onedrive`, `openstack`, `s3`, `od4b`, `mssp`, `ssh`, `tahoe`, `webdav`
|
||||||
|
|
||||||
|
Use one or more of the following command line options:
|
||||||
|
|
||||||
|
* `--reruns`
|
||||||
|
Value: Integer
|
||||||
|
The number of test runs to perform.
|
||||||
|
A number that describes how many times the test is performed.
|
||||||
|
Default value: `5`
|
||||||
|
* `--tempdir`
|
||||||
|
Value: Path
|
||||||
|
The path used to store temporary files.
|
||||||
|
The backend tester will use the system default temp path. You can set this option to choose another path.
|
||||||
|
* `--extended-chars`
|
||||||
|
Value: String
|
||||||
|
A list of allowed extended filename chars. A list of characters besides `{a-z, A-Z, 0-9}` to use when generating filenames.
|
||||||
|
Default value: `-_',=)(&%$#@! +`
|
||||||
|
* `--number-of-files`
|
||||||
|
Value: Integer
|
||||||
|
The number of files to test with.
|
||||||
|
An integer describing how many files to upload during a test run.
|
||||||
|
Default value: `10`
|
||||||
|
* `--min-file-size`
|
||||||
|
Value: Size
|
||||||
|
The minimum allowed file size.
|
||||||
|
File sizes are chosen at random; this value is the lower bound.
|
||||||
|
Default value: `1kb`
|
||||||
|
* `--max-file-size`
|
||||||
|
Value: Size
|
||||||
|
The maximum allowed file size.
|
||||||
|
File sizes are chosen at random; this value is the upper bound.
|
||||||
|
Default value: `50mb`
|
||||||
|
* `--min-filename-length`
|
||||||
|
Value: Integer
|
||||||
|
The minimum allowed filename length.
|
||||||
|
File name lengths are chosen at random; this value is the lower bound.
|
||||||
|
Default value: `5`
|
||||||
|
* `--max-filename-length`
|
||||||
|
Value: Integer
|
||||||
|
The maximum allowed filename length.
|
||||||
|
File name lengths are chosen at random; this value is the upper bound.
|
||||||
|
Default value: `80`
|
||||||
|
* `--auto-create-folder`
|
||||||
|
Value: Boolean
|
||||||
|
Allows automatic folder creation.
|
||||||
|
A value that indicates if missing folders are created automatically.
|
||||||
|
Default value: `false`
|
||||||
|
* `--skip-overwrite-test`
|
||||||
|
Value: Boolean
|
||||||
|
Bypasses the overwrite test.
|
||||||
|
A value that indicates if dummy files should be uploaded prior to uploading the real files.
|
||||||
|
Default value: `false`
|
||||||
|
* `--auto-clean`
|
||||||
|
Value: Boolean
|
||||||
|
Removes any files found in target folder.
|
||||||
|
A value that indicates if all files in the target folder should be deleted before starting the first test.
|
||||||
|
Default value: `false`
|
||||||
|
* `--force`
|
||||||
|
Value: Boolean
|
||||||
|
Activates file deletion.
|
||||||
|
A value that indicates if existing files should really be deleted when using auto-clean.
|
||||||
|
Default value: `false`
|
||||||
|
|
||||||
|
Example:
|
||||||
|
`Duplicati.CommandLine.BackendTester.exe ftp://user:pass@server/folder`
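If the default test is too heavy for a slow connection, the number and size of the generated files can be reduced, for example:

```nohighlight
Duplicati.CommandLine.BackendTester.exe "ftp://user:pass@server/folder" --number-of-files=5 --max-file-size=10mb --reruns=2
```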
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Duplicati.CommandLine.BackendTool.exe
|
||||||
|
|
||||||
|
A wide range of backends can be used by Duplicati, both using standard protocols like FTP or WebDAV and proprietary cloud storage services like Google Drive, Microsoft OneDrive or Dropbox. The requirements for these backends are very low. Basically, the only requirements are that Duplicati can perform the following operations to the backend:
|
||||||
|
|
||||||
|
* **PUT**
|
||||||
|
Of course Duplicati should be able to write files to the backend in order to store the data to be backed up.
|
||||||
|
* **GET**
|
||||||
|
For restore operations and verification of backups, Duplicati needs to be able to read (download) files from the backend.
|
||||||
|
* **LIST**
|
||||||
|
Duplicati needs to retrieve the contents of the backend by requesting a list of files.
|
||||||
|
* **DELETE**
|
||||||
|
To remove old or unneeded files and for reorganizing backend files, Duplicati should be able to delete files from the backend.
|
||||||
|
* **CREATEFOLDER**
|
||||||
|
When adding a new backup job, Duplicati can automatically create a new folder at the backend to store the backup files. Once the backup job is configured, the CREATEFOLDER operation is no longer needed.
|
||||||
|
|
||||||
|
There are no more requirements. Duplicati doesn't need to rename files or add data to existing files.
|
||||||
|
|
||||||
|
With the Backend Tool you can perform these 5 operations on the backend from the command line. This can be useful for testing purposes or disaster recovery (if one or more backup files are missing or corrupt).
|
||||||
|
|
||||||
|
Use this tool very carefully! Usually Duplicati will take care of checking, repairing and recovering from inconsistencies at the backend. Incorrect use of this tool may cause unrecoverable data loss. Only advanced users should use it.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
`Duplicati.CommandLine.BackendTool.exe <command> <protocol>://<username>:<password>@<path> [filename]`
|
||||||
|
|
||||||
|
The following commands are supported:
|
||||||
|
`GET`, `PUT`, `LIST`, `DELETE`, `CREATEFOLDER`
|
||||||
|
|
||||||
|
The supported protocols are:
|
||||||
|
`aftp`, `amzcd`, `azure`, `b2`, `box`, `cloudfiles`, `dropbox`, `file`, `ftp`, `googledrive`, `gcs`, `hubic`, `jottacloud`, `mega`, `onedrive`, `openstack`, `s3`, `od4b`, `mssp`, `ssh`, `tahoe`, `webdav`
|
||||||
|
|
||||||
|
Example:
|
||||||
|
`Duplicati.CommandLine.BackendTool.exe LIST ftp://user:pass@server/folder`
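A more complete sketch that downloads a single remote file, for instance a dlist volume needed during disaster recovery, could look like this; the file name is only an example:

```nohighlight
Duplicati.CommandLine.BackendTool.exe GET "ftp://user:pass@server/folder" duplicati-20171109T100815Z.dlist.zip.aes
```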
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Duplicati.Library.Snapshots.exe
|
||||||
|
|
||||||
|
Duplicati can create snapshots, which makes it possible to back up files that are in use. Windows provides the Volume Shadow Copy Service (VSS) to back up open files; Linux and OS X provide LVM for the same purpose. You can use Duplicati.Library.Snapshots.exe to test whether snapshots can be created successfully and whether they can be used to access open files.
|
||||||
|
|
||||||
|
To test this, the utility creates a small file and locks it. Then a snapshot is created and the utility attempts to read the locked file from it. If the open file can be read, open files can be backed up. Duplicati.Library.Snapshots.exe reports each of these steps, which may help when troubleshooting failing backups of open files.
|
||||||
|
|
||||||
|
The tool doesn't need any command line arguments, it just needs to be executed from a command line interface with administrative privileges.
|
||||||
|
|
||||||
|
## Duplicati.Library.AutoUpdater.exe
|
||||||
|
|
||||||
|
With this tool you can check if a new Duplicati version is available and install updates unattended. You can subscribe to one of the following update channels: Stable, Beta, Experimental, Canary, Nightly, Debug. The Stable update channel guarantees that you will only receive versions that were thoroughly tested; the Debug channel is for developers only.
|
||||||
|
|
||||||
|
How the Updater tool works can be customized with a few environment variables:
|
||||||
|
|
||||||
|
* `AUTOUPDATER_Duplicati_SKIP_UPDATE`
|
||||||
|
Disables updates completely.
|
||||||
|
* `AUTOUPDATER_Duplicati_POLICY`
|
||||||
|
Choose how to handle updates, valid settings: `CheckBefore`, `CheckDuring`, `CheckAfter`, `InstallBefore`, `InstallDuring`, `InstallAfter`, `Never`
|
||||||
|
* `AUTOUPDATER_Duplicati_URLS`
|
||||||
|
Use alternate updates urls.
|
||||||
|
* `AUTOUPDATER_Duplicati_CHANNEL`
|
||||||
|
Choose different channel than the default Experimental, valid settings: `Stable`, `Beta`, `Experimental`, `Canary`, `Nightly`, `Debug`
|
||||||
|
|
||||||
|
Updates are downloaded from: https://updates.duplicati.com/experimental/latest.manifest and
|
||||||
|
https://alt.updates.duplicati.com/experimental/latest.manifest
|
||||||
|
|
||||||
|
Updates are installed in `C:\ProgramData\Duplicati\updates`.
|
||||||
|
|
||||||
|
Usage:
|
||||||
|
`Duplicati.Library.AutoUpdater.exe [LIST|CHECK|INSTALL|HELP]`
|
||||||
|
|
||||||
|
The following commands are supported:
|
||||||
|
|
||||||
|
* `help`
|
||||||
|
Displays the help text.
|
||||||
|
* `list`
|
||||||
|
Show which updates are downloaded and available at your local system.
|
||||||
|
* `check`
|
||||||
|
Check online if new updates are available in the currently used update channel.
|
||||||
|
* `install`
|
||||||
|
Install the latest update from the currently selected update channel.
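For example, to check for and install the latest Beta release from a Windows command prompt, a sketch could look like this; on Linux or OS X, set the environment variable with `export` instead of `set`:

```nohighlight
set AUTOUPDATER_Duplicati_CHANNEL=Beta
Duplicati.Library.AutoUpdater.exe check
Duplicati.Library.AutoUpdater.exe install
```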
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Duplicati.CommandLine.RecoveryTool.exe
|
||||||
|
|
||||||
|
This tool can be used in very specific situations, where you have to restore data from a corrupted backup. The procedure for recovering from this scenario is covered in [Disaster Recovery](#_Disaster_Recovery).
|
||||||
|
|
||||||
|
Additionally, you can use the Recovery Tool to convert your backup files to another compression type. When creating a new backup job, you have to choose a compression type (default is Zip). After the first backup is made, the compression type cannot be changed. With the Duplicati Recovery Tool, you can download all files from the backend to your local filesystem, decrypt and uncompress them.
|
||||||
|
|
||||||
|
Then you can recompress these files using a different compression type, re-encrypt them and upload them back to the backend. To complete the procedure, change the backup configuration to use the new compression type for future backup operations.
|
||||||
|
|
||||||
|
Use the Duplicati Recovery Tool with the `recompress` command for this task:
|
||||||
|
`Duplicati.CommandLine.RecoveryTool.exe recompress <targetcompression> <remoteurl> <localfolder> --reupload --reencrypt [options]`
|
||||||
|
|
||||||
|
This command downloads all files to the local folder specified by `<localfolder>`. Then files are uncompressed and recompressed using the compression type specified with `<targetcompression>`. If `--reencrypt` is supplied, all files are re-encrypted using the same passphrase. If `--reupload` is supplied, files with the old compression type are deleted and recompressed files are uploaded back to remote storage.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  This is a radical operation to your backup files. Therefore, it is recommended to keep a copy of your remote files before this operation is performed. Using the Duplicati Recovery Tool incorrectly may result in unusable backup files.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  Warning: before running recompress, delete the local database; after recompress, recreate the local database before executing any other operation on the backup. This allows Duplicati to read the new file names from the remote storage.
|
||||||
|
|
||||||
|
*****
|
|
|||||||
|
|
||||||
|
## Definition of a disaster
|
||||||
|
|
||||||
|
This chapter describes how to recover from a disaster. Before we continue, we first have to define what a disaster actually is. Two categories can be distinguished:
|
||||||
|
|
||||||
|
* Loss or corruption of source files or the complete source system.
|
||||||
|
* Missing or corrupted backup files.
|
||||||
|
|
||||||
|
How to restore files to the original location of the same system and how to restore files from a consistent backup to a new computer is described in [Restoring files from a backup](#_Restoring_files_from) and [Restoring files if your Duplicati installation is lost](#_Restoring_files_if).
|
||||||
|
|
||||||
|
This chapter describes the process of restoring as much as possible from a backup that is inconsistent due to corrupted or missing files at the backend, without access to the source files and the Duplicati setup.
|
||||||
|
|
||||||
|
Usually you can install Duplicati on any computer and point it to the location that contains your backup to restore files. Duplicati will try to recover automatically from problems it finds, but if there is significant damage to your backup files, the restore process may fail, aborting the restore operation and leaving potentially restorable files unrecovered. In this situation you can use `Duplicati.CommandLine.RecoveryTool.exe` to restore the files that are not affected by the backup corruption. With this tool you can perform manually the operations that are normally done automatically by the standard tools.
|
||||||
|
|
||||||
|
## Test scenario
|
||||||
|
|
||||||
|
To explain the working of the `Duplicati.CommandLine.RecoveryTool.exe`, this setup is assumed:
|
||||||
|
|
||||||
|
The computer that contained the source files had 4 backup versions of the My Pictures folder. This computer, including the Duplicati installation and the picture files, is assumed to be lost.
|
||||||
|
|
||||||
|
The backup location is an FTP server. The default Upload Volume size of 50MB is decreased to 10MB, resulting in more, but smaller files, which makes more sense for this example. After 4 backup operations, the files at the backend look like this:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
10,453,901 duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes
|
||||||
|
8,173 duplicati-i84de11dd9a334727a080a3cdedc11f76.dindex.zip.aes
|
||||||
|
10,409,309 duplicati-bb1e603b91cae420787ed855d40e7cc04.dblock.zip.aes
|
||||||
|
9,677 duplicati-i77fdd0fa598d49fa93c5fedf3dbf4003.dindex.zip.aes
|
||||||
|
10,408,733 duplicati-b8fd38dcd303c4bcdb65dc15611f9b13b.dblock.zip.aes
|
||||||
|
8,317 duplicati-id6042b5ed9c34faa86e41fd3bcff72d2.dindex.zip.aes
|
||||||
|
10,465,421 duplicati-bb1c167fdb8ef46e6a83fe1d5b8b33cbf.dblock.zip.aes
|
||||||
|
6,765 duplicati-i1070213c1cea4844b3ace60c305854de.dindex.zip.aes
|
||||||
|
10,484,045 duplicati-b2b4fb88d1edd4eccade6b0ea6fdbfcf3.dblock.zip.aes
|
||||||
|
11,341 duplicati-i0c85219ca5764fb183b4306e65ed2034.dindex.zip.aes
|
||||||
|
10,472,541 duplicati-bc70688944c1b4875b7561e8046dd582d.dblock.zip.aes
|
||||||
|
8,301 duplicati-i726df1085487421b98bf9786e40d045f.dindex.zip.aes
|
||||||
|
10,384,701 duplicati-bf43697c750e746aead28ceb71af19359.dblock.zip.aes
|
||||||
|
9,101 duplicati-ie084e3c9380847a8a01acefcb8245fe3.dindex.zip.aes
|
||||||
|
10,392,317 duplicati-bf797fdce00794d0dbeb31de1f3867240.dblock.zip.aes
|
||||||
|
7,501 duplicati-i91b0156c9ced43bda26b6cef88f969b3.dindex.zip.aes
|
||||||
|
1,754,637 duplicati-b13a41763d40e4001911fd6f5d5d6c53d.dblock.zip.aes
|
||||||
|
3,709 duplicati-id79f5ccc6cb54f5faa7bbcf72c8e7428.dindex.zip.aes
|
||||||
|
5,133 duplicati-20171109T100606Z.dlist.zip.aes
|
||||||
|
10,415,597 duplicati-bb4cb32561132426eba2e190089585362.dblock.zip.aes
|
||||||
|
8,125 duplicati-ifd7c3d7bed47403197515b40821075fc.dindex.zip.aes
|
||||||
|
10,433,469 duplicati-b385c55aa15bd403e9fcb5a321339e76a.dblock.zip.aes
|
||||||
|
6,909 duplicati-i59e5c4064b3d422995200772bd267645.dindex.zip.aes
|
||||||
|
10,447,373 duplicati-b4e0bcd6b8c0b4d648a97e53c32550cce.dblock.zip.aes
|
||||||
|
8,829 duplicati-i21d3fcafc53b42bfa0dfe4cfdcc6a0d9.dindex.zip.aes
|
||||||
|
10,425,485 duplicati-b34811487cc1843e289ac577b6a7a8533.dblock.zip.aes
|
||||||
|
8,093 duplicati-i92dcf425b2d14c8780c2938f84e2bc2c.dindex.zip.aes
|
||||||
|
8,052,173 duplicati-bc385594379874159b0863dca12818ac7.dblock.zip.aes
|
||||||
|
8,621 duplicati-ib02c8a76143440b99ec94773a7b00c90.dindex.zip.aes
|
||||||
|
6,829 duplicati-20171109T100653Z.dlist.zip.aes
|
||||||
|
10,397,085 duplicati-b836b265755ff41ae908ef64e551ec63b.dblock.zip.aes
|
||||||
|
8,717 duplicati-i0903704f50ec4277a0cb6e224bde49cb.dindex.zip.aes
|
||||||
|
10,404,509 duplicati-bc230fc2ccec54a33b2035fcbc0231ce4.dblock.zip.aes
|
||||||
|
7,741 duplicati-i53b454d535de49e1983364b97bd681a1.dindex.zip.aes
|
||||||
|
10,463,597 duplicati-b54b864868bf341bcb88bff9ad786b8a3.dblock.zip.aes
|
||||||
|
4,365 duplicati-id1e3d59e5e3e48f59a6fbd7f46b4bb8c.dindex.zip.aes
|
||||||
|
10,413,933 duplicati-b2b06934c5d764025a6965f1749d86a90.dblock.zip.aes
|
||||||
|
11,421 duplicati-ifd2b08eaf75b43748739c5a140cb1267.dindex.zip.aes
|
||||||
|
10,468,957 duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes
|
||||||
|
6,749 duplicati-ib9a135574fa84222beea313ac583463b.dindex.zip.aes
|
||||||
|
10,383,805 duplicati-bfa5ecd54953f455496a67b089e2ad35b.dblock.zip.aes
|
||||||
|
10,301 duplicati-i19aa26f9c15e432ea5114c93acd52661.dindex.zip.aes
|
||||||
|
4,505,165 duplicati-bd14cf57e975040808ac8d0f4bd9d5e36.dblock.zip.aes
|
||||||
|
4,797 duplicati-id08a17cbe0a04407b4be37d2c48e8ab9.dindex.zip.aes
|
||||||
|
8,749 duplicati-20171109T100737Z.dlist.zip.aes
|
||||||
|
10,483,357 duplicati-be6e935d55f0b443b8c716c83aebccb93.dblock.zip.aes
|
||||||
|
9,885 duplicati-ic209185103c045bb87a680e98b78269b.dindex.zip.aes
|
||||||
|
974,909 duplicati-b7fa18a6f863a42fea5f210cc5f1416e5.dblock.zip.aes
|
||||||
|
2,125 duplicati-i57ce5109be6c4441998fc2bfb2cd0f3a.dindex.zip.aes
|
||||||
|
9,901 duplicati-20171109T100815Z.dlist.zip.aes
|
||||||
|
```
|
||||||
|
|
||||||
|
There is one .dlist file for each backup version. The data itself is stored in a number of .dblock files. Each .dblock file has an accompanying .dindex file. This is a consistent backup, but in this test scenario some files are intentionally corrupted by replacing their contents with random data and by removing a .dblock file.
|
||||||
|
|
||||||
|
## Inventory of files that are going to be corrupted
|
||||||
|
|
||||||
|
Prior to corrupting the consistent backup, we can inventory what the consequences are if these files get lost. You can use the Duplicati command `affected` to see which files are affected by a remote file. The `affected` command needs the local database, so you can perform this operation only if you have a fully working Duplicati installation for this backup job. See [The AFFECTED command](#_The_AFFECTED_command) for more information.
|
||||||
|
|
||||||
|
The first command returns which source files need information from the remote file `duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes`.
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.exe affected "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes --dbpath="C:\Users\User\DuplicatiCanary\data\WCHNJBICGG.sqlite" --full-result
|
||||||
|
```
|
||||||
|
|
||||||
|
This command will return that all 4 backup versions need data from this file. These files are affected:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
C:\Users\User\Pictures\file0001079221497.jpg
|
||||||
|
C:\Users\User\Pictures\file0001116000079.jpg
|
||||||
|
C:\Users\User\Pictures\file0001141038889.jpg
|
||||||
|
C:\Users\User\Pictures\file0001176452626.jpg
|
||||||
|
```
|
||||||
|
|
||||||
|
The second command returns affected backup versions and source files for remote file `duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes`:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.exe affected "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes --dbpath="C:\Users\User\DuplicatiCanary\data\WCHNJBICGG.sqlite" --full-result
|
||||||
|
```
|
||||||
|
|
||||||
|
Only the last 2 backup versions are affected (versions 0 and 1). These 2 files cannot be restored if this remote file is missing or corrupted:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
C:\Users\User\Pictures\file451264266022.jpg
|
||||||
|
C:\Users\User\Pictures\file621250696198.jpg
|
||||||
|
```
|
||||||
|
|
||||||
|
**Conclusion:** if the 2 remote files mentioned above are not available, the 6 picture files should be considered lost, but with the Duplicati Recovery Tool all other files in the backup should be recoverable.
|
||||||
|
|
||||||
|
## Making the backup inconsistent
|
||||||
|
|
||||||
|
This is, of course, something that **never** should be done in a production environment, but for this test scenario we will intentionally damage the backup set, making it unusable for standard backup- and restore operations.
|
||||||
|
|
||||||
|
The following actions are performed on the backend:
|
||||||
|
|
||||||
|
* File `duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes` is deleted.
|
||||||
|
* File `duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes` is replaced by a file with the same name containing random data.
|
||||||
|
|
||||||
|
Restoring files from this corrupted backup set will fail before the first file is actually restored. You can recover from this situation by using one of these procedures:
|
||||||
|
|
||||||
|
* Recovering by purging unrestorable files from the backups.
|
||||||
|
* Recovering by using the Duplicati Recovery Tool.
|
||||||
|
|
||||||
|
## Prerequisites for recovery
|
||||||
|
|
||||||
|
To be able to restore files in these scenarios, you will need:
|
||||||
|
|
||||||
|
* The protocol, location and credentials of the remote location where your backup files are stored.
|
||||||
|
* The passphrase used to encrypt your backup (if any).
|
||||||
|
* A computer that you can use for restoring data, with enough free storage capacity for all files you want to restore.
|
||||||
|
* The Duplicati Command Line tools. These tools are part of a standard Duplicati setup.
|
||||||
|
* If you are using the Duplicati Recovery Tool: temporary local storage with enough free space to store all backup files.
|
||||||
|
|
||||||
|
## Recovering by purging files
|
||||||
|
|
||||||
|
If you still have access to your computer running Duplicati and the backup job has a valid local database, Duplicati can analyze the files that should be in the backup and compare this with what's actually at the remote location. Use the Duplicati command `list-broken-files` to list files that cannot be restored due to corruption or missing data. The command `purge-broken-files` actually deletes these files from all backup versions.
|
||||||
|
|
||||||
|
To get an impression of the damage to the backup set, run this command:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.exe list-broken-files "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" --dbpath="C:\Users\User\DuplicatiCanary\data\WCHNJBICGG.sqlite" --passphrase="4u7P_re5&+Gb>6NO{" --full-result
|
||||||
|
```
|
||||||
|
|
||||||
|
The result is:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
No broken filesets found in database, checking for missing remote files
|
||||||
|
Listing remote folder ...
|
||||||
|
remote file duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes is listed as Verified with size 9569280 but should be 10468957, please verify the sha256 hash "hIPABrSE/6xN041ut6IKb0sUSMxYGRI3ZqAWwY+q6JM="
|
||||||
|
Marked 1 remote files for deletion
|
||||||
|
3 : 11/9/2017 11:06:06 AM (5 match(es))
|
||||||
|
C:\Users\User\Pictures\file0001079221497.jpg (4.89 MB)
|
||||||
|
C:\Users\User\Pictures\file0001116000079.jpg (302.49 KB)
|
||||||
|
C:\Users\User\Pictures\file0001141038889.jpg (3.50 MB)
|
||||||
|
C:\Users\User\Pictures\file0001176452626.jpg (1.65 MB)
|
||||||
|
2 : 11/9/2017 11:06:53 AM (4 match(es))
|
||||||
|
C:\Users\User\Pictures\file0001079221497.jpg (4.89 MB)
|
||||||
|
C:\Users\User\Pictures\file0001116000079.jpg (302.49 KB)
|
||||||
|
C:\Users\User\Pictures\file0001141038889.jpg (3.50 MB)
|
||||||
|
C:\Users\User\Pictures\file0001176452626.jpg (1.65 MB)
|
||||||
|
1 : 11/9/2017 11:07:37 AM (4 match(es))
|
||||||
|
C:\Users\User\Pictures\file0001079221497.jpg (4.89 MB)
|
||||||
|
C:\Users\User\Pictures\file0001116000079.jpg (302.49 KB)
|
||||||
|
C:\Users\User\Pictures\file0001141038889.jpg (3.50 MB)
|
||||||
|
C:\Users\User\Pictures\file0001176452626.jpg (1.65 MB)
|
||||||
|
0 : 11/9/2017 11:08:15 AM (4 match(es))
|
||||||
|
C:\Users\User\Pictures\file0001079221497.jpg (4.89 MB)
|
||||||
|
C:\Users\User\Pictures\file0001116000079.jpg (302.49 KB)
|
||||||
|
C:\Users\User\Pictures\file0001141038889.jpg (3.50 MB)
|
||||||
|
C:\Users\User\Pictures\file0001176452626.jpg (1.65 MB)
|
||||||
|
```
|
||||||
|
|
||||||
|
|
||||||
|
The 4 files from the deleted .dblock file are detected to be broken. Also detected is a change to the .dblock file that contains the other 2 files.
|
||||||
|
|
||||||
|
First we solve the problem with the deleted remote .dblock file by using the `purge-broken-files` command.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  Add advanced option `--dry-run` to the command below to see what the command will do, before actually purging the files from the backups.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.exe purge-broken-files "ftp://myftpserver.com/Backup/Pictures?auth-username=Duplicati&auth-password=backup" --dbpath="C:\Users\User\DuplicatiCanary\data\WCHNJBICGG.sqlite" --passphrase="4u7P_re5&+Gb>6NO{" --full-result
|
||||||
|
```
|
||||||
|
|
||||||
|
The purge-broken-files command returns this information:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
No broken filesets found in database, checking for missing remote files
|
||||||
|
Listing remote folder ...
|
||||||
|
remote file duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes is listed as Verified with size 9569280 but should be 10468957, please verify the sha256 hash "hIPABrSE/6xN041ut6IKb0sUSMxYGRI3ZqAWwY+q6JM="
|
||||||
|
Marked 1 remote files for deletion
|
||||||
|
Found 4 broken filesets with 17 affected files, purging files
|
||||||
|
Purging 5 file(s) from fileset 11/9/2017 11:06:06 AM
|
||||||
|
Starting purge operation
|
||||||
|
Replacing fileset duplicati-20171109T100606Z.dlist.zip.aes with duplicati-20171109T100607Z.dlist.zip.aes which has with 5 fewer file(s) (10.33 MB reduction)
|
||||||
|
Uploading file (4.47 KB) ...
|
||||||
|
Deleting file duplicati-20171109T100606Z.dlist.zip.aes ...
|
||||||
|
Purging 4 file(s) from fileset 11/9/2017 11:06:53 AM
|
||||||
|
Starting purge operation
|
||||||
|
Replacing fileset duplicati-20171109T100653Z.dlist.zip.aes with duplicati-20171109T100654Z.dlist.zip.aes which has with 4 fewer file(s) (10.33 MB reduction)
|
||||||
|
Uploading file (6.22 KB) ...
|
||||||
|
Deleting file duplicati-20171109T100653Z.dlist.zip.aes ...
|
||||||
|
Purging 4 file(s) from fileset 11/9/2017 11:07:37 AM
|
||||||
|
Starting purge operation
|
||||||
|
Replacing fileset duplicati-20171109T100737Z.dlist.zip.aes with duplicati-20171109T100738Z.dlist.zip.aes which has with 4 fewer file(s) (10.33 MB reduction)
|
||||||
|
Uploading file (8.09 KB) ...
|
||||||
|
Deleting file duplicati-20171109T100737Z.dlist.zip.aes ...
|
||||||
|
Purging 4 file(s) from fileset 11/9/2017 11:08:15 AM
|
||||||
|
Starting purge operation
|
||||||
|
Replacing fileset duplicati-20171109T100815Z.dlist.zip.aes with duplicati-20171109T100816Z.dlist.zip.aes which has with 4 fewer file(s) (10.33 MB reduction)
|
||||||
|
Uploading file (9.22 KB) ...
|
||||||
|
Deleting file duplicati-20171109T100815Z.dlist.zip.aes ...
|
||||||
|
Deleting file duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes (9.97 MB) ...
|
||||||
|
Operation Delete with file duplicati-b69a2a32a50bb4c6d8780389efdbf7442.dblock.zip.aes attempt 1 of 5 failed with message: The remote server returned an error: (550) File unavailable (e.g., file not found, no access). => The remote server returned an error: (550) File unavailable (e.g., file not found, no access).
|
||||||
|
```
|
||||||
|
|
||||||
|
Some information from the messages above:
|
||||||
|
|
||||||
|
* `duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes` is corrupted and marked for deletion.
|
||||||
|
* Files that cannot be restored are deleted from all backup versions that contain these files (17 total). Note that these are not 17 unique source files; one file is usually included in multiple backup versions.
|
||||||
|
* New, consistent backup files are generated and uploaded to the backend.
|
||||||
|
|
||||||
|
## Recovering by using the Duplicati Recovery tool
|
||||||
|
|
||||||
|
The Duplicati Recovery Tool allows you to perform manually the actions that are normally done automatically when running backup or restore operations. A normal restore consists of the following operations:
|
||||||
|
|
||||||
|
* Duplicati determines which remote files are needed to restore the specified files.
|
||||||
|
* Duplicati downloads the first required remote file.
|
||||||
|
* The file is decrypted using the supplied passphrase.
|
||||||
|
* Duplicati uses the .DINDEX files to determine how files can be recreated by merging blocks inside .DBLOCK files in the correct order.
|
||||||
|
* The recreated files are moved to the supplied Restore location.
|
||||||
|
|
||||||
|
The Duplicati Recovery Tool can perform these actions step by step, giving you more control over each step in the restore process.
|
||||||
|
|
||||||
|
In disaster recovery scenarios, the Duplicati Recovery Tool performs 3 steps:
|
||||||
|
|
||||||
|
* All remote files are downloaded from the backend, decrypted and stored in the local filesystem.
|
||||||
|
* An index is built that allows Duplicati to keep track of what information is stored in which file.
|
||||||
|
* Files are restored from the downloaded backend files by recreating them using the blocks inside the .DBLOCK files.
|
||||||
|
|
||||||
|
Optionally, these additional actions can be performed:
|
||||||
|
|
||||||
|
* List files that are available in the downloaded remote files.
|
||||||
|
* Recompress and/or re-upload files to the backend. This is useful if you want to change the compression type of an existing backup job. Changing the compression type directly (e.g. .7z to .zip) is not supported, but you can achieve it by downloading the complete backup, decrypting all files, extracting their contents, recompressing the files using another compression type, re-encrypting the files and re-uploading them to the backend. Additionally, edit the backup configuration to use the new compression type for future backups.
|
||||||
|
|
||||||
|
## Downloading all remote files using the Recovery Tool
|
||||||
|
|
||||||
|
The first step is downloading all files that are used by the backup job. This step is required, because a lot of read/write operations have to be performed on the remote files. All files must be decrypted, and the contents of all files must be read and analyzed.
|
||||||
|
|
||||||
|
Remote files can be downloaded using the `download` command:
|
||||||
|
|
||||||
|
`Duplicati.CommandLine.RecoveryTool.exe download <remoteurl> <localfolder> [options]`
|
||||||
|
|
||||||
|
This command will download all remote files from `<remoteurl>`, decrypt the files and store the decrypted files in `<localfolder>`.
|
||||||
|
|
||||||
|
Required information:
|
||||||
|
|
||||||
|
* **Storage type**
|
||||||
|
In this example the backup is stored using FTP, but all storage types are supported. See [Storage Providers](#_Storage_Providers) for more information.
|
||||||
|
* **Address, path and credentials to access the remote files**
|
||||||
|
In this example the address is `myftpserver.com`, the path is `/Backup/Pictures`, the FTP username is `duplicati` and the FTP password is `backup`.
|
||||||
|
* **The passphrase used to encrypt the backup**
|
||||||
|
In this example the passphrase `4u7P_re5&+Gb>6NO{` was used for the backup.
|
||||||
|
* **Optional advanced options for access to the remote files**
|
||||||
|
If you applied any options that are needed to get access to the backend files, supply these options here. See [Storage Providers](#_Storage_Providers) for more information.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  Store information about your backup configuration (storage provider, storage location, credentials and passphrase) in a safe location that is also available when your computer running Duplicati is lost. Without this information, your backup files are useless, because the passphrase is the only way to decrypt the files in your backup.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
Create an empty folder in your local filesystem, for example `C:\BackendFiles`. Be sure that the location you download the backup files to has enough free space to store **all** backup files.
|
||||||
|
|
||||||
|
*****
|
||||||
|
>  If you are unsure about the required free space, check how much space is used by all files with a filename that starts with duplicati- (or any prefix you specified in the backup job with the --prefix option). If still unsure, use an empty external disk with enough capacity. You will have to start the complete download process over if free space runs out while downloading files.
|
||||||
|
|
||||||
|
*****
|
||||||
|
|
||||||
|
This command downloads and decrypts all backup files and stores these files in `C:\BackendFiles`:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Duplicati.CommandLine.RecoveryTool.exe download "ftp://myftpserver.com/Backup/Pictures?auth-username=duplicati&auth-password=backup" C:\BackendFiles --passphrase="4u7P_re5&+Gb>6NO{"
|
||||||
|
```
|
||||||
|
|
||||||
|
The output is something like this:
|
||||||
|
|
||||||
|
```nohighlight
|
||||||
|
Listing files on backend: ftp ...
|
||||||
|
Found 49 files
|
||||||
|
0: duplicati-20171109T100606Z.dlist.zip.aes - downloading (5.01 KB)... - decrypting ... done!
|
||||||
|
1: duplicati-20171109T100653Z.dlist.zip.aes - downloading (6.67 KB)... - decrypting ... done!
|
||||||
|
2: duplicati-20171109T100737Z.dlist.zip.aes - downloading (8.54 KB)... - decrypting ... done!
|
||||||
|
3: duplicati-20171109T100815Z.dlist.zip.aes - downloading (9.67 KB)... - decrypting ... done!
|
||||||
|
4: duplicati-b13a41763d40e4001911fd6f5d5d6c53d.dblock.zip.aes - downloading (1.67 MB)... - decrypting ... done!
|
||||||
|
5: duplicati-b2b06934c5d764025a6965f1749d86a90.dblock.zip.aes - downloading (9.93 MB)... - decrypting ... done!
|
||||||
|
6: duplicati-b2b4fb88d1edd4eccade6b0ea6fdbfcf3.dblock.zip.aes - downloading (10.00 MB)... - decrypting ... done!
|
||||||
|
7: duplicati-b34811487cc1843e289ac577b6a7a8533.dblock.zip.aes - downloading (9.94 MB)... - decrypting ... done!
|
||||||
|
8: duplicati-b385c55aa15bd403e9fcb5a321339e76a.dblock.zip.aes - downloading (9.95 MB)... - decrypting ... done!
|
||||||
|
9: duplicati-b4e0bcd6b8c0b4d648a97e53c32550cce.dblock.zip.aes - downloading (9.96 MB)... - decrypting ... done!
|
||||||
|
10: duplicati-b54b864868bf341bcb88bff9ad786b8a3.dblock.zip.aes - downloading (9.98 MB)... - decrypting ... done!
|
||||||
|
11: duplicati-b5f8cd40e22a54b5b988689370b8cde34.dblock.zip.aes - downloading (9.13 MB)... - decrypting ... error:
|
||||||
|
System.IO.InvalidDataException: Invalid header marker
|
||||||
|
at SharpAESCrypt.SharpAESCrypt.ReadEncryptionHeader(String password, Boolean skipFileSizeCheck)
|
||||||
|
at SharpAESCrypt.SharpAESCrypt..ctor(String password, Stream stream, OperationMode mode, Boolean skipFileSizeCheck)
|
||||||
|
at Duplicati.Library.Encryption.AESEncryption.Decrypt(Stream input)
|
||||||
|
at Duplicati.Library.Encryption.EncryptionBase.Decrypt(Stream input, Stream output)
|
||||||
|
at Duplicati.Library.Encryption.EncryptionBase.Decrypt(String inputfile, String outputfile)
|
||||||
|
at Duplicati.CommandLine.RecoveryTool.Download.Run(List`1 args, Dictionary`2 options, IFilter filter)
|
||||||
|
12: duplicati-b7fa18a6f863a42fea5f210cc5f1416e5.dblock.zip.aes - downloading (952.06 KB)... - decrypting ... done!
|
||||||
|
13: duplicati-b836b265755ff41ae908ef64e551ec63b.dblock.zip.aes - downloading (9.92 MB)... - decrypting ... done!
|
||||||
|
14: duplicati-b8fd38dcd303c4bcdb65dc15611f9b13b.dblock.zip.aes - downloading (9.93 MB)... - decrypting ... done!
|
||||||
|
15: duplicati-bb1c167fdb8ef46e6a83fe1d5b8b33cbf.dblock.zip.aes - downloading (9.98 MB)... - decrypting ... done!
|
||||||
|
16: duplicati-bb1e603b91cae420787ed855d40e7cc04.dblock.zip.aes - downloading (9.93 MB)... - decrypting ... done!
|
||||||
|
17: duplicati-bb4cb32561132426eba2e190089585362.dblock.zip.aes - downloading (9.93 MB)... - decrypting ... done!
|
||||||
|
18: duplicati-bc230fc2ccec54a33b2035fcbc0231ce4.dblock.zip.aes - downloading (9.92 MB)... - decrypting ... done!
|
||||||
|
19: duplicati-bc385594379874159b0863dca12818ac7.dblock.zip.aes - downloading (7.68 MB)... - decrypting ... done!
|
||||||
|
20: duplicati-bc70688944c1b4875b7561e8046dd582d.dblock.zip.aes - downloading (9.99 MB)... - decrypting ... done!
|
||||||
|
21: duplicati-bd14cf57e975040808ac8d0f4bd9d5e36.dblock.zip.aes - downloading (4.30 MB)... - decrypting ... done!
|
||||||
|
22: duplicati-be6e935d55f0b443b8c716c83aebccb93.dblock.zip.aes - downloading (10.00 MB)... - decrypting ... done!
|
||||||
|
23: duplicati-bf43697c750e746aead28ceb71af19359.dblock.zip.aes - downloading (9.90 MB)... - decrypting ... done!
|
||||||
|
24: duplicati-bf797fdce00794d0dbeb31de1f3867240.dblock.zip.aes - downloading (9.91 MB)... - decrypting ... done!
|
||||||
|
25: duplicati-bfa5ecd54953f455496a67b089e2ad35b.dblock.zip.aes - downloading (9.90 MB)... - decrypting ... done!
|
||||||
|
26: duplicati-i0903704f50ec4277a0cb6e224bde49cb.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i0c85219ca5764fb183b4306e65ed2034.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i1070213c1cea4844b3ace60c305854de.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i19aa26f9c15e432ea5114c93acd52661.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i21d3fcafc53b42bfa0dfe4cfdcc6a0d9.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i53b454d535de49e1983364b97bd681a1.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i57ce5109be6c4441998fc2bfb2cd0f3a.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i59e5c4064b3d422995200772bd267645.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i726df1085487421b98bf9786e40d045f.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i77fdd0fa598d49fa93c5fedf3dbf4003.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i84de11dd9a334727a080a3cdedc11f76.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i91b0156c9ced43bda26b6cef88f969b3.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-i92dcf425b2d14c8780c2938f84e2bc2c.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ib02c8a76143440b99ec94773a7b00c90.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ib9a135574fa84222beea313ac583463b.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ic209185103c045bb87a680e98b78269b.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-id08a17cbe0a04407b4be37d2c48e8ab9.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-id1e3d59e5e3e48f59a6fbd7f46b4bb8c.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-id6042b5ed9c34faa86e41fd3bcff72d2.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-id79f5ccc6cb54f5faa7bbcf72c8e7428.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ie084e3c9380847a8a01acefcb8245fe3.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ifd2b08eaf75b43748739c5a140cb1267.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
26: duplicati-ifd7c3d7bed47403197515b40821075fc.dindex.zip.aes - Filetype Index, skipping
|
||||||
|
Download complete, of 49 remote files, 0 were downloaded with 1 errors
|
||||||
|
|
||||||
|
```
In this example, 49 files were found at the backend. Of all .DBLOCK files, one was corrupt and could not be decrypted. Files with the .DINDEX extension are index files that will be recreated, so they are not downloaded. 4 .DLIST files were found and downloaded to `C:\BackendFiles`.

As a result, the `C:\BackendFiles` folder contains 25 unencrypted .zip files: 4 `.dlist.zip` files and 21 `.dblock.zip` files.

## Creating an index of downloaded files using the Recovery Tool

When all files that contain applicable information are downloaded, an index file must be created. Without this index, we have nothing more than a bunch of files containing hashes and raw data. The index can be created with the Duplicati Recovery Tool using the `index` command:

`Duplicati.CommandLine.RecoveryTool.exe index <localfolder> [options]`

This command only requires the location of the local folder to be specified, in this example `C:\BackendFiles`. The index file will be created in the same folder. If you want the index file to be created in another folder, use the advanced option `--indexfile` to specify the location. The temporary files folder is used intensively by this process; optionally, you can specify a custom location for it with the `--tempdir` option.

To build an index of the files in `C:\BackendFiles`, use this command:

`Duplicati.CommandLine.RecoveryTool.exe index "C:\BackendFiles"`

The output is similar to this:

```nohighlight
|
||||||
|
Processing 26 files
|
||||||
|
0: C:\BackendFiles\duplicati-20171109T100606Z.dlist.zip - Filetype Files, skipping
|
||||||
|
0: C:\BackendFiles\duplicati-20171109T100653Z.dlist.zip - Filetype Files, skipping
|
||||||
|
0: C:\BackendFiles\duplicati-20171109T100737Z.dlist.zip - Filetype Files, skipping
|
||||||
|
0: C:\BackendFiles\duplicati-20171109T100815Z.dlist.zip - Filetype Files, skipping
|
||||||
|
0: C:\BackendFiles\duplicati-b13a41763d40e4001911fd6f5d5d6c53d.dblock.zip 21 hashes found, sorting ... done!
|
||||||
|
Merging 21 hashes ... done!
|
||||||
|
1: C:\BackendFiles\duplicati-b2b06934c5d764025a6965f1749d86a90.dblock.zip 111 hashes found, sorting ... done!
|
||||||
|
Merging 132 hashes ... done!
|
||||||
|
2: C:\BackendFiles\duplicati-b2b4fb88d1edd4eccade6b0ea6fdbfcf3.dblock.zip 116 hashes found, sorting ... done!
|
||||||
|
Merging 248 hashes ... done!
|
||||||
|
3: C:\BackendFiles\duplicati-b34811487cc1843e289ac577b6a7a8533.dblock.zip 113 hashes found, sorting ... done!
|
||||||
|
Merging 361 hashes ... done!
|
||||||
|
4: C:\BackendFiles\duplicati-b385c55aa15bd403e9fcb5a321339e76a.dblock.zip 109 hashes found, sorting ... done!
|
||||||
|
Merging 470 hashes ... done!
|
||||||
|
5: C:\BackendFiles\duplicati-b4e0bcd6b8c0b4d648a97e53c32550cce.dblock.zip 108 hashes found, sorting ... done!
|
||||||
|
Merging 578 hashes ... done!
|
||||||
|
6: C:\BackendFiles\duplicati-b54b864868bf341bcb88bff9ad786b8a3.dblock.zip 103 hashes found, sorting ... done!
|
||||||
|
Merging 681 hashes ... done!
|
||||||
|
7: C:\BackendFiles\duplicati-b7fa18a6f863a42fea5f210cc5f1416e5.dblock.zip 15 hashes found, sorting ... done!
|
||||||
|
Merging 696 hashes ... done!
|
||||||
|
8: C:\BackendFiles\duplicati-b836b265755ff41ae908ef64e551ec63b.dblock.zip 114 hashes found, sorting ... done!
|
||||||
|
Merging 810 hashes ... done!
|
||||||
|
9: C:\BackendFiles\duplicati-b8fd38dcd303c4bcdb65dc15611f9b13b.dblock.zip 109 hashes found, sorting ... done!
|
||||||
|
Merging 919 hashes ... done!
|
||||||
|
10: C:\BackendFiles\duplicati-bb1c167fdb8ef46e6a83fe1d5b8b33cbf.dblock.zip 111 hashes found, sorting ... done!
|
||||||
|
Merging 1030 hashes ... done!
|
||||||
|
11: C:\BackendFiles\duplicati-bb1e603b91cae420787ed855d40e7cc04.dblock.zip 120 hashes found, sorting ... done!
|
||||||
|
Merging 1150 hashes ... done!
|
||||||
|
12: C:\BackendFiles\duplicati-bb4cb32561132426eba2e190089585362.dblock.zip 108 hashes found, sorting ... done!
|
||||||
|
Merging 1258 hashes ... done!
|
||||||
|
13: C:\BackendFiles\duplicati-bc230fc2ccec54a33b2035fcbc0231ce4.dblock.zip 108 hashes found, sorting ... done!
|
||||||
|
Merging 1366 hashes ... done!
|
||||||
|
14: C:\BackendFiles\duplicati-bc385594379874159b0863dca12818ac7.dblock.zip 86 hashes found, sorting ... done!
|
||||||
|
Merging 1452 hashes ... done!
|
||||||
|
15: C:\BackendFiles\duplicati-bc70688944c1b4875b7561e8046dd582d.dblock.zip 114 hashes found, sorting ... done!
|
||||||
|
Merging 1566 hashes ... done!
|
||||||
|
16: C:\BackendFiles\duplicati-bd14cf57e975040808ac8d0f4bd9d5e36.dblock.zip 50 hashes found, sorting ... done!
|
||||||
|
Merging 1616 hashes ... done!
|
||||||
|
17: C:\BackendFiles\duplicati-be6e935d55f0b443b8c716c83aebccb93.dblock.zip 122 hashes found, sorting ... done!
|
||||||
|
Merging 1738 hashes ... done!
|
||||||
|
18: C:\BackendFiles\duplicati-bf43697c750e746aead28ceb71af19359.dblock.zip 112 hashes found, sorting ... done!
|
||||||
|
Merging 1850 hashes ... done!
|
||||||
|
19: C:\BackendFiles\duplicati-bf797fdce00794d0dbeb31de1f3867240.dblock.zip 110 hashes found, sorting ... done!
|
||||||
|
Merging 1960 hashes ... done!
|
||||||
|
20: C:\BackendFiles\duplicati-bfa5ecd54953f455496a67b089e2ad35b.dblock.zip 107 hashes found, sorting ... done!
|
||||||
|
Merging 2067 hashes ... done!
|
||||||
|
21: C:\BackendFiles\index.txt - Not a Duplicati file, ignoring
|
||||||
|
Processed 21 files and found 2067 hashes
|
||||||
|
```
The resulting index file `index.txt` contains a list of hashes and `.DBLOCK` filenames.
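The exact layout of `index.txt` is an implementation detail of the Recovery Tool and may differ between versions, but conceptually each line pairs a block hash with the `.dblock` file in which it was found, similar to this illustrative snippet (the pairing shown here is made up for illustration):

```nohighlight
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY=, C:\BackendFiles\duplicati-b13a41763d40e4001911fd6f5d5d6c53d.dblock.zip
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ=, C:\BackendFiles\duplicati-b2b06934c5d764025a6965f1749d86a90.dblock.zip
```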
## List backup versions and files using the Recovery Tool

Before the actual restore operation is performed, you can see what is inside the downloaded and decrypted remote files. Use the Recovery Tool's `list` command to retrieve this information:

`Duplicati.CommandLine.RecoveryTool.exe list <localfolder> [version] [options]`

Without a version specified, all available backup versions are listed. When a version number is supplied, all restorable files from that backup version are listed. Try these commands:

`Duplicati.CommandLine.RecoveryTool.exe list C:\BackendFiles`

`Duplicati.CommandLine.RecoveryTool.exe list C:\BackendFiles 0`

## Restoring files using the Recovery Tool

After all backup files are downloaded, decrypted and indexed, you can start the actual restore process. With the Duplicati Recovery Tool, use the `restore` command to restore all files that can be recovered from any backup version to a location of your choice:

`Duplicati.CommandLine.RecoveryTool.exe restore <localfolder> [version] [options]`

`<localfolder>` is required and should point to the location where your downloaded remote files are stored. Optionally add `--targetpath` to specify where files must be restored to; otherwise the files are restored to their original locations. Use filters or the `--exclude` option to perform a partial restore. See [exclude](#_exclude) and [APPENDIX E Filters](#_APPENDIX_E_Filters) for more information.
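For example, a partial restore that skips all video files could look like this (the filter value is illustrative; see the filters appendix for the full syntax):

```nohighlight
Duplicati.CommandLine.RecoveryTool.exe restore C:\BackendFiles 0 --targetpath="C:\Restore" --exclude="*.mp4"
```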
In this example, files are restored to `C:\Restore`, so an empty folder `C:\Restore` is created first.

This command will restore all files from the latest backup version (0) to `C:\Restore`:

`Duplicati.CommandLine.RecoveryTool.exe restore C:\BackendFiles 0 --targetpath="C:\Restore"`

The output starts with something similar to this:

```nohighlight
Sorting index file ... done!
Building lookup table for file hashes
Index file has 2047 hashes in total
Building lookup table with 2046 entries, giving increments of 1
Computing restore path
Restoring 75 files to C:\Restore
Removing common prefix C:\Users\User\ from files
```
All restored files are listed. The list probably contains errors, because files that need data from corrupted blocks cannot be restored.

In this example, from a corrupted backup with one deleted dblock file and one corrupted dblock file, 69 of 75 picture files were recovered successfully.



# How the backup process works
### Introduction

Duplicati is an open source backup application that has no server-side components and can therefore support a wide variety of cloud-based storage providers. This also means that Duplicati has to handle large latencies and disconnects, and that it can only add and delete files, but not modify existing files. Duplicati copes with this by using a storage format that merges small files and splits large files, and that supports features like encryption, compression and de-duplication, versioning and incremental backups. In this article, we walk through the process of backing up a few files to a remote storage, to illustrate how it basically works.

### The source data

For this article, we will assume you want to make a backup of a small folder on a Windows machine. The content of that folder is:

```nohighlight
C:\data
|----> mydoc.txt, 4kb
|----> myvideo.mp4, 210kb
|----> extra
|-----> olddoc.txt, 2kb
|-----> samevideo.mp4, 210kb
```
### The backup process

Duplicati will always traverse the filesystem in "filesystem order", meaning whichever order the operating system returns the files and folders from a listing. This is usually the fastest way, as it relates to how the files are physically stored on the disk.

As Duplicati only works with absolute paths, it will see the following list:

```nohighlight
C:\data\
C:\data\mydoc.txt
C:\data\myvideo.mp4
C:\data\extra\
C:\data\extra\olddoc.txt
C:\data\extra\samevideo.mp4
```

For a real-world example, the list would be longer and would likely also have multiple filters, but for this example we omit these details.

To store the information about what is in the backup, Duplicati relies on standard file formats, and uses the JSON data format and Zip compression.

To store the file list, Duplicati creates a file named `duplicati-20161014090000.dlist.zip` locally, where the numbers represent the current date and time in the UTC timezone. Inside this zip archive is a single JSON file named `filelist.json`, which starts out as an empty list, expressed in JSON as `[]`.

To store the data from files, Duplicati creates a file named `duplicati-7af781d3401eb90cd371.dblock.zip`, where the letters and numbers are chosen at random and have no relation to the data nor the current time. Initially this zip file is empty.

You can see an overview of the process here:

*****

 

*****

*****

 

*****
### Processing a folder

When Duplicati receives the first entry, `C:\data\`, it notices that the entry is a folder and thus has no actual data, so it simply adds this entry to the `filelist.json` mentioned above, such that it now looks like:

```nohighlight
[
  {
    "type": "folder",
    "path": "C:\\data\\"
  }
]
```

In the actual implementation, it also stores metadata, such as permissions, modification times, etc., but we will omit those details here.

### Processing a small file

The next entry is `C:\data\mydoc.txt`, which is a file and thus has actual contents. Duplicati will read the file a "block" at a time, where a block is 100kb. As the file is only 4kb, it all "fits" inside a single block. Once the block is read, Duplicati computes a SHA-256 hash value and encodes it with Base64 encoding to get a string like `qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4=`. It then computes the SHA-256 value for the entire file and encodes it as Base64, but as the block and the file have the exact same contents (i.e. the whole file fits in a block), the value is the same: `qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4=`.
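This per-block hashing can be sketched in Python as follows. This is only an illustration of the idea, not Duplicati's actual implementation; the 100kb block size and the example file are taken from this article:

```nohighlight
import base64
import hashlib

BLOCK_SIZE = 100 * 1024  # the example block size used in this article

def block_hashes(path):
    """Read a file one block at a time and return the Base64-encoded SHA-256 hash
    of each block, plus the hash of the whole file."""
    hashes = []
    file_hash = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(base64.b64encode(hashlib.sha256(block).digest()).decode())
            file_hash.update(block)
    return hashes, base64.b64encode(file_hash.digest()).decode()

# For a 4kb file, the single block hash and the file hash are identical, as described above:
# block_hashes(r"C:\data\mydoc.txt")
```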
Note that no additional data is added to the hash. This is not required, as the hash values are not visible after the zip volumes are encrypted, thus giving an attacker no hints as to what the backup contains.

The data from the file (the 4kb) is then added to the `dblock` file mentioned above, using the hash string as the filename. This means that the `dblock` zip file contents are now:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
```

The file is then added to `filelist.json`, which now looks like this:

```nohighlight
[
  {
    "type": "Folder",
    "path": "C:\\data\\"
  },
  {
    "type": "File",
    "path": "C:\\data\\mydoc.txt",
    "size": 4096,
    "hash": "qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4="
  }
]
```
### Processing a large file

For the entry `C:\data\myvideo.mp4`, the same approach is used as described for `C:\data\mydoc.txt`, but the file is larger than the "block size" (100kb). This simply means that Duplicati computes 3 SHA-256 block hashes, where the first two cover 100kb each, and the last covers the remaining 10kb.

Each of these data blocks, or partial file contents, is added to the `dblock` file, which now contains:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY= (100kb)
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ= (100kb)
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA= (10kb)
```
Additionally, a file hash is computed, but unlike for the small file, the file hash is now different from the block hashes: `4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=`.

We could choose to store these values directly in `filelist.json`, for example:

```nohighlight
{
  "type": "File",
  "path": "C:\\data\\myvideo.mp4",
  "size": 215040,
  "hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
  "blocks": [
    "0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY=",
    "PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ=",
    "uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA="
  ]
}
```

Since we would then store around 47 characters for each 100kb of file data, a 1GB file would add 482kb of additional data to the filelist, making the filelists prohibitively large.

Instead, Duplicati adds an "indirection block", meaning that it creates a new block of data that contains only the hashes. Since a SHA-256 hash is 32 bytes when not encoded with Base64, we can store 3200 block hashes in a single block, meaning that the `filelist.json` file only grows by 47 bytes for approximately every 300MB of data.

For `C:\data\myvideo.mp4` it generated three blocks, so the new block with the three blockhashes takes up only 96 bytes. This new block is treated no differently than other blocks, and a SHA-256 hash is computed, giving the Base64-encoded "blockhash" value: `Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=`.
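A minimal Python sketch of how such an indirection block can be formed, assuming (as described here and in the restore appendix) that the raw 32-byte digests are simply concatenated and hashed like any other block:

```nohighlight
import base64
import hashlib

def blocklist_block(block_hashes_b64):
    """Concatenate the raw 32-byte block hashes into an indirection block and hash it."""
    raw = b"".join(base64.b64decode(h) for h in block_hashes_b64)  # 3 hashes -> 96 bytes
    blockhash = base64.b64encode(hashlib.sha256(raw).digest()).decode()
    return raw, blockhash

blocks = [
    "0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY=",
    "PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ=",
    "uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA=",
]
raw, blockhash = blocklist_block(blocks)
# raw is the 96-byte block that is added to the dblock file,
# blockhash is the value referenced from the "blocklists" entry in filelist.json.
```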
This new block is then added to the dblock file, which now contains:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY= (100kb)
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ= (100kb)
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA= (10kb)
Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA= (96b)
```

The new file entry is then stored in `filelist.json`, which then looks like:

```nohighlight
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\mydoc.txt",
|
||||||
|
"size": 4096,
|
||||||
|
"hash": "qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\myvideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
### Processing similar data

There are now three entries remaining in the source list, one of which is a folder, which is stored the same way as described earlier.

The file `C:\data\extra\olddoc.txt` is an older version of the document `C:\data\mydoc.txt`, which was already backed up. But as Duplicati simply computes the hash of the blocks in the new file, it computes `R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE=`, which does not match the previously computed hash for `C:\data\mydoc.txt`, and thus it is treated as a new block.

Some backup solutions will identify that fragments of the two files match, and produce smaller backups in this scenario.

Duplicati instead chooses to focus on simplicity and speed, and forgoes this potential space saver.

We chose to omit this part based on a number of observations:

* The files are ultimately compressed, so if the two similar files end up in the same compressed volume, the space will be saved by the compression algorithm anyway
* Small shifts are most commonly found in plain-text files (i.e. source code), as larger files are either:
    * not rewritten (databases, videos, old photos, etc.)
    * rewritten completely (images, videos)
    * rewritten by compression (docx, images)
* plain-text files tend to be small (compared to, say, images)
* plain-text files compress well

This means that an additional entry for `C:\data\extra\olddoc.txt` will occur in `filelist.json`:

```nohighlight
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\mydoc.txt",
|
||||||
|
"size": 4096,
|
||||||
|
"hash": "qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\myvideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\extra"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\extra\\olddoc.txt",
|
||||||
|
"size": 2048,
|
||||||
|
"hash": "R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE="
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
And the new block will also be added to the `dblock` file:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY= (100kb)
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ= (100kb)
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA= (10kb)
Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA= (96b)
R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE= (2kb)
```
Finally, the file `C:\data\extra\samevideo.mp4` is processed. Duplicati will treat each block individually, but figure out that it has already made a backup of each block, and not emit it to the `dblock` file. After all 3 block hashes are computed, it will then create a new block to store these 3 hashes, but find that such a block is already stored as well.

This approach is also known as deduplication, ensuring that each "chunk" of data is stored only once. With this approach, duplicate files are detected regardless of their names or locations. For systems like databases this works well, in that they usually append or replace parts of their storage file, which can then be isolated into changed 100kb blocks.
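As a rough illustration of this deduplication check (a sketch, not the actual engine), each block hash is looked up in the set of already-stored blocks and only unknown blocks are added:

```nohighlight
import base64
import hashlib

BLOCK_SIZE = 100 * 1024
known_blocks = set()   # hashes of blocks already present in a dblock file

def add_file(path, dblock):
    """Split a file into blocks and add only blocks that have not been seen before."""
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            h = base64.b64encode(hashlib.sha256(block).digest()).decode()
            if h not in known_blocks:       # samevideo.mp4 produces no new entries here
                known_blocks.add(h)
                dblock[h] = block           # dblock modelled as a dict of hash -> data
```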
The final contents of `filelist.json` are then:

```nohighlight
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\mydoc.txt",
|
||||||
|
"size": 4096,
|
||||||
|
"hash": "qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\myvideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\extra"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\extra\\olddoc.txt",
|
||||||
|
"size": 2048,
|
||||||
|
"hash": "R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\extra\\samevideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
And the final contents of the `dblock` file are then:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY= (100kb)
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ= (100kb)
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA= (10kb)
Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA= (96b)
R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE= (2kb)
```
### Further processing

Many details were omitted from the above example run; some of them can be summarized as:

* Metadata is treated like a normal block of data
* When a `dblock` file grows too big, a new one is created
* Zip archives contain a `manifest` file that describes the setup
* A local database is used to keep track of hashes and files
* A `dindex` file is created to keep track of which `dblock` file contains each hash

Some more details can be found in the whitepaper A block-based storage model for remote online backups in a trust-no-one environment. Even more details can be found in the Duplicati source code.
# How the restore process works
### Duplicati restore process

If you have read the How the backup process works document, you might be wondering how the restore process uses the stored data to restore your files. This document explains this process, using `Duplicati.CommandLine.RecoveryTool.exe` as the starting point, so we can skip the complications added by the local database. Unless you are curious about the inner workings of Duplicati, you do not need to read this document; you can simply use the normal restore process, or the recovery tool.

### The example files

For this document we continue with the example from the backup document, and walk through the restore process.

From the initial backup, we saw that the directory structure that was backed up looks like this:

```nohighlight
C:\data
|----> mydoc.txt, 4kb
|----> myvideo.mp4, 210kb
|----> extra
|-----> olddoc.txt, 2kb
|-----> samevideo.mp4, 210kb
```

To simplify matters, we will not use a local database, and we will only look at file data, not metadata and not directories.

### Getting the initial list

To start with, we need to pick the `dlist` file that contains the version of the files that we want. In a real-world example, you can look at the names of the `dlist` files, which have a timestamp embedded in them. The timestamps are stored in UTC, so you may need to adjust them to your local timezone.

For this example we only have a single `dlist` file, so we pick that one, download it and look at the contents:

```nohighlight
|
||||||
|
[
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\mydoc.txt",
|
||||||
|
"size": 4096,
|
||||||
|
"hash": "qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\myvideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "Folder",
|
||||||
|
"path": "C:\\data\\extra"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\extra\\olddoc.txt",
|
||||||
|
"size": 2048,
|
||||||
|
"hash": "R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE="
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"type": "File",
|
||||||
|
"path": "C:\\data\\extra\\samevideo.mp4",
|
||||||
|
"size": 215040,
|
||||||
|
"hash": "4sGwVN/QuWHD+yVI10qgYa4e2F5M4zXLKBQaf1rtTCs=",
|
||||||
|
"blocklists": [ "Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=" ]
|
||||||
|
}
|
||||||
|
]
|
||||||
|
```
From this we can see that there are 4 files we need to restore. We can also see that there are files that need "blocklists".

### Expanding blocklists

As explained in the How the Backup Process Works document, the blocklists are data blocks that contain additional hashes needed to restore the files. Thus we start by "expanding" the blocklists into a list of hashes that we need.

As the two files share the blocklist hash `Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=`, we only need to get this one block to complete the expansion phase.

Unfortunately, there is no correlation between the names of the `dblock` files and the data they contain, so we need to download all of them until we find the data we need. Since this is slow in a real-world scenario, Duplicati replicates this information in the `dindex` files, which are much smaller than the `dblock` files.

Assuming we have the same `dblock` file as mentioned in the backup document, we get a zip file with these files:

```nohighlight
qaFXpxVTuYCuibb9P41VSeVn4pIaK8o3jUpJKqI4VF4= (4kb)
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY= (100kb)
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ= (100kb)
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA= (10kb)
Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA= (96b)
R/XSNsb4ln/SkeJwFDd4Fv4OnW2QNIxMR4HItgg9qCE= (2kb)
```

Here we see that the file `Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=` is 96 bytes, and we know that a SHA-256 hash is 32 bytes. We can then compute that this chunk of data expands to `96/32 = 3` hashes. We could also compute this size by looking at the size of the file, and then finding out how many 100kb blocks it spans.
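A minimal Python sketch of this expansion step, assuming the blocklist entry has been extracted from the dblock zip file into a local file named after its hash:

```nohighlight
import base64

HASH_SIZE = 32  # a raw SHA-256 digest is 32 bytes

def expand_blocklist(raw):
    """Split an indirection block into its individual Base64-encoded block hashes."""
    return [base64.b64encode(raw[i:i + HASH_SIZE]).decode()
            for i in range(0, len(raw), HASH_SIZE)]

# with open("Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=", "rb") as f:
#     print(expand_blocklist(f.read()))  # prints the three hashes listed below
```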
For efficiency, the hashes are stored in raw binary format, but here we can represent them as base64 encoded strings:

```nohighlight
0td8NEaS7SMrQc5Gs0Sdxjb/1MXEEuwkyxRpguDiWsY=
PN2oO6eQudCRSdx3zgk6SJvlI5BquP6djt5hG4ZfRCQ=
uS/2KMSmm2IWlZ77JiHH1p/yp7Cvhr8CKmRHJNMRqwA=
```
We can now expand `Uo1f4rVjNRX10HkxQxXauCrRv0wJOvStqt9gaUT0uPA=` into the three real hashes, and we now have an expanded list of hashes for each file we need to restore.

### Restoring small files

Restoring the small files is simple: locate the `dblock` file that contains the block we need, extract that block, and write its contents to a file with the right name. In Duplicati, this is improved with `dindex` files, which contain a map showing which blocks can be found in which `dblock` file, so that unnecessary downloads can be avoided.

### Restoring large files

As we have expanded the list of hashes, we follow the same process as for the small files, and simply locate the blocks, one at a time. If we restore the blocks in the order specified, each block will have the correct length (100kb), so we can simply extract the blocks from the zip archive and append them to the file. Since we use a fixed block size, we can also choose to restore out-of-order, by computing the file offset for each block before inserting the data.
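A small Python sketch of the out-of-order variant; the only assumption is the fixed 100kb block size used throughout this example:

```nohighlight
BLOCK_SIZE = 100 * 1024

def restore_block(target, block_index, data):
    """Write one restored block at its computed offset, allowing out-of-order restores."""
    target.seek(block_index * BLOCK_SIZE)
    target.write(data)

# with open(r"C:\Restore\myvideo.mp4", "wb") as f:
#     restore_block(f, 2, tail_data)  # the 10kb tail block can be written before blocks 0 and 1
```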
### Verification of the restored files

As we saw in the file list contents, each file also has a `hash` property, which we can use after the restore process to verify that each file was restored correctly.

### Building a local index

For `Duplicati.CommandLine.RecoveryTool.exe` the above process is followed as described, but with two additional steps: download and index.

The download process merely downloads (and decrypts) all the dblock files it can find on the remote storage, such that all the following operations can be done with local files.

Since the recovery tool does not rely on `dindex` files, it would be extremely slow if it had to open every zip file to check whether it contains the block it wants to process. The index process speeds this up significantly by producing a plain text file, where each line represents a (block, zip archive) pair.

The index step opens each `dblock` file, lists the contents, and outputs a line for each data block it finds. After each `dblock` file has been processed, the index file is sorted alphabetically.

There are more efficient ways to store this data, but the text file allows an expert user to monitor, update and adjust the index file with a simple text editor, should something go wrong. An expert user can also investigate the `dlist` file, and use the index file to figure out where a particular block is.

For the final phase in the recovery tool, restore, the sorted index is used to locate the `dblock` file that contains each block. This lookup relies on the alphabetic sorting to ensure that lookup times do not grow linearly with the number of data blocks.
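A simplified Python sketch of these two steps; the actual file layout used by the Recovery Tool may differ, so treat this as an illustration of the idea only:

```nohighlight
import bisect
import glob
import os
import zipfile

def build_index(folder, indexfile="index.txt"):
    """Write one 'blockhash, dblock-path' line per data block, then sort the lines alphabetically."""
    lines = []
    for path in glob.glob(os.path.join(folder, "*.dblock.zip")):
        with zipfile.ZipFile(path) as z:
            lines.extend("{0}, {1}".format(name, path) for name in z.namelist())
    lines.sort()
    with open(os.path.join(folder, indexfile), "w") as f:
        f.write("\n".join(lines))

def find_dblock(sorted_lines, blockhash):
    """Binary-search the sorted index to find which dblock file contains a given block hash."""
    i = bisect.bisect_left(sorted_lines, blockhash)
    if i < len(sorted_lines) and sorted_lines[i].startswith(blockhash):
        return sorted_lines[i].split(", ", 1)[1]
    return None
```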
# Choosing sizes in Duplicati
All options in Duplicati are chosen to fit a wide range of users, such that as few users as possible need to change settings.

Some of these options are related to the sizes of various elements. Choosing these options optimally is a balance between different usage scenarios with different tradeoffs. This document explains what these tradeoffs are and how to choose the values that fit a specific backup best.

### The block size

As Duplicati makes backups with `blocks`, aka "file chunks", one option is to choose what size a "chunk" should be.

The chunk size is set via the advanced option `--block-size` and is set to 100kb by default. If a file is smaller than the chunk size, or its size is not evenly divisible by the block size, it will generate a block that is smaller than the chunk size.

Due to the way blocks are referenced (by hashes), it is not possible to change the chunk size after the first backup has been made. Duplicati will abort the operation with an error if you attempt to change the chunk size on an existing backup.

### Using larger block sizes

If you choose a larger chunk size, that will obviously generate "fewer but larger blocks", provided your files are larger than the chunk size. It is also possible to choose a smaller chunk size, but in most cases this has a negative impact.

Internally each block needs to be stored, so having fewer blocks means smaller (and thus faster) lookup tables. This effect is more noticeable if the database is stored on non-SSD disks (aka spinning disks).

If you have large files, choosing a large chunk size will also reduce the storage overhead a bit. When restoring, this is also a benefit, as more data can be streamed into the new file, and the data will likely span fewer remote files.

The downside to choosing a large chunk size is that change detection and deduplication cover a larger area.

If a single byte is changed in a file, Duplicati will need to upload a new chunk. If there are many small changes to the files, this will generate many new blocks, which increases the required storage space as well as the required bandwidth.

With larger chunk sizes, it is also less likely that deduplication will detect any matching chunks, as each shared chunk must contain more identical data.

If there is sufficient bandwidth to the remote destination, choosing a larger chunk size is usually beneficial. The lower limit is 10kb, and there is no upper limit, but choosing values larger than 1mb should only be done after evaluating the above impacts.
### Remote Volume Size

Rather than storing the chunks individually, Duplicati groups data into volumes, which reduces the number of remote files and the number of calls to the remote server. The volumes are then compressed, which saves storage space and bandwidth. Encryption is applied to the volumes, which reduces the possibility of someone deducing properties about the contents inside the volume.

The volume size can be set in the graphical user interface, as well as on the commandline with the option `--dblock-size`. The remote volumes are called `dblock` files internally, and that is the extension used for the files.

The default size is 50mb, which is chosen as a sensible default for home users with limited upload speeds.

Unlike the chunk size described above, it can be beneficial to either increase or decrease the volume size to fit your connection characteristics. Also, the volume size can be changed after a backup has been created.

### Increasing the Remote Volume Size

If you increase the volume size, it will again mean "fewer but larger files". On some servers, FTP in particular, there may be a limit on the number of files that can be listed. If you decrease the number of files, you can avoid hitting that limitation, and some servers will be faster when listing the contents. This delay is caused by servers, such as Amazon S3 and OneDrive, using "pagination", where large file lists must be collected with multiple calls.

As there are significantly fewer volume files than chunks, the impact on the local database and related operations is insignificant.

The downside of using larger volumes is seen when restoring files. As Duplicati cannot read data from inside the volumes, it needs to download the entire remote volume before it can extract the desired data. If a file is split across many remote volumes, e.g. due to updates, this will require a large number of downloads to extract the chunks.

Another potential downside of a larger remote volume is that the compression and encryption usually mean that data corruption destroys the entire volume, instead of just a few chunks.

If you have large datasets and a stable connection, it is usually desirable to use a larger volume size. Using volume sizes larger than 1gb usually causes slowdowns, as the files are slower to process, but there is no limit applied.

If you have an unstable connection, you may want to use smaller volume sizes, so that a failed transfer does not require such a large re-transfer. If you need frequent restores, with often-changed data, you may also want to use smaller volume sizes. To reduce the number of remote files in this scenario, consider splitting the backup into separate smaller backups.
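As an illustration, both sizes can be supplied when a backup is defined on the command line. The storage URL and passphrase are reused from the Recovery Tool example earlier in this manual, the source folder is illustrative, and the option spellings follow this manual; check the option reference of your installation before relying on them:

```nohighlight
Duplicati.CommandLine.exe backup "ftp://myftpserver.com/Backup/Pictures?auth-username=duplicati&auth-password=backup" "C:\Users\User\Pictures" --passphrase="4u7P_re5&+Gb>6NO{" --block-size=200KB --dblock-size=200MB
```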
# Filters
### General concept of filters

Duplicati 2.0 can include or exclude files and folders from your backup based on so-called filters. A filter can consist of multiple rules, and each rule decides whether a file is included in the backup or excluded from it. This decision is made based on the name of the file or folder. Besides these name-based rules there is also a set of specific settings which exclude files with specific attributes (like system files or hidden files) or files that exceed a specific size.

Duplicati's filter engine processes folders first, then files. The reason for this behavior is performance: if a folder is excluded from a backup, the files inside that folder do not have to be processed at all.

When filter rules have been defined, the first folder is taken and the filter rules are processed one by one. The first rule that matches is applied, and the following rules are not processed anymore. For instance, if the first rule excludes a folder, then this folder and all files within it will be excluded from the backup, even if following rules include this folder or its files.

It is recommended to write folder rules first and file rules afterwards. That way, rules are written in the same order in which they take effect when Duplicati processes them, which makes the filters easier to understand.

By default, all files and folders will be backed up. That means, if no rule matches, the file or folder will be included. In the special case that all rules are include rules (which does not make sense when all files and folders are included per default), Duplicati assumes that all other files and folders are meant to be excluded (this had to be defined as another rule in Duplicati 1.3, but most people found that confusing, so we changed it in Duplicati 2.0).

### Syntax

If you want to use file globbing to specify rules, `?` and `*` are allowed placeholders. `?` matches any single character, `*` matches zero or more characters. Rules can also be specified as regular expressions. In this case, put the regular expression (using .NET syntax) into square brackets `[]`. Folder names always end with a slash `/` on Linux or Mac and a backslash `\` on Windows. For instance, `log` is a file, `log/` is a folder. In the UI, a rule to include starts with a `+`, a rule to exclude starts with a `-`. On the command line there are specific settings to specify include or exclude rules: `--include` and `--exclude`. Multiple rules can be specified by repeating `--include` or `--exclude`.

### Settings

Besides filter rules there are settings that can exclude specific files by their attributes. Those settings are `--skip-files-larger-than` and `--exclude-files-attributes`. The latter is able to exclude files that have any of the following attributes: ReadOnly, Hidden, System, Directory, Archive, Device, Normal, Temporary. These settings are applied to all files of the backup.

### Common use cases

**Exclude specific sub-folders.** On your NAS you want to back up all photos. Your photos are stored in hundreds of folders, and each of those folders contains a sub-folder called "@eaDir" that contains thumbnails in different sizes that your NAS uses for a web interface. You want to back up your photos but not the thumbnails. In this example you just exclude all thumbnail folders and thus their content. The rule is: `-*/@eaDir/`. Don't forget the trailing slash that defines @eaDir as a folder.

**Exclude specific files.** You store your photos and movies in the same folders. For some reason, you do not want to include the movies in your backup. Depending on why you want to exclude those movies, there are different solutions. The first solution is to define a rule like `-*.mov` or `-*.avi`. The second solution is to specify `--skip-files-larger-than=10M`, which will exclude all files that are larger than 10MB, which will probably affect all movies but no photos. The third way is to explicitly say that you only want to include the photos; read on to see how to do that.

**Include specific files only.** You have a folder structure that contains a lot of photos and movies from your camera. For some reason you only want to include the photos in your backup. The rule for your backup is `+*.jpg +*.jpeg`. As there are only include rules in this filter, Duplicati automatically excludes all other files. This had to be done manually in Duplicati 1.3, which made include rules a little difficult for most users.

**Include some files, exclude others.** Now let's define a filter that does both of the above. First it excludes @eaDir by specifying `-*/@eaDir/`. Then it includes only JPG files by specifying `+*.jpg`. The problem here is that Duplicati includes all files and folders per default. This means that e.g. /photos/movie.avi will also be part of the backup. To make the include rule effective, an additional rule is required that excludes all files that do not match any of the current rules. The filter must say "exclude this, exclude that, include this but nothing else". The best rule for "but nothing else" is a regular expression that excludes all files. It is `-[.*[^/]]` on Linux or Mac, and on Windows the rule is `-[.*[^\\]]`. The rule says "exclude everything that is not a folder". The final filter then is `-*/@eaDir/ +*.jpg +*.jpeg -[.*[^/]]`. Duplicati will process all folders except @eaDir/ and it will include JPG and JPEG files but exclude all other files.
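On the command line, the same filter could be expressed with repeated `--exclude` and `--include` options. `<storage-URL>` and `<source-folder>` are placeholders, the catch-all rule is shown in its Linux/Mac form, and the exact quoting depends on your shell:

```nohighlight
Duplicati.CommandLine.exe backup <storage-URL> <source-folder> --exclude="*/@eaDir/" --include="*.jpg" --include="*.jpeg" --exclude="[.*[^/]]"
```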
# How we get along with OAuth
### The Issue with OAuth

Duplicati is a backup tool running in the background. This usually means that there is no way to prompt users for input easily. This gets even more important when the command line interface is used to run backup scripts automatically and unattended.

Unfortunately, OneDrive and other services are designed to work with OAuth. OAuth is designed to grant access for a limited amount of time after users have authenticated correctly. This is not well suited for background services such as Duplicati, which need to obtain access permissions regularly without prompting the user to log in.

To remedy this, we provide a service for Duplicati 2.0 users. The service handles the requirements for the OAuth login, and Duplicati then uses this service to get access to the stored data. The service stores neither your OneDrive username nor your password. It is a bit difficult to explain in detail how the service works, but let's try!

With OAuth, the users start on a Duplicati page to set up a new connection. There the users find a link which redirects to the OneDrive login page. After the users log in, OneDrive displays the desired access and the application name. The users accept this and are redirected back to the Duplicati page. When this happens, an exchange is performed in the background where Duplicati and OneDrive exchange tokens. The token grants Duplicati access for a limited amount of time.

Furthermore, OneDrive allows users to block an application entirely. That is why OneDrive needs an "app secret" that identifies the application. This secret cannot be shared with anyone, so it stays entirely in the Duplicati service. Leaking this "app secret" would allow an attacker to impersonate Duplicati. This means that putting the app secret into the Duplicati application is a no-go.

When Duplicati sends the "app secret" to OneDrive, it responds with a unique numeric Windows Live user id (CID) and a "refresh token". Duplicati then generates a random authid token, which is used to encrypt the refresh token and CID inside the service. When a backup starts, Duplicati sends the authid token to the service, which decrypts the refresh token in memory. The refresh token is then sent to OneDrive together with the "app secret". OneDrive grants an "access token" which is valid for one hour. This access token is returned to the Duplicati instance and used to upload and download files. If the backup takes longer than an hour, the process is repeated automatically.

This slightly complicated multi-step approach protects your Windows Live login details, so you never share your actual username (email) or your password with Duplicati.

On top of this, the service encrypts all information, so it cannot be used without the authid token. This ensures that an attacker needs to listen in on a live session to obtain the authid and access token; a full exposure of the service database alone poses no risk to your data. You can create as many authid tokens as you like, and you can always revoke an authid token on the service if you fear it has been compromised.

### How it works in nine steps
1. To configure a OneDrive connection, the user follows a link in the UI that requests access to OneDrive from "the service".
2. To grant access to OneDrive, the user has to confirm by logging in at OneDrive.
3. OneDrive then connects to the service and provides a user CID and a refresh-token.
4. The service stores the CID as well as the refresh-token in encrypted form. The service also generates an authid token for that user. The authid token is presented to the user and manually copied into Duplicati. The connection has now been set up.
5. When Duplicati wants to establish a connection, it sends the authid token to the service. The service uses the provided authid to decrypt the refresh-token.
6. The refresh-token is sent to OneDrive along with the app secret.
7. OneDrive checks the refresh-token and gives back a one-hour access token that grants access to the real user account.
8. The service hands the one-hour access token back to Duplicati.
9. Duplicati uses the one-hour access token to connect to OneDrive. As soon as the connection fails, it requests a new token the same way the first one was requested.

### What if the service gets hacked?

Then the hackers get a lot of blobs with encrypted refresh-tokens. Without the users' authids, the refresh-tokens are useless. Without the service's app secret, the authids are useless. With one of them being useless, the other part is useless as well. In summary, if the service gets hacked, the hackers get a lot of useless stuff.

# APPENDIX F - License Agreement
```nohighlight
|
||||||
|
GNU LESSER GENERAL PUBLIC LICENSE
|
||||||
|
Version 2.1, February 1999
|
||||||
|
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
|
||||||
|
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
|
||||||
|
Everyone is permitted to copy and distribute verbatim copies
|
||||||
|
of this license document, but changing it is not allowed.
|
||||||
|
|
||||||
|
[This is the first released version of the Lesser GPL. It also counts
|
||||||
|
as the successor of the GNU Library Public License, version 2, hence
|
||||||
|
the version number 2.1.]
|
||||||
|
Preamble
|
||||||
|
The licenses for most software are designed to take away your freedom to share and
|
||||||
|
change it. By contrast, the GNU General Public Licenses are intended to guarantee your
|
||||||
|
freedom to share and change free software--to make sure the software is free for all
|
||||||
|
its users.
|
||||||
|
This license, the Lesser General Public License, applies to some specially designated
|
||||||
|
software packages--typically libraries--of the Free Software Foundation and other
|
||||||
|
authors who decide to use it. You can use it too, but we suggest you first think
|
||||||
|
carefully about whether this license or the ordinary General Public License is the
|
||||||
|
better strategy to use in any particular case, based on the explanations below.
|
||||||
|
When we speak of free software, we are referring to freedom of use, not price. Our
|
||||||
|
General Public Licenses are designed to make sure that you have the freedom to
|
||||||
|
distribute copies of free software (and charge for this service if you wish); that you
|
||||||
|
receive source code or can get it if you want it; that you can change the software and
|
||||||
|
use pieces of it in new free programs; and that you are informed that you can do these
|
||||||
|
things.
|
||||||
|
To protect your rights, we need to make restrictions that forbid distributors to deny
|
||||||
|
you these rights or to ask you to surrender these rights. These restrictions translate
|
||||||
|
to certain responsibilities for you if you distribute copies of the library or if you
|
||||||
|
modify it.
|
||||||
|
|
||||||
|
For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.
|
||||||
|
We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.
|
||||||
|
To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.
|
||||||
|
Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.
|
||||||
|
Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.
|
||||||
|
When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.
|
||||||
|
We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.
|
||||||
|
For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.
|
||||||
|
In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.
|
||||||
|
Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.
|
||||||
|
The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run.
|
||||||
|
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
|
||||||
|
0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you".
|
||||||
|
A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables.
|
||||||
|
The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".)
|
||||||
|
"Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library.
|
||||||
|
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does.
|
||||||
|
1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library.
|
||||||
|
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
|
||||||
|
2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
|
||||||
|
a) The modified work must itself be a software library.
|
||||||
|
b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change.
|
||||||
|
c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License.
|
||||||
|
d)
|
||||||
|
If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful.
|
||||||
|
(For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.)
|
||||||
|
These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
|
||||||
|
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library.
|
||||||
|
In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
|
||||||
|
3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices.
|
||||||
|
Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy.
|
||||||
|
This option is useful when you wish to copy part of the code of the Library into a program that is not a library.
|
||||||
|
4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange.
|
||||||
|
If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code.
|
||||||
|
5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License.
|
||||||
|
However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables.
|
||||||
|
When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law.
|
||||||
|
If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)
|
||||||
|
Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself.
|
||||||
|
6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications.
|
||||||
|
You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things:
|
||||||
|
a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)
|
||||||
|
b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.
|
||||||
|
c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.
|
||||||
|
d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place.
|
||||||
|
e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy.
|
||||||
|
For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
|
||||||
|
It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute.
|
||||||
|
7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things:
|
||||||
|
a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above.
|
||||||
|
b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
|
||||||
|
8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
|
||||||
|
9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it.
|
||||||
|
10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License.
|
||||||
|
11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library.
|
||||||
|
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances.
|
||||||
|
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
|
||||||
|
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
|
||||||
|
12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
|
||||||
|
13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
|
||||||
|
Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation.
|
||||||
|
14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
|
||||||
|
NO WARRANTY
|
||||||
|
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||||
|
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
```
|
82
mkdocs.yml
Normal file
@ -0,0 +1,82 @@
# Project information
site_name: "Duplicati 2 User's Manual"
site_description: 'Documentation for Duplicati 2'
site_author: 'K. Zaaijer'
site_url: 'https://www.duplicati.com/'

# Repository
repo_name: 'squidfunk/mkdocs-material'
repo_url: 'https://github.com/squidfunk/mkdocs-material'

# Copyright
copyright: 'Copyright © 2016 - 2017 The Duplicati Team'

# Configuration
theme:
  name: 'material'
  language: 'en'
  palette:
    primary: 'blue'
    accent: 'blue'
  font:
    text: 'Roboto'
    code: 'Roboto Mono'
  logo: 'duplicatilogo.png'
  favicon: 'favicon.ico'
  feature:
    tabs: false

# Customization
extra:
  social:
    - type: 'edge'
      link: 'https://www.duplicati.com'
    - type: 'comment'
      link: 'https://forum.duplicati.com'
    - type: 'github'
      link: 'https://github.com/duplicati/duplicati'
    - type: 'google-plus'
      link: 'https://plus.google.com/105271984558189185842'
    - type: 'facebook'
      link: 'http://www.facebook.com/pages/Duplicati/105118456272281'
    - type: 'btc'
      link: 'bitcoin:1Lfzs4EQBtjqQyARfxW1vH5JMRaz7tVCir'
    - type: 'paypal'
      link: 'https://www.paypal.com/cgi-bin/webscr?cmd=_xclick&business=paypal%40hexad%2edk&item_name=Duplicati%20Donation&no_shipping=2&no_note=1&tax=0&currency_code=EUR&bn=PP%2dDonationsBF&charset=UTF%2d8&lc=US'

# Google Analytics
google_analytics:
  - 'UA-XXXXXXXX-X'
  - 'auto'

# Extensions
markdown_extensions:
  - admonition
  - codehilite:
      guess_lang: false
  - toc:
      permalink: true

pages:
  - Home: index.md
  - Manual:
    - Introduction: 01-introduction.md
    - Installation: 02-installation.md
    - Using the Graphical User Interface: "03-using-the-graphical-user-interface.md"
    - Using Duplicati from the Command Line: "04-using-duplicati-from-the-command-line.md"
    - Storage Providers: "05-storage-providers.md"
    - Advanced Options: "06-advanced-options.md"
    - Other Command Line Utilities: "07-other-command-line-utilities.md"
    - Disaster Recovery: "08-disaster-recovery.md"
  - Articles:
    - "How the Backup Process Works": "appendix-a-how-the-backup-process-works.md"
    - "How the Restore Process Works": "appendix-b-how-the-restore-process-works.md"
    - "Choosing Sizes in Duplicati": "appendix-c-choosing-sizes-in-duplicati.md"
    - "Filters": "appendix-d-filters.md"
    - "How We Get Along With OAuth": "appendix-e-how-we-get-along-with-oauth.md"
    - License Agreement: "appendix-f-license-agreement.md"