I can’t connect to the backup repo and get the error message `Connection closed by remote host`. Is Borg working on the server?
This is almost always a problem with SSH keys. Double check the following to debug further:
- Have you already assigned an SSH key to the repo on BorgBase.com?
- Is SSH using the key you assigned? By default only `~/.ssh/id_[rsa | ed25519 | ecdsa]` are used. When using a custom key name, you can reference it in `~/.ssh/config`. Or, to only use this key with BorgBase:
Host *.repo.borgbase.com
  IdentityFile ~/.ssh/id_custom
- Are the key permissions OK? Private keys need to have a permission of `0600`. To change the permission:
$ chmod 0600 ~/.ssh/id_custom
- If you still get errors, try connecting to your repo using the `ssh` command with verbose logging enabled:
$ ssh -v email@example.com
This will print a list of keys being tried and potential problems. You won’t get a shell at the end, as BorgBase only supports access via `borg`. Once you see `Remote: Key is restricted.` or `PTY allocation request failed on channel 0`, the login step worked. A combined example using a custom key follows after this checklist.
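If you want to rule out key-selection issues entirely, you can also point `ssh` at the assigned key explicitly. A minimal sketch, assuming a custom key at `~/.ssh/id_custom`; the repo address is a placeholder:
```
$ ssh -v -i ~/.ssh/id_custom xxxx@xxxx.repo.borgbase.com   # placeholder repo address
```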
My SSH key is set to append-only access, but I can still prune or delete old archives. Why is append-only mode not working?
The Borg developers made the decision to have delete commands fail “silently”. This means that prune and delete commands will still appear to succeed from the client’s perspective, but no data will be deleted on the server.
With append-only mode enabled, the repository keeps a timestamped transaction log. This allows going back to previous states, even if prune or delete commands were issued by the backup client.
If you need to restore an older repo version, please contact support. We will make a full copy of the repo and do the restore there.
First, it helps to understand the steps `borg` follows when creating an initial or new backup:
- First it will do some housekeeping, like getting the index from the repo if there is no local copy.
- Next it will compare the inode IDs and other attributes of all files to determine which files changed. If you move your files to a new file system, the first backup run can take a bit longer, but no new data will be copied.
- If new data is found, Borg will checksum, compress and encrypt the files as segments of 1-5 MB, skipping any known segments. So if part of a large file changes, only new parts will be uploaded.
- Last, it will upload new segments to BorgBase.
So the upload speed is not always the main bottleneck. Depending on your setup, you should also watch out for CPU usage or disk IO. It’s generally difficult to improve uplink speed, but if you are CPU- or IO-limited, there are a few settings you can tune. Just be aware of the trade-offs. Maybe you are OK with a slower initial backup in order to have a smaller well-compressed backup in the future.
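To see which of these steps dominates on your setup, it can help to run a backup with per-file listing and final statistics enabled. A minimal sketch; the repo URL and source path are placeholders:
```
# repo URL and source path are placeholders
$ borg create --stats --progress --list --filter=AME \
    ssh://xxxx@xxxx.repo.borgbase.com/./repo::{hostname}-{now} /path/to/data
```
The `--filter=AME` option limits the file list to added, modified and errored files, which makes it easy to spot data that is unexpectedly being re-read or re-uploaded.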
- Make sure you choose an appropriate compression level for your data. In general `lz4` (fast, but low compression), `zstd,3` (medium compression) and `zstd,8` (high compression) will work well. See the configuration sketch after this list.
- If you don’t need additional file flags, you can disable them with `bsd_flags: false` in Borgmatic (this option may be renamed in a future version).
- Avoid excessive archive checking: `borg check` can read all backup segments and confirm their consistency. For large repos this can take a long time. BorgBase already uses different techniques to avoid bitrot in the storage backend, so `borg check` is not strictly necessary for this purpose. In Borgmatic you can disable or reduce the frequency of these checks via the `checks` setting.
- If you suspect a slow or unstable network connection, we can temporarily enable `iperf3` for you server-side.
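To illustrate the settings above, here is a rough Borgmatic configuration sketch. It is not a drop-in config: the exact option placement depends on your Borgmatic version (older releases group options into sections such as `location`, `storage` and `consistency`), and the repo URL is a placeholder.
```yaml
location:
    source_directories:
        - /home
    repositories:
        - ssh://xxxx@xxxx.repo.borgbase.com/./repo   # placeholder repo URL
    bsd_flags: false            # skip extra file flags if you don't need them

storage:
    compression: zstd,3         # lz4 = fastest, zstd,8 = smallest

consistency:
    checks:
        - disabled              # or run checks on a less frequent schedule
```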
Both regions are currently using hardware RAID-6 backed storage servers. This protects against hardware failure and a degree of bit rot. For a list of the providers we work with, you can also see our GDPR page.
I have an existing Borg Backup repo on my own server or with another provider. How can I move it to BorgBase?
We offer free migration services for both incoming and outgoing transfers. Currently this is done manually. To start a transfer, follow these steps:
- For incoming transfers, create an empty repo in your BorgBase account.
- Make sure your old repo data is accessible from the internet somehow, e.g. via SSH, FTP or HTTP. For SSH we will provide you with a one-time public key to use (see the example after these steps).
- Contact our support and provide the BorgBase repo ID and the login details of the transfer source or target.
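For the SSH case, authorizing the one-time key on your old server usually just means appending it to the transfer user’s `authorized_keys` file. A sketch, assuming the key we send you is saved as `borgbase-transfer.pub` (a hypothetical file name):
```
$ cat borgbase-transfer.pub >> ~/.ssh/authorized_keys   # key file name is hypothetical
$ chmod 600 ~/.ssh/authorized_keys
```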
If Borg happens to be busy on the client- or server side, it may not send data over the SSH connection for a while. In this case, some ISPs will terminate the connection after a period of inactivity. You would then see an error like this:
Remote: packet_write_wait: Connection to xxx.xxx.xxx.xxx: Broken pipe
Connection closed by remote host
or
Remote: client_loop: send disconnect: Broken pipe
RemoteRepository: 2.61 kB bytes sent, 1.01 MB bytes received, 52 messages sent
Connection closed by remote host
This means the SSH connection has been terminated and Borg is unable to send data to the server-side process. The solution is to have the client send regular keepalive packets while no data is being transferred. On the client machine, you can add the configuration below to `~/.ssh/config`:
Host *.repo.borgbase.com
  ServerAliveInterval 10
  ServerAliveCountMax 30
This configuration means that the client will send a null packet every 10 seconds to keep the connection alive. If it doesn’t get a response 30 times, the connection will be closed.
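If you prefer not to edit `~/.ssh/config`, the same options can also be passed to the SSH command Borg uses via the `BORG_RSH` environment variable, for example:
```
# same keepalive values as the ~/.ssh/config example above
$ export BORG_RSH='ssh -o ServerAliveInterval=10 -o ServerAliveCountMax=30'
```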
BorgBase already has the appropriate `ClientAliveInterval` configuration server-side.
If you still encounter issues, you may be using a VPN or mobile network that aggressively terminates idle connections.
BorgBase always displays the actual disk usage, as measured on the file system. This includes some metadata and index files, so slight variations from the space usage reported by `borg info` under All Archives > Deduplicated Size are expected.
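To compare the two numbers, you can check the size Borg itself reports (the repo URL is a placeholder):
```
$ borg info ssh://xxxx@xxxx.repo.borgbase.com/./repo   # placeholder repo URL
```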
If you see larger variations, you are probably running your repo in append-only mode. This means `borg` never really deletes old segments, so over time the actual disk usage will be higher than what `borg` reports. From the docs:
As data is only appended, and nothing removed, commands like prune or delete won’t free disk space, they merely tag data as deleted in a new transaction.
If you are OK with fully removing those old segments, just write to the repo with a full-access key. This will clean up the old segments.
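For example, a rough sketch (the repo URL and retention options are placeholders; on Borg 1.2 and later, freeing the space additionally requires a separate `borg compact` run):
```
# placeholder repo URL and retention options
$ borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://xxxx@xxxx.repo.borgbase.com/./repo
$ borg compact ssh://xxxx@xxxx.repo.borgbase.com/./repo   # Borg 1.2+ only
```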
Be aware that as soon as you write to the repo in non-append-only mode (e.g. prune, delete or create archives from an admin machine), the previously deleted objects will be removed permanently.