Post-processing engines

When a backup archive is created, the application normally stores it on the server where it is installed. Typically this is the same server hosting the backed up site, most likely under the same hosting account as well. This is a bad idea! If your server goes down, or a hacker infiltrates your site or the server it's on, your backups will be gone together with your site, leaving you without a way to restore it. The solution is to transfer your backup archive to off-site storage as soon as the backup is complete. Doing that manually is time consuming. If you have automated the backup it's even worse, as you would have to remember to do it every time an automatic backup runs.

This is where the post-processing engines come into play. Instead of having you manually transfer backup archives, the application can do it for you. With a wide array of options, you'll be hard pressed to find a cloud or remote storage provider that doesn't work with it!

[Note]Note

If you enable the Process each part immediately option, the reported size of the backup will often be inaccurate.

Before you use any of them, you should know the limitations. Most remote storage engines do not allow appending to files, so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to remote storage equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit so as to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives by setting the Part size for split archives option in the archiver engine configuration pane. The suggested values are between 2Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc.). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing (transferring to remote storage), please lower the part size before asking for support.
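If you want to sanity-check this arithmetic for your own server, the calculation is simple enough to script. The following PHP sketch assumes a 20Mbit uplink; that figure is an assumption, not something the application measures, so replace it with a realistic estimate for your host.

    <?php
    // Back-of-the-envelope part size estimate, following the formula above.
    $bandwidthMbits = 20;                  // assumed uplink cap, in megabits per second
    $bandwidthMB    = $bandwidthMbits / 8; // 1 byte is 8 bits (ignoring protocol overhead)
    $timeLimit = (int) ini_get('max_execution_time');
    if ($timeLimit <= 0) {
        $timeLimit = 10;                   // be conservative if unlimited or unknown
    }
    echo 'Maximum safe part size: about ' . floor($bandwidthMB * $timeLimit) . "Mb\n";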

Finally, please note that both PHP and many remote storage providers have a maximum file size cap. PHP can't reliably create archives over 2Gb in size. Some remote storage providers have limits of their own. It is generally a good idea not to use part sizes over 100Mb unless you are willing to do some trial and error until you get the perfect limits for your site.

[Tip]Tip

If you use the native CRON method (cli/backup.php) for backup scheduling, there is usually no time limit - or there is a very high time limit, in the area of 3 minutes. Ask your host about it. Setting up a profile for use only with the native CRON method allows you to increase the part size and reduce the number of parts a complete backup consists of.
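For example, on a Linux host a crontab entry along these lines would run a daily backup at 03:00 server time. The PHP CLI binary path, installation path and profile number are placeholders; adjust them for your own server.

    # Run backup profile #1 every day at 03:00 (paths are placeholders).
    0 3 * * * /usr/local/bin/php /home/myuser/mysite/cli/backup.php --profile=1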

No post-processing

This is the default setting. It does no post-processing. It simply leaves the backup archives on your server.

Upload to CloudMe

Upload to CloudMe

Using this engine, you can upload your backup archives to the European cloud storage service CloudMe.

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to CloudMe.

Username

Your CloudMe username

Password

Your CloudMe password

Directory

The directory inside your CloudMe Blue Folder™ where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM].
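For example, assuming your site is www.example.com, a Directory setting of backups/[HOST]/[DATE] would store each backup in a folder such as backups/www.example.com/20191231, keeping archives from different sites and days neatly separated. (The exact expansion of [DATE] follows the application's archive naming conventions; the directory name here is just an illustration.)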

Upload to Microsoft Windows Azure BLOB Storage service

Upload to Microsoft Windows Azure BLOB Storage

Using this engine, you can upload your backup archives to the Microsoft Windows Azure BLOB Storage cloud storage service. This cloud storage service from Microsoft is reasonably priced (the cost is very close to CloudFiles) and quite fast, with lots of local endpoints around the globe.

[Warning]Warning

Azure, unlike other cloud storage providers, doesn't support storing files over 64Mb. As a result you MUST use a part size for archive splitting lower than 64Mb at all times. Failure to do so might cause your backup uploads to fail.

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Microsoft Azure BLOB Storage.

Account name

The account name for your Microsoft Azure subscription. If your endpoint looks like foobar.blob.core.windows.net then your account name is foobar.

Primary Access Key

You can find this Key in your Azure account page.

Container

The name of the Azure container where you want to store your archives.

Directory

The directory inside your Azure container where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. Leave blank to store the files in the container's root.

Upload to RackSpace CloudFiles

Using this engine, you can upload your backup archives to the RackSpace CloudFiles cloud storage service. This service has been around for a long time, under the Mosso brand, and is considered one of the most dependable ones. Its cheap prices make it ideal for applications where storing large quantities of backup archives is more likely than downloading them.

Upload to RackSpace CloudFiles

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to CloudFiles.

Username

The username assigned to you by the RackSpace CloudFiles service

API Key

The API Key found in your CloudFiles account

Container

The name of the CloudFiles container where you want to store your archives.

Directory

The directory inside your CloudFiles container where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. Leave blank to store the files in the container's root.

Upload to OVH Object Storage
[Note]Note

This feature is available only to Akeeba Backup and Akeeba Solo Professional.

Using this engine, you can upload your backup archives to the OVH Object Storage cloud storage service. This allows you to upload files into OVH's public cloud, powered by the OpenStack technology.

Before you begin, you should know the limitations. Like most cloud storage providers, OVH does not allow appending to files, so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to OVH equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit so as to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 10Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc.). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing, lower the part size.

[Tip]Tip

If you use the native CRON mode (akeeba-backup.php), there is usually no time limit - or there is a very high time limit, in the area of 3 minutes. Ask your host about it. Setting up a profile for use only with the native CRON mode allows you to increase the part size and reduce the number of parts a complete backup consists of.

Before you begin

You will need to set up object storage and collect some necessary but not necessarily obvious information from your OVH account. You can do so through OVH's Cloud Manager portal.

From the left side menu click on Servers and expand your cloud server. If you do not have a server yet you will need to use the Order button to purchase credits. Please note that credits activation can take several days if this is your first order.

Click on the Infrastructure link under your server. In the main area of the manager page you will see the name of your server. Below it, in hard to see grey letters, you will see a 32-digit alphanumeric code such as abcdef0123456789abcdef0123456789. Note it down. This is your Project ID.

Click on the Storage link under your server. You will see a list of your containers. If you do not have any containers yet, create a new one. Make sure to select the Private type; you don't want your backups to be publicly accessible! Click on your container's name. The main area changes. You will see a box with information such as objects, container size and Container URL. Note down the Container URL.

Click on the OpenStack link under your server. If you have not created an OpenStack user yet, create one now. Copy the values under the ID and Password columns. These are, respectively, your OpenStack Username and OpenStack Password.

Upload to OVH Object Storage

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to OVH.

Project ID

See above.

OpenStack Username

See above.

OpenStack Password

See above.

Container URL

See above.

Directory

The directory inside your OVH container where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. Leave blank to store the files in the container's root.

Upload to pCloud
[Note]Note

This feature is available only to Akeeba Backup Professional and requires an active subscription to use.

Using this engine, you can upload your backup archives to pCloud, a cheap and secure cloud storage service. Before you begin, you should know the limitations.

0. Slow requests. pCloud applies rate limiting to its API in a rather awkward manner: all requests are handled with a delay of 3 to 6 seconds per HTTP connection. This means that simple things like checking whether a folder exists take an extraordinary amount of time (minutes instead of seconds), which makes them impossible to use in the real world. Furthermore, the delay in processing means that the maximum size of files you can upload is further restricted. We expect timeouts to occur on most hosts if you are uploading part files over 10MB. This makes pCloud impractical for storing backups of medium-sized sites or bigger.

1. Single part only. pCloud does not allow appending to files, so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to pCloud equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit so as to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 5 and 10 MB. Please note that chunked uploads are also not supported in pCloud's published SDK for PHP, so it's not just us telling you it's not possible; it's pCloud itself.

If you use the native CRON mode (akeeba-backup.php), there is usually no time limit - or there is a very high time limit, in the area of 3 minutes. Ask your host about it. Setting up a profile for use only with the native CRON mode allows you to increase the part size and reduce the number of parts a complete backup consists of.

2. The Crypto folder cannot be supported. While pCloud supports client-side encryption (the Crypto folder) in its desktop and mobile client software, this is not supported over their public web API. As a result you cannot upload your backup archives directly to your Crypto folder. You will have to upload them to regular pCloud storage. Afterwards, you can move them to your Crypto folder from your desktop or mobile device. Please note that this feature is also not supported in pCloud's published SDK for PHP, so it's not just us telling you it's not possible; it's pCloud itself.

3. Upload directory must exist. The directory where you are uploading the backup archives must already exist. We cannot create a new directory. This has to do with limitation #0, slow requests. A directory three levels deep can require more than half a minute just to determine whether it exists, making directory creation an unrealistic feature for most real world servers: the upload would time out, causing the backup process to fail.

4. You must be logged in to pCloud.com BEFORE using the Authentication - Step 1 in our software. This only applies if you have Two Factor Authentication (TFA) enabled on your account. Due to the awkward and rather haphazard way TFA is implemented in pCloud you can only ever be asked for TFA when logging in to pCloud.com proper. The OAuth2 endpoint, used to link pCloud with Akeeba Backup, does NOT support TFA. If you try logging in through it you will see a puzzling message that Two Factor Authentication is required. The only solution is to log into pCloud.com in your browser RIGHT BEFORE using the Authentication - Step 1 in our software. This is a bug in pCloud's web application, not our software.

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to pCloud.

Directory

The directory inside your pCloud account where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. Leave blank to store the files in your account's root.

The directory must already exist. See limitations above.

You cannot use your pCloud Crypto folder. See limitations above.

Access Token

Automatically filled in when you use the Authentication - Step 1 button.

If you are linking another site to the same pCloud account you should copy the access token from your already set up site and not try to run the Authentication - Step 1 again.

Upload to DreamObjects

Upload to DreamObjects

Using this engine, you can upload your backup archives to the DreamObjects cloud storage service by DreamHost.

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to DreamObjects.

Access Key

Your DreamObjects Access Key

Secret Key

Your DreamObjects Secret Key

Use SSL

If enabled, an encrypted connection will be used to upload your archives to DreamObjects. In this case the upload will take slightly longer, as encryption - what SSL does - is more resource intensive than uploading unencrypted files. You may have to lower your part size.

Bucket

The name of your DreamObjects bucket where your files will be stored. The bucket must already be created; the application cannot create buckets.

[Warning]Warning

DO NOT CREATE BUCKETS WITH NAMES CONTAINING UPPERCASE LETTERS. If you use a bucket with uppercase letters in its name it is very possible that the application will not be able to upload anything to it for reasons that have to do with the S3 API implemented by DreamObjects. It is not something we can "fix" in the application. If this is the case with your site, please don't ask for support; simply create a new bucket whose name only consists of lowercase unaccented Latin characters (a-z), numbers (0-9), dashes and dots.

Moreover, you cannot use a bucket name containing a dot together with the Use SSL option. This is a limitation of the SSL setup in DreamHost servers and cannot be worked around.

Directory

The directory inside your DreamObjects bucket where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM].

Regarding the naming of buckets and directories, you have to be aware of the S3 API rules used by DreamObjects:

  • Folder names can not contain backward slashes (\). They are invalid characters.

  • Bucket names can only contain lowercase letters, numbers, periods (.) and dashes (-). Accented characters, international characters, underscores and other punctuation marks are illegal characters.

    [Important]Important

    Even if you created a bucket using uppercase letters, you must type its name with lowercase letters. The S3 API used by DreamObjects automatically converts the bucket name to all-lowercase. Also note that, as stated above, you may NOT be able to use such a bucket at all under some circumstances. Generally, you should avoid using uppercase letters.

  • Bucket names must start with a number or a letter.

  • Bucket names must be 3 to 63 characters long.

  • Bucket names can't be in an IP format, e.g. 192.168.1.2

  • Bucket names can't end with a dash.

  • Bucket names can't have an adjacent dot and dash. For example, both my.-bucket and my-.bucket are invalid.

If any - or all - of those rules are broken, you'll end up with error messages that the application couldn't connect to DreamObjects, that the calculated signature is wrong or that the bucket does not exist. This is normal and expected behaviour, as the S3 API of DreamObjects drops the connection when it encounters invalid bucket or directory names.
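These rules are mechanical enough to check in code before you create a bucket. Below is a rough PHP sketch of such a check; the function name is ours, for illustration only, and is not part of the application.

    <?php
    // Rough check of the bucket naming rules listed above (illustration only).
    function isValidBucketName(string $name): bool
    {
        $length = strlen($name);
        return $length >= 3 && $length <= 63                 // 3 to 63 characters long
            && preg_match('/^[a-z0-9][a-z0-9.-]*$/', $name)  // lowercase letters, numbers, dots, dashes; starts with a letter or number
            && substr($name, -1) !== '-'                     // can't end with a dash
            && !preg_match('/\.-|-\./', $name)               // no adjacent dot and dash
            && !filter_var($name, FILTER_VALIDATE_IP);       // can't be in an IP format
    }

    var_dump(isValidBucketName('my-site.backups')); // bool(true)
    var_dump(isValidBucketName('My_Backups'));      // bool(false)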

Upload to Dropbox (v1 API)
[Important]Important

This is the old method to connect to Dropbox. The v1 API may be removed by Dropbox at any time. We recommend that all users migrate to the v2 API instead.

Using this engine, you can upload your backup archives to the low-cost Dropbox cloud storage service (http://www.dropbox.com). This is an ideal option for small websites with a low budget, as this service offers 2Gb of storage space for free, all the while retaining all the pros of storing your files on the cloud. Even if your host's data center is annihilated by a natural disaster and your local PC and storage media are wiped out by an unlikely event, you will still have a copy of your site readily accessible and easy to restore.

[Warning]Warning

You normally cannot upload files over 150Mb to Dropbox. This is a limitation imposed by the Dropbox API. You MAY be able to upload larger files by enabling the Enabled chunked upload option. We generally suggest using a Part Size for Split Archives less than 150Mb with Dropbox.

Upload to Dropbox

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Dropbox.

Authorisation – Step 1 and 2

Before you can use the application with Dropbox you have to "link" your Dropbox account with your Akeeba Solo / Akeeba Backup installation. This allows the application to access your Dropbox account without you storing the username (email) and password to the application. The authentication is a simple, two step process. First click on the Authentication - Step 1 button. A popup window opens, allowing you to log in to your Dropbox account. Once you log in successfully, close the popup. Then, click on the Authentication - Step 2 button. It should show a message dialog reading OK!. This means that your Akeeba Solo / Akeeba Backup installation and Dropbox account are now linked.

[Warning]Warning

You must only do this in the FIRST installation you want to link to Dropbox. For all subsequent installations please copy the Token, Token Secret Key and User ID and do NOT use the authorisation buttons. This is a limitation of the Dropbox OAuth2 API.

Directory

The directory inside your Dropbox account where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory.

Enabled chunked upload

When enabled, the application will try to upload your backup archives / backup archive parts in small chunks and then ask Dropbox to assemble them back into one file. This allows you to transfer larger archives more reliably and works around the 150Mb limitation of Dropbox's API. In other words, if your backup is over 150Mb and you don't want to split it into smaller parts (multiple files) you must enable this option.

Chunk size

This option determines the size of the chunk which will be used by the chunked upload option above. We recommend a relatively small value, around 5 to 20 Mb, to prevent backup timeouts. The exact maximum value you can use depends on the speed of your server and its connection speed to the Dropbox server. Try starting high and lower it if the backup fails during transfer to Dropbox.

Token

This is the connection token to Dropbox. Normally, it is automatically fetched from Dropbox when you click on the Authentication - Step 2 button above. However, if you have multiple sites you want to connect to the same Dropbox account you must not use the authentication buttons on each site. Doing so will unauthenticate (disconnect) all sites except the last one you authenticated. Instead, if you have multiple sites, use the Authentication buttons only on the first site. Then copy the Token, Token Secret Key and User ID from the first site to each and every other site you want to connect with Dropbox.

Token Secret Key

See above.

User ID

See above.

Upload to Dropbox (v2 API)
[Important]Important

This is the new method to connect to Dropbox. The v1 API may be removed by Dropbox at any time. We recommend that all users migrate to this method which uses the newer v2 API.

Using this engine, you can upload your backup archives to the low-cost Dropbox cloud storage service (http://www.dropbox.com). This is an ideal option for small websites with a low budget, as this service offers 2Gb of storage space for free, all the while retaining all the pros of storing your files on the cloud. Even if your host's data center is annihilated by a natural disaster and your local PC and storage media are wiped out by an unlikely event, you will still have a copy of your site readily accessible and easy to restore.

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Dropbox.

Authorisation

Before you can use the application with Dropbox you have to "link" your Dropbox account with your Akeeba Solo / Akeeba Backup installation. This allows the application to access your Dropbox account without you storing the username (email) and password to the application. The authentication is a simple process. First click on the Authentication - Step 1 button. A popup window opens, allowing you to log in to your Dropbox account. Once you log in successfully, click the blue button to transfer the access token back to your Akeeba Solo / Akeeba Backup installation.

Unlike the v1 API, you can perform the same procedure on every single site you want to link to Dropbox.

Directory

The directory inside your Dropbox account where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory.

Enabled chunked upload

The application will always try to upload your backup archives / backup archive parts in small chunks and then ask Dropbox to assemble them back into one file. This allows you to transfer larger archives more reliably and works around the 150Mb limitation of Dropbox's API.

When you enable this option every step of the chunked upload process will take place in a separate page load, reducing the risk of timeouts if you are transferring large archive part files (over 10Mb). When you disable this option the entire upload process has to take place in a single page load.

[Warning]Warning

When you select Process each part immediately this option has no effect! In this case the entire upload operation for each part will be attempted in a single page load. For this reason we recommend that you use a Part Size for Split Archives of 5Mb or less to avoid timeouts.

Chunk size

This option determines the size of the chunk which will be used by the chunked upload option above. We recommend a relatively small value, around 5 to 20 Mb, to prevent backup timeouts. The exact maximum value you can use depends on the speed of your server and its connection speed to the Dropbox server. Try starting high and lower it if the backup fails during transfer to Dropbox.

Token

This is the connection token to Dropbox. Normally, it is automatically fetched from Dropbox when you click on the Authentication - Step 1 button above. If for any reason this method does not work for you, you can copy the Token from the popup window or from another Akeeba Backup / Akeeba Solo installation you have already connected to Dropbox.

Send by email

Send by email

This engine sends you the backup archive parts as file attachments to your email address. That said, beware of the restrictions:

[Warning]Warning

You must set the Part size for split archives setting of the Archiver engine to a value between 1 and 10 Megabytes. If you choose a big value (or leave the default value of 0, which means that no split archives will be generated) you run the risk of the process timing out, of a memory exhaustion error, or of your email server being unable to cope with the attachment size and dropping the email.

[Important]Important

You must set up the application's email engine in the System Configuration page before using this feature. The default settings do not work with all hosts out there.

The available configuration settings for this engine, accessed by pressing the Configure... button next to it, are:

Process each part immediately

If you enable this, each backup part will be emailed to you as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. The drawback of enabling this option is that if the email fails, the backup fails. If you don't enable this option, the email process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the email process fails. Its drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are emailed to you. Very useful to conserve disk space and practice the good security measure of not leaving your backups on your server.

Email address

The email address where you want your backups sent to.

Email subject

A subject for the email you'll receive. You can leave it blank if you want to use the default. However, we suggest using something descriptive, e.g. your site's name and the description of the backup profile.

Upload to OneDrive (LEGACY)
[Note]Note

This is a legacy feature which will be removed from future versions of Akeeba Backup / Solo. Please use the "Upload to OneDrive and OneDrive for Business" option instead.

You should no longer use this integration. It only supports OneDrive with personal Microsoft / Xbox accounts. It does not support OneDrive for Business accounts which is what you get with Office 365 or a work or school account. The API it is using will be removed by Microsoft. Therefore we have no option but to remove the legacy integration from Akeeba Backup / Solo. Do note that we already have a replacement integration, using Microsoft's newer API for accessing OneDrive.

Therefore, if you were already using this option we recommend that you switch to the "Upload to OneDrive and OneDrive for Business" storage option. You will need to link your site to OneDrive again.

Since this option will be removed from Akeeba Backup / Solo soon we will no longer document its options.

Upload to OneDrive and OneDrive for Business
[Note]Note

This feature is available only to Akeeba Backup / Solo Professional and requires an active subscription to use.

Using this engine, you can upload your backup archives to the low-cost Microsoft OneDrive cloud storage service (https://onedrive.live.com). Moreover, this engine also supports OneDrive for Business which is the kind of free OneDrive storage you get with your Office 365 for Business, work or school account.

OneDrive is an ideal option for small to medium websites with a low budget. It offers a substantial amount of free storage, especially if you are an Office 365 or Office 365 for Business subscriber. Even if your host's data center is annihilated by a natural disaster and your local PC and storage media are wiped out by an unlikely event, you will still have a copy of your site readily accessible and easy to restore.

Please note that OneDrive is rather slow. If you have a big site, take frequent backups, or upload performance is otherwise of the essence, you should use a speedier storage provider such as Amazon S3, BackBlaze B2 or, if you'd rather remain in Microsoft's cloud ecosystem, Microsoft Azure BLOB Storage.

Important security and privacy information

The OneDrive integration uses the OAuth 2 authentication method. This requires a fixed endpoint (URL) for each application which uses it, such as Akeeba Backup / Solo. Since Akeeba Backup / Solo is installed on your site, and therefore has a different endpoint URL for each installation, you could not normally use OneDrive's API to upload files. We have solved this problem by creating a small script which lives on our own server and acts as an intermediary between your site and OneDrive. When you are linking Akeeba Backup / Solo to OneDrive you are going through the script on our site. Moreover, whenever the access token (a time-limited, really long password given by OneDrive to your Akeeba Backup / Solo installation to access the service) expires, your Akeeba Backup / Solo installation has to exchange it with a new token. This process also takes place through the script on our site. Please note that even though you are going through our site we DO NOT store this information and we DO NOT have access to your OneDrive account.

WE DO NOT STORE THE ACCESS CREDENTIALS TO YOUR ONEDRIVE ACCOUNT. WE DO NOT HAVE ACCESS TO YOUR ONEDRIVE ACCOUNT. CONNECTIONS TO OUR SITE ARE PROTECTED BY STRONG ENCRYPTION (HTTPS), THEREFORE NOBODY ELSE CAN SEE THE INFORMATION EXCHANGED BETWEEN YOUR SITE AND OUR SITE AND BETWEEN OUR SITE AND ONEDRIVE. HOWEVER, AT THE FINAL STEP OF THE AUTHENTICATION PROCESS, YOUR BROWSER IS SENDING THE ACCESS TOKENS TO YOUR SITE. SOMEONE CAN STEAL THEM IN TRANSIT IF AND ONLY IF YOU ARE NOT USING HTTPS ON YOUR SITE'S ADMINISTRATOR.

For this reason we DO NOT accept any responsibility whatsoever for any use, abuse or misuse of your connection information to OneDrive. If you do not accept this condition you are FORBIDDEN from using the intermediary script on our site which, simply put, means that you cannot use the OneDrive integration.

[Important]Important

Access to the intermediary script on our servers requires a. an active subscription to any of our products and b. entering a valid Download ID for your AkeebaBackup.com account in the component's options. If the Download ID is invalid or corresponds to an expired subscription you will be unable to use the intermediary script on our servers. As a result you will be unable to upload backup archives to OneDrive.

Moreover, the above means that there are additional requirements for using OneDrive integration on your Akeeba Backup / Solo installation:

  • You need the PHP cURL extension to be loaded and enabled on your server. Most servers do that by default. If your server doesn't have it enabled the upload will fail and warn you that cURL is not enabled.

  • Your server's firewall must allow outbound HTTPS connections to www.akeebabackup.com over port 443 (standard HTTPS port) to get new tokens every time the current access token expires.

  • Your server's firewall must allow outbound HTTPS connections to OneDrive's domains over port 443 to allow the integration to work. These domain names are, unfortunately, not predefined. Most likely your server administrator will have to allow outbound HTTPS connections to any domain name to allow this integration to work. This is a restriction of how the OneDrive service is designed, not something we can modify (obviously, we're not Microsoft).

Settings

Upload to OneDrive

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to OneDrive.

Authorisation – Step 1

Before you can use Akeeba Backup with OneDrive you have to "link" your OneDrive account with your Akeeba Backup installation. This allows Akeeba Backup to access your OneDrive account without storing your username (email) and password in the application. The authentication is a simple process. First click on the Authentication - Step 1 button. A popup window opens, allowing you to log in to your OneDrive account. Once you log in successfully, you are shown a page with the access and refresh tokens (the "keys" returned by OneDrive to be used for connecting to the service) and the URL to your site. Double check that the URL to your site is correct and click on the big blue "Finalize authentication" button. The popup window closes automatically.

Alternatively, instead of clicking that big blue button you can copy the Access Token and Refresh Token from the popup window to Akeeba Backup's configuration page at the same-named fields. Afterwards you can close the popup.

[Important]Important

As described above, this process routes you through our own site (akeebabackup.com) due to OneDrive's API restrictions. We do NOT store your login information or tokens and we do NOT have access to your OneDrive account. If, however, you do not agree being routed through our site you are FORBIDDEN from using this intermediary service on our site and you cannot use the OneDrive integration feature. We repeat for a third time that this is a restriction imposed by the OneDrive API, not us. We CANNOT work around this restriction, so we created a very secure solution which works within the restrictions imposed by the OneDrive API.

Directory

The directory inside your OneDrive account where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory.

Enabled chunked upload

When enabled Akeeba Backup will try to upload your backup archives / backup archive parts in small chunks and then ask OneDrive to assemble them back into one file. If your backup archive parts are over 10Mb you are strongly encouraged to check this option.

Chunk size

This option determines the size of the chunk which will be used by the chunked upload option above. We recommend a relatively small value around 4 to 20 Mb to prevent backup timeouts. The exact maximum value you can use depends on the speed of your server and its connection speed to OneDrive's server. Try starting high and lower it if the backup fails during transfer to OneDrive. You cannot set a chunk size lower than 1Mb or higher than 60Mb because of OneDrive's API restrictions. We recommend using 4, 10 or 20Mb (tested and found to be properly working).

Access Token

This is the connection token to OneDrive. Normally, it is automatically sent to your site when clicking the blue button from the Authentication Step 1 popup described above. If you do not wish to click that button copy the (very, VERY long!) Access Token from that popup window into this box.

Unlike other engines, such as Dropbox, you CANNOT share OneDrive tokens between multiple sites. Each site MUST go through the authentication process described above and use a different set of Access and Refresh tokens!

Refresh Token

This is the refresh token to OneDrive, used to get a fresh Access Token when the previous one expires. Normally, it is automatically sent to your site when clicking the blue button from the Authentication Step 1 popup described above. If you do not wish to click that button copy the (very, VERY long!) Refresh Token from that popup window into this box.

Unlike other engines, such as Dropbox, you CANNOT share OneDrive tokens between multiple sites. Each site MUST go through the authentication process described above and use a different set of Access and Refresh tokens!

Upload to Remote FTP server
[Note]Note

This feature is available only to Akeeba Solo and Akeeba Backup Professional.

[Note]Note

This engine uses PHP's native FTP functions. This may not work if your host has disabled PHP's native FTP functions or if your remote FTP server is incompatible with them. In this case you may want to use the Upload to Remote FTP server over cURL engine instead.

Using this engine, you can upload your backup archives to any FTP or FTPS (FTP over Explicit SSL) server. There are some "FTP" protocols and other file storage protocols which are not supported, such as SFTP, SCP, Secure FTP, FTP over Implicit SSL and SSH variants. The difference between this engine and the DirectFTP archiver engine is that this engine uploads backup archives to the server, whereas DirectFTP uploads the uncompressed files of your site. DirectFTP is designed for rapid migration; this engine is designed for easy moving of your backup archives to an off-server location.

Your originating server must support PHP's FTP extensions and not have its FTP functions blocked. Your originating server must not block FTP communication to the remote (target) server. Some hosts apply a firewall policy which requires you to specify to which hosts your server can connect. In such a case you might need to allow communication to your remote host.
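If you are not sure whether your host meets these requirements, a quick way to find out is a throwaway PHP script using the same native FTP functions this engine relies on. The hostname, credentials and directory below are placeholders; substitute your own values.

    <?php
    // Throwaway connectivity test using PHP's native FTP functions.
    $conn = ftp_connect('ftp.example.com', 21, 10); // host, port, 10 second timeout
    if ($conn === false) {
        die("Cannot reach the FTP server. Check the hostname and the firewall.\n");
    }
    if (!ftp_login($conn, 'myuser', 'mypassword')) {
        die("Login failed. Check the username and password.\n");
    }
    ftp_pasv($conn, true);          // passive mode; see the option below
    print_r(ftp_nlist($conn, '/')); // list the contents of the initial directory
    ftp_close($conn);

If this script fails, the engine will fail in the same way; ask your host about PHP's FTP functions and outbound FTP connections before troubleshooting further.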

Before you begin, you should know the limitations. Most servers do not allow resuming of uploads (or even if they do, PHP doesn't quite support this feature), so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to FTP equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit so as to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 10Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc.). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing, lower the part size.

Upload to Remote FTP Server

The available configuration options are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to the FTP server.

Host name

The hostname of your remote (target) server, e.g. ftp.example.com. You must NOT enter the ftp:// protocol prefix. If you do, Akeeba Backup will try to remove it automatically and issue a warning about it.

Port

The TCP/IP port of your remote host's FTP server. It's usually 21.

User name

The username you have to use to connect to the remote FTP server.

Password

The password you have to use to connect to the remote FTP server.

Initial directory

The absolute FTP directory to your remote site's location where your archives will be stored. This is provided by your hosting company. Do not ask us to tell you what you should put in here because we can't possibly know. There is an easy way to find it, though. Connect to your target FTP server with FileZilla. Navigate to the intended directory. Above the right-hand folder pane you will see a text box with a path. Copy this path and paste it to Akeeba Backup's setting.

Use FTP over SSL

If your remote server supports secure FTP connections over SSL (they have to be Explicit SSL; implicit SSL is not supported), you can enable this feature. In such a case you will most probably have to change the port. Please ask your hosting company to provide you with more information on whether they support this feature and what port you should use. Note that this feature must also be supported by your originating server.

Use passive mode

Normally you should enable it, as it is the most common and firewall-safe transfer mode supported by FTP servers. Sometimes, your remote server might require active FTP transfers. In such a case please disable this, but bear in mind that your originating server might not support active FTP transfers, which usually requires tweaking the firewall!

Upload to Remote FTP server over cURL
[Note]Note

This feature is available only to Akeeba Solo and Akeeba Backup Professional.

[Note]Note

This engine uses PHP's cURL functions. This may not work if your host has not installed or enabled the cURL functions. In this case you may want to use the Upload to Remote FTP server engine instead.

Using this engine, you can upload your backup archives to any FTP or FTPS (FTP over Explicit SSL) server. There are some "FTP" protocols and other file storage protocols which are not supported, such as SFTP, SCP, Secure FTP, FTP over Implicit SSL and SSH variants. The difference between this engine and the DirectFTP over cURL archiver engine is that this engine uploads backup archives to the server, whereas DirectFTP over cURL uploads the uncompressed files of your site. DirectFTP over cURL is designed for rapid migration; this engine is designed for easy moving of your backup archives to an off-server location.

Your originating server must support PHP's cURL extension and not have its FTP functions blocked. Your originating server must not block FTP communication to the remote (target) server. Some hosts apply a firewall policy which requires you to specify to which hosts your server can connect. In such a case you might need to allow communication to your remote host.

Before you begin, you should know the limitations. Most servers do not allow resuming of uploads (or even if they do, PHP doesn't quite support this feature), so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to FTP equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit so as to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 10Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc.). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing, lower the part size.
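To illustrate what this engine does, here is a minimal single-step FTP upload using PHP's cURL functions. It is a simplified sketch, not the engine's actual code; the URL, credentials and file paths are placeholders.

    <?php
    // Simplified single-step FTP upload via cURL (placeholders throughout).
    $localFile = '/path/to/site-backup.jpa';
    $fp = fopen($localFile, 'rb');
    $ch = curl_init('ftp://myuser:mypassword@ftp.example.com:21/backups/site-backup.jpa');
    curl_setopt($ch, CURLOPT_UPLOAD, true);              // upload rather than download
    curl_setopt($ch, CURLOPT_INFILE, $fp);               // read the archive from this handle
    curl_setopt($ch, CURLOPT_INFILESIZE, filesize($localFile));
    curl_setopt($ch, CURLOPT_FTP_USE_EPSV, true);        // passive mode
    curl_setopt($ch, CURLOPT_FTP_SKIP_PASV_IP, true);    // the "Passive mode workaround" option below
    if (curl_exec($ch) === false) {
        echo 'Upload failed: ' . curl_error($ch) . "\n"; // e.g. a timeout; lower the part size
    }
    curl_close($ch);
    fclose($fp);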

The available configuration options are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to the FTP server.

Host name

The hostname of your remote (target) server, e.g. ftp.example.com. You must NOT enter the ftp:// protocol prefix. If you do, Akeeba Backup will try to remove it automatically and issue a warning about it.

Port

The TCP/IP port of your remote host's FTP server. It's usually 21.

User name

The username you have to use to connect to the remote FTP server.

Password

The password you have to use to connect to the remote FTP server.

Initial directory

The absolute FTP directory to your remote site's location where your archives will be stored. This is provided by your hosting company. Do not ask us to tell you what you should put in here because we can't possibly know. There is an easy way to find it, though. Connect to your target FTP server with FileZilla. Navigate to the intended directory. Above the right-hand folder pane you will see a text box with a path. Copy this path and paste it to Akeeba Backup's setting.

Use FTP over SSL

If your remote server supports secure FTP connections over SSL (they have to be Explicit SSL; implicit SSL is not supported), you can enable this feature. In such a case you will most probably have to change the port. Please ask your hosting company to provide you with more information on whether they support this feature and what port you should use. Note that this feature must also be supported by your originating server.

Use passive mode

Normally you should enable it, as it is the most common and firewall-safe transfer mode supported by FTP servers. Sometimes, your remote server might require active FTP transfers. In such a case please disable this, but bear in mind that your originating server might not support active FTP transfers, which usually requires tweaking the firewall!

Passive mode workaround

Some badly configured / misbehaving servers report the wrong IP address when you enable the passive mode. Usually they report their internal network IP address (something like 127.0.0.1 or 192.168.1.123) instead of their public, Internet-accessible IP address. This erroneous information confuses the FTP client, causing uploads to stall and eventually fail. Enabling this workaround option instructs cURL to ignore the IP address reported by the server and instead use the server's public IP address, as seen by your server. In most cases this works much better, therefore we recommend leaving this option turned on if you're not sure. You should only disable it in case of an exotic setup where the FTP server uses two different public IP addresses for the control and data channels.

Upload to Google Storage (Legacy S3 API)

Using this engine, you can upload your backup archives to the Google Storage cloud storage service using the interoperable API (Google Storage simulates the API of Amazon S3).

Before you begin, go to your Google API Console, select your project and then select Google Cloud Storage from the left-hand sidebar. Under "Interoperable Access" enable the "Make this my default project for interoperable storage access" option. Then, you need to create an Access and Secret key pair for use in the application. You can create keys by using the Google Cloud Storage key management tool. You can create up to five sets of developer keys. After creating an access and secret key pair, copy the Access Key and Secret Key. You will paste them into the application's configuration page, into the Access Key and Secret Key areas.

Upload to Google Storage

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful when you are low on disk space (disk quota), especially in conjunction with Delete archive after processing. When using this feature we suggest keeping free space equal to 10Mb plus your part size for split archives. The drawback of enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that a valid backup will still be stored on your server even if the upload process fails. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Google Storage.

Access Key

Your Google Storage Access Key, available from the Google Cloud Storage key management tool.

Secret Key

Your Google Storage Secret Key, available from the Google Cloud Storage key management tool.

Use SSL

If enabled, an encrypted connection will be used to upload your archives to Google Storage. In this case the upload will take longer, as encryption - what SSL does - is a resource intensive operation. You may have to lower your part size. We strongly recommend enabling this option for enhanced security.

Bucket

The name of your Google Storage bucket where your files will be stored. The bucket must already exist; the application cannot create buckets.

[Warning]Warning

DO NOT CREATE BUCKETS WITH NAMES CONTAINING UPPERCASE LETTERS. If you use a bucket with uppercase letters in its name it is very possible that the application will not be able to upload anything to it.

Please note that this is a limitation of the API. It is not something we can "fix" in the application. If this is the case with your site, please simply create a new bucket whose name only consists of lowercase unaccented Latin characters (a-z), numbers (0-9), dashes and dots.

Moreover, you cannot use a bucket name containing a dot together with the Use SSL option. This is a limitation of the SSL setup on Google's servers and cannot be worked around.

Directory

The directory inside your Google Storage bucket where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM]. A sketch of how this expansion works is shown below.
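
The following minimal sketch shows how such a variable expansion can work, assuming simple string replacement; it is illustrative and not the application's actual implementation.

    <?php
    // Illustrative sketch: expand the documented [DATE], [TIME], [HOST] and
    // [RANDOM] variables in a directory name.
    function expandDirectory(string $directory): string
    {
        return strtr($directory, [
            '[DATE]'   => gmdate('Ymd'),
            '[TIME]'   => gmdate('His'),
            '[HOST]'   => $_SERVER['HTTP_HOST'] ?? 'localhost',
            '[RANDOM]' => bin2hex(random_bytes(8)),
        ]);
    }

    // e.g. "backups/[HOST]/[DATE]" becomes "backups/www.example.com/20250101"
    echo expandDirectory('backups/[HOST]/[DATE]');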

Regarding the naming of buckets and directories, you have to be aware of the Google Storage rules:

  • Folder names can not contain backward slashes (\). They are invalid characters.

  • Bucket names can only contain lowercase letters, numbers, periods (.) and dashes (-). Accented characters, international characters, underscores and other punctuation marks are illegal characters.

  • Bucket names must start with a number or a letter.

  • Bucket names must be 3 to 63 characters long.

  • Bucket names can't be in an IP format, e.g. 192.168.1.2

  • Bucket names can't end with a dash.

  • Bucket names can't have an adjacent dot and dash. For example, both my.-bucket and my-.bucket are invalid.

If any - or all - of those rules are broken, you'll end up with error messages that the application couldn't connect to Google Storage, that the calculated signature is wrong or that the bucket does not exist. This is normal and expected behaviour, as Google Storage drops the connection when it encounters invalid bucket or directory names.
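
If you would like to check a bucket name before creating it, the following minimal sketch encodes the rules above as a small validation routine. It is illustrative only and slightly stricter than the list (it also rejects a trailing dot).

    <?php
    // Illustrative sketch: validate a bucket name against the rules above.
    function isValidBucketName(string $name): bool
    {
        return (bool) preg_match('/^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$/', $name) // charset, length, start/end
            && !preg_match('/^\d{1,3}(\.\d{1,3}){3}$/', $name)                  // not an IP address
            && !preg_match('/\.-|-\./', $name);                                 // no adjacent dot and dash
    }

    var_dump(isValidBucketName('my-backup.bucket')); // bool(true)
    var_dump(isValidBucketName('my.-bucket'));       // bool(false)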

Upload to Google Storage (JSON API)

Using this engine, you can upload your backup archives to the Google Storage cloud storage service using the official Google Cloud JSON API. This is the preferred method for using Google Storage.

Foreword and requirements

Setting up Google Storage is admittedly complicated. We did ask Google for permission to use the much simpler end-user OAuth2 authentication, a method which is more suitable for people who are not backend developers or IT managers. Unfortunately, their response on July 14th, 2017 was that we were not allowed to. They said in no uncertain terms that we MUST have our clients use Google Cloud Service Accounts. Unfortunately this comes with increased server requirements and more complicated setup instructions.

First, the requirements. Google Storage support requires the openssl_sign() function to be available on your server and to support the "sha256WithRSAEncryption" method (PHP must be compiled against OpenSSL library version 0.9.8l or later). If you are not sure, please ask your host. Please note that the software versions required for Google Storage integration have been around since early 2012, so they shouldn't be a problem for any decently up-to-date host.
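
If you want to test this yourself, the minimal diagnostic sketch below checks whether openssl_sign() exists and accepts the sha256WithRSAEncryption method. It is something you can run on your server, not part of the application.

    <?php
    // Diagnostic sketch: verify the Google Storage (JSON API) requirements.
    if (!function_exists('openssl_sign')) {
        die('openssl_sign() is not available on this server.');
    }

    // Generate a throwaway RSA key and try signing with the required method.
    $key = openssl_pkey_new([
        'private_key_bits' => 2048,
        'private_key_type' => OPENSSL_KEYTYPE_RSA,
    ]);
    $ok = openssl_sign('test payload', $signature, $key, 'sha256WithRSAEncryption');

    echo $ok
        ? 'sha256WithRSAEncryption is supported.'
        : 'sha256WithRSAEncryption is NOT supported.';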

Moreover, we are only allowed to give you the following quick start instructions as an indicative way to set up Google Storage. If you need support for creating a service account or granting Akeeba Backup the appropriate permissions via the IAM Policies, Google requested that we direct you to their Google Cloud Support page. We are afraid this means that we will not be able to provide you with support about any issues concerning the Google Cloud side of the setup at the request of Google.

We apologize for any inconvenience. We have no option but to abide by Google's terms. It's their service, their API and their rules.

Performance and stability

According to our extensive tests in different server environments, the performance and stability of Google Storage is not a given. We've seen upload operations randomly failing with a Google-side server error or timing out when the immediately prior upload of an identically sized file chunk worked just fine. We've seen file deletions taking anywhere from 0.5 to 13 seconds per file, for the same file, storage class and bucket, with the command always issued from the same server. Please note that you might experience random upload failures. Moreover, you might experience random failures applying remote storage quotas if deleting the obsolete files takes too long to be practical. These issues are on Google Storage's side and cannot be worked around in any way using code in the context of a backup application that's bound by PHP and web server time limits.

We recommend using a remote storage service with good, consistent performance such as Amazon S3 or BackBlaze B2.

Initial Setup

Before you begin you will need to create a JSON authorization file for Akeeba Backup / Akeeba Solo. Please follow the instructions below, step by step, to do this. Kindly note that you can reuse the same JSON authorization file on multiple sites and / or backup profiles.

  1. Go to https://console.developers.google.com/permissions/serviceaccounts?pli=1

  2. Select the API Project where your Google Storage bucket is already located in.

  3. Click on Create Service Account

  4. Set the Service Account Name to Akeeba Backup Service Account

  5. Click on Role and select Storage, Storage Object Admin

  6. Check the Furnish a new private key checkbox.

  7. The Key Type section appears. Make sure JSON is selected.

  8. Click on the CREATE link at the bottom right.

  9. Your browser prompts you to download a file. Save it as googlestorage.json. You will need to paste the contents of this file in the Contents of googlestorage.json (read the documentation) field in the Configuration page of Akeeba Backup / Akeeba Solo.

[Important]Important

If you lose the googlestorage.json file you will have to delete the Service Account and create it afresh. If you had any sites already set up with this googlestorage.json you will need to reconfigure them with the new file you created for the new Service Account. In short: don't lose that file, you will need it to (re)connect your sites with Google Storage.

Post-processing engine options

Upload to Google Storage (JSON API)

The settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Google Storage.

Enable chunk upload

When enabled, Akeeba Backup / Akeeba Solo will upload your backup archives in 5Mb chunks. This is the recommended method for larger (over 10Mb) archives and/or archive parts.
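
To illustrate what chunked uploading means in practice, here is a minimal sketch of a loop that sends a file in 5Mb pieces, each PUT request carrying a Content-Range header in the style of Google's resumable upload protocol. The $sessionUrl (a pre-created resumable session URI) and the authentication required to obtain it are assumed and omitted; this is not the application's actual implementation.

    <?php
    // Illustrative sketch: send a local file in 5MB chunks to a resumable
    // upload session. $sessionUrl is a hypothetical, pre-created session URI.
    $sessionUrl = 'https://storage.example.com/upload-session'; // hypothetical
    $file       = '/path/to/site.jpa';                          // hypothetical
    $chunkSize  = 5 * 1024 * 1024;
    $total      = filesize($file);
    $fp         = fopen($file, 'rb');

    for ($offset = 0; $offset < $total; $offset += $chunkSize) {
        $data = fread($fp, $chunkSize);
        $last = $offset + strlen($data) - 1;

        $ch = curl_init($sessionUrl);
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
        curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
        curl_setopt($ch, CURLOPT_HTTPHEADER, ["Content-Range: bytes {$offset}-{$last}/{$total}"]);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch); // a real implementation would check the HTTP status here
        curl_close($ch);
    }

    fclose($fp);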

Bucket

The name of your Google Storage bucket where your files will be stored. The bucket must already exist; the application cannot create buckets.

[Warning]Warning

DO NOT CREATE BUCKETS WITH NAMES CONTAINING UPPERCASE LETTERS. If you use a bucket with uppercase letters in its name it is very possible that the application will not be able to upload anything to it.

Please note that this is a limitation of the API. It is not something we can "fix" in the application. If this is the case with your site, please simply create a new bucket whose name only consists of lowercase unaccented Latin characters (a-z), numbers (0-9), dashes and dots.

Directory

The directory inside your Google Storage bucket where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM].

Contents of googlestorage.json (read the documentation)

Open the JSON file you created in the Initial Setup stage outlined above. Copy all of its contents and paste them in this field. Make sure you have included the curly braces, { and }, at the beginning and end of the file respectively. Don't worry about line breaks being "eaten up"; they are NOT important.
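
If you want to verify that what you pasted is a complete service account key file, the minimal sketch below checks for the standard fields of such files. It is a diagnostic you can run yourself, not part of the application.

    <?php
    // Diagnostic sketch: sanity-check a Google Cloud service account key file.
    $data = json_decode(file_get_contents('googlestorage.json'), true);

    if (!is_array($data)) {
        die('The file does not contain valid JSON. Did you miss the curly braces?');
    }

    foreach (['type', 'project_id', 'private_key_id', 'private_key', 'client_email'] as $key) {
        if (empty($data[$key])) {
            die("Missing required field: {$key}");
        }
    }

    echo ($data['type'] === 'service_account')
        ? 'This looks like a service account key file.'
        : 'This is not a service account key file.';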

Regarding the naming of buckets and directories, you have to be aware of the Google Storage rules:

  • Folder names can not contain backward slashes (\). They are invalid characters.

  • Bucket names can only contain lowercase letters, numbers, periods (.) and dashes (-). Accented characters, international characters, underscores and other punctuation marks are illegal characters.

  • Bucket names must start with a number or a letter.

  • Bucket names must be 3 to 63 characters long.

  • Bucket names can't be in an IP format, e.g. 192.168.1.2

  • Bucket names can't end with a dash.

  • Bucket names can't have an adjacent dot and dash. For example, both my.-bucket and my-.bucket are invalid.

If any - or all - of those rules are broken, you'll end up with error messages that the application couldn't connect to Google Storage, that the calculated signature is wrong or that the bucket does not exist. This is normal and expected behaviour, as Google Storage drops the connection when it encounters invalid bucket or directory names.

Upload to iDriveSync

Using this engine, you can upload your backup archives to the iDriveSync low-cost, encrypted, cloud storage service.

Upload to iDriveSync

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to iDriveSync.

Username or e-mail

Your iDriveSync username or email address.

Password

Your iDriveSync password.

Private key (optional)

If you have locked your account with a private key (which means that all your data is stored encrypted in iDriveSync) please enter your Private Key here. If you are not making use of this feature please leave this field blank.

Directory

The directory inside your iDriveSync account where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM].

Upload to Amazon S3 (legacy API)
[Note]Note

This feature has been discontinued. If you were using it please upgrade your backup profiles to the Upload to Amazon S3 post-processing engine.

Upload to Amazon S3

Using this engine, you can upload your backup archives to the Amazon S3 cloud storage service and any other storage service which provides an S3-compatible API. With the lowest price per Gigabyte, Amazon S3 is an ideal option for securing your backups. Even if your host's data center is annihilated by a natural disaster and your local PC and storage media are wiped out by an unlikely event, you will still have a copy of your site readily accessible and easy to restore.

We support multi-part uploads to Amazon S3. This means that, unlike the other post-processing engines, even if you do not use split archives, the application will still be able to upload your files to Amazon S3 in most cases. This feature allows the application to upload your backup archive in 5Mb chunks so that it doesn't time out when uploading a very big archive file. That said, we STRONGLY suggest using a part size for archive splitting of 2000Mb. This is required to work around a PHP limitation which causes extraction to fail if the file size is over roughly 2Gb.
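
The arithmetic behind this is straightforward, as the minimal sketch below shows; the numbers mirror the suggestion above.

    <?php
    // Illustrative sketch: how many 5MB chunks a multi-part upload needs.
    $partSize  = 2000 * 1024 * 1024; // suggested Part Size for Split Archives
    $chunkSize = 5 * 1024 * 1024;    // multi-part upload chunk size

    $chunks = (int) ceil($partSize / $chunkSize);

    // Each chunk is a separate, short HTTP request, so no single request
    // needs to outlast PHP's time limit.
    echo "A 2000MB part is uploaded in {$chunks} chunks."; // 400 chunks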

[Note]Note

Multi-part uploads tend to be more prone to connection errors on the Amazon S3 side. Due to maximum execution time restrictions the application is unable to retry the connection, causing the backup transfer to fail. As a result we suggest not relying on this feature.

You can also specify a custom endpoint URL. This allows you to use this feature with third party cloud storage services offering an API compatible with Amazon S3 such as Cloudian, Riak CS, Ceph, Connectria, HostEurope, Dunkel, S3For.me, Nimbus, Walrus, GreenQloud, Scality Ring, CloudStack and so on. If a cloud solution (public or private) claims that it is compatible with S3 then you can use it with the application.

[Note]Note

Akeeba Backup for WordPress / Akeeba Solo 1.9.2 and later support the Beijing Amazon S3 region, i.e. storage buckets hosted in China. These buckets are only accessible from inside China and have a few caveats:

  • You can only access buckets in the Beijing region from inside China.

  • Download to browser is not supported unless you have a license from the Chinese government to share content from your Amazon S3 bucket. That's because downloading to browser requires a pre-signed URL which could, in theory, be used to disseminate material from your Amazon S3 bucket to others. So even though you see the Download button, it will most likely result in an error.

  • Sometimes deleting and trying to re-upload an object, or trying to overwrite it, fails silently (without an error message). We strongly recommend using unique names for your backup archives and testing them frequently.

Upload to Amazon S3

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to Amazon S3.

Access Key

Your Amazon S3 Access Key. Required unless you run Akeeba Backup inside an EC2 instance with an attached IAM Role. Please read about this below.

Secret Key

Your Amazon S3 Secret Key. Required unless you run Akeeba Backup inside an EC2 instance with an attached IAM Role. Please read about this below.

Use SSL

If enabled, an encrypted connection will be used to upload your archives to Amazon S3. In this case the upload will take longer, as encryption - what SSL does - is a resource intensive operation. You may have to lower your part size.

Bucket

The name of your Amazon S3 bucket where your files will be stored. The bucket must already exist; the application cannot create buckets.

[Warning]Warning

DO NOT CREATE BUCKETS WITH NAMES CONTAINING UPPERCASE LETTERS. AMAZON CLEARLY WARNS AGAINST DOING THAT. If you use a bucket with uppercase letters in its name it is very possible that the application will not be able to upload anything to it. More specifically, it seems that if your web server is located in Europe, you will be unable to use a bucket with uppercase letters in its name. If your server is in the US, you will most likely be able to use such a bucket. Your mileage may vary.

Please note that this is a limitation imposed by Amazon itself. It is not something we can "fix" in the application (I did spend 5 hours on Christmas trying to find a workaround, with no success, because it's a limitation by Amazon). If this is the case with your site, please DO NOT ask for support; simply create a new bucket whose name only consists of lowercase unaccented Latin characters (a-z), numbers (0-9), dashes and dots.

Moreover, you cannot use a bucket name containing a dot together with the Use SSL option. This is a limitation of the SSL setup on Amazon S3 servers and cannot be worked around, especially for EU-hosted buckets.

Amazon S3 Region

Please select which S3 Region you have created your bucket in. This is MANDATORY for using the newer, more secure, v4 signature method. You can see the region of your bucket in your Amazon S3 management console. Right click on a bucket and click on Properties. A new pane opens to the left. The second row is labelled Region. This is the region your bucket was created in. Go back to Akeeba Backup / Akeeba Solo and select the corresponding option from the drop-down.

[Important]Important

If you choose the wrong region the connection WILL fail.

Please note that there are some reserved regions which had not been launched by Amazon at the time we wrote this engine. They are included for forward compatibility, if and when Amazon launches those regions.

Signature method

This option determines the authentication API which will be used to "log in" the backup engine to your Amazon S3 bucket. You have two options:

  • v4 (preferred for Amazon S3). If you are using Amazon S3 (not a compatible third party storage service) and you are not sure, you need to choose this option. Moreover, you MUST specify the Amazon S3 Region in the option above. This option implements the newer AWS4 (v4) authentication API. Buckets created in Amazon S3 regions brought online after January 2014 (e.g. Frankfurt) will only accept this option. Older buckets will work with either option.

    [Important]Important

    v4 signatures are only compatible with Amazon S3 proper. If you are using a custom Endpoint this option will NOT work.

  • v2 (legacy mode, third party storage providers). If you are using an S3-compatible third party storage service (NOT Amazon S3) you MUST use this option. We do not recommend using this option with Amazon S3 as this authentication method is going to be phased out by Amazon itself in the future.

Bucket Access

This option determines how the API will access the Bucket. If unsure, use the Virtual Hosting setting.

The two available settings are:

  • Virtual Hosting (recommended). This is the recommended and supported method for Amazon S3. Buckets created after May 2019 only support this method. Amazon has communicated that this method will be the only one available in Amazon S3's API starting September 2020.

  • Path Access (legacy). This is the older, no longer supported method. You should only need to use it with a custom endpoint and ONLY if your storage provider has told you that you need to enable it.

Directory

The directory inside your Amazon S3 bucket where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. directory/subdirectory/subsubdirectory.

[Tip]Tip

You can use the application's "variables" in the directory name in order to create it dynamically. These are the same variables you can use in the archive name, i.e. [DATE], [TIME], [HOST], [RANDOM].

Disable multipart uploads

Uploads to Amazon S3 of parts over 5Mb use Amazon's multi-part upload feature. This allows the application to upload the backup archive in 5Mb chunks and then ask Amazon S3 to glue them together into one big file. However, some hosts time out while uploading archives using this method. In that case it's preferable to use a relatively small Part Size for Split Archives setting (around 10-20Mb, your mileage may vary) and upload each archive part in one go. Enabling this option ensures that, no matter how big or small your Part Size for Split Archives setting is, the upload of each backup archive part happens in one go. You MUST use it if you get RequestTimeout warnings while the application is trying to upload the backup archives to Amazon S3.

Storage class

Select the storage class for your data. Standard is the regular storage class for business-critical data. Please consult the Amazon S3 documentation for the description of each storage class.

[Note]Note

Glacier and Deep Archive storage classes are much cheaper but have long delays (several seconds to several hours) in retrieving or deleting your files. Using these storage classes is not compatible with the Enable Remote Quotas configuration option and the Manage Remotely Stored Files feature in the Manage Backups page. This is a limitation of Amazon S3, not Akeeba Backup / Solo.

We strongly recommend not using these storage classes directly in Akeeba Backup / Solo. Instead, use one or more Lifecycle Policies in your Amazon S3 bucket. These can be configured in your Amazon S3 control panel and tell Amazon when to migrate your files between different storage classes. For example, you could use Intelligent Tiering in Akeeba Backup / Solo together with the Maximum Backup Age quotas and Remote Quotas to only keep the last 45 days of backup archives and the backups taken on the 1st of each month. You could then also add two lifecycle policies to migrate backup archives older than 60 days to Glacier and archives older than 180 days to Deep Archive. This way you would have enough backups to roll back your site in case of an emergency but also historical backups for safekeeping or legal / regulatory reasons. Feel free to adjust the time limits to best suit your business use case!

Custom endpoint

Enter the custom endpoint (connection URL) of a third party service which supports an Amazon S3 compatible API. Please remember to set the Signature method to v2 when using this option.

Regarding the naming of buckets and directories, you have to be aware of the Amazon S3 rules (these rules are a simplified form of the list S3Fox presents you with when you try to create a new bucket):

  • Folder names can not contain backward slashes (\). They are invalid characters.

  • Bucket names can only contain lowercase letters, numbers, periods (.) and dashes (-). Accented characters, international characters, underscores and other punctuation marks are illegal characters.

    [Important]Important

    Even if you created a bucket using uppercase letters, you must type its name in lowercase letters. Amazon S3 automatically converts the bucket name to all-lowercase. Also note that, as stated above, you may NOT be able to use such a bucket at all under some circumstances. Generally, you should avoid using uppercase letters.

  • Bucket names must start with a number or a letter.

  • Bucket names must be 3 to 63 characters long.

  • Bucket names can't be in an IP format, e.g. 192.168.1.2

  • Bucket names can't end with a dash.

  • Bucket names can't have an adjacent dot and dash. For example, both my.-bucket and my-.bucket are invalid.

If any - or all - of those rules are broken, you'll end up with error messages that the application couldn't connect to S3, that the calculated signature is wrong or that the bucket does not exist. This is normal and expected behaviour, as Amazon S3 drops the connection when it encounters invalid bucket or directory names.

Automatic provisioning of Access and Secret Key on EC2 instances with an attached IAM Role

Starting with version 3.2.0, Akeeba Solo / Akeeba Backup for WordPress can automatically provision temporary credentials (Access and Secret Key) if you leave these fields blank. This feature is meant for advanced users who automatically deploy multiple sites to Amazon EC2. This feature has four requirements:

  • Using Amazon S3, not a custom endpoint. Only Amazon S3 proper works with the temporary credentials issued by the EC2 instance.

  • Using the v4 signature method. The old signature method (v2) does not work with temporary credentials issued by the EC2 instance. This is because Amazon requires that requests authenticated with these credentials also include the Security Token returned by the EC2 instance, something which is only possible with the v4 signature method.

  • Running Akeeba Backup / Akeeba Solo on a site which is hosted on an Amazon EC2 instance. It goes without saying that you can't use temporary credentials issued by an EC2 instance unless you are running on one. Therefore, don't expect this feature to work with regular hosting; it requires that your site runs on an Amazon EC2 server.

  • Attaching an IAM Role to the Amazon EC2 instance. The IAM Role must allow access to the S3 bucket you have specified in Akeeba Backup's / Akeeba Solo's configuration.

When Akeeba Backup / Akeeba Solo detects that both the Access and Secret Key fields are left blank (empty) it will try to query the EC2 instance's metadata server for an attached IAM Role. If a Role is attached it will make a second query to the EC2 instance's metadata server to retrieve its temporary credentials. It will then proceed to use them for accessing S3.

The temporary credentials are cached by Akeeba Backup / Akeeba Solo for the duration of the backup process. If they are about to expire or expire during the backup process new credentials will be fetched from the EC2 instance's metadata server using the same process.
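
For reference, the sketch below shows roughly how temporary credentials can be obtained from the instance metadata service, using the standard (IMDSv1-style) EC2 metadata endpoints. It is illustrative only, not the application's actual code.

    <?php
    // Illustrative sketch: fetch temporary credentials for the attached IAM Role.
    $base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/';

    // 1. The metadata server returns the name of the attached IAM Role.
    $role = trim(file_get_contents($base));

    // 2. Requesting that Role returns a JSON document with the credentials.
    $credentials = json_decode(file_get_contents($base . $role), true);

    $accessKey = $credentials['AccessKeyId'];
    $secretKey = $credentials['SecretAccessKey'];
    $token     = $credentials['Token'];      // sent along with v4-signed requests
    $expiresAt = $credentials['Expiration']; // refresh credentials before this time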

Creating and attaching IAM Roles to EC2 instances is beyond the scope of our documentation and our support services. Please refer to Amazon's documentation.

Upload to Remote SFTP server
[Note]Note

This feature is available only to Akeeba Solo and Akeeba Backup Professional.

[Note]Note

This engine uses the PHP extension called SSH2. The SSH2 extension is still marked as an alpha and is not enabled by default or even provided by many commercial hosts. In this case you may want to use the Upload to Remote SFTP server over cURL engine instead which uses PHP's cURL extension, available on most hosts.

Using this engine, you can upload your backup archives to any SFTP (Secure File Transfer Protocol) server. Please note that SFTP is the encrypted file transfer protocol provided by SSH servers. Even though the name is close, it has nothing to do with plain old FTP or FTP over SSL. Not all servers support this but for those which do this is the most secure file transfer option.

The difference between this engine and the DirectSFTP archiver engine is that this engine uploads backup archives to the server, whereas DirectSFTP uploads the uncompressed files of your site. DirectSFTP is designed for rapid migration; this engine is designed for easily moving your backup archives to an off-server location. Moreover, this engine also supports connecting to your SFTP server using cryptographic key files instead of passwords, a much safer (and much harder and geekier) user authentication method.

Your originating server must have PHP's SSH2 module installed and activated and its functions unblocked. Your originating server must also not block SFTP communication to the remote (target) server. Some hosts apply a firewall policy which requires you to specify to which hosts your server can connect. In such a case you might need to allow communication to your remote host over TCP port 22 (or whatever port you are using).
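
As a quick way to see whether this engine can work on your server, the minimal sketch below checks for the SSH2 extension and uploads one file over SFTP using the extension's stream wrapper. It assumes password authentication; the host, credentials and paths are hypothetical.

    <?php
    // Illustrative sketch: upload one archive part over SFTP with the SSH2 extension.
    if (!extension_loaded('ssh2')) {
        die('The SSH2 PHP extension is not available on this server.');
    }

    $connection = ssh2_connect('secure.example.com', 22);
    ssh2_auth_password($connection, 'myuser', 'mypassword');

    $sftp   = ssh2_sftp($connection);
    $remote = 'ssh2.sftp://' . intval($sftp) . '/home/myuser/backups/site.jpa';

    // Stream the local archive part straight to the remote server.
    file_put_contents($remote, fopen('/path/to/site.jpa', 'rb'));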

Before you begin, you should know the limitations. SFTP does not allow resuming of uploads so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to SFTP equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit restriction to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 10Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing, lower the part size.

The available configuration options are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to the SFTP server.

Host name

The hostname of your remote (target) server, e.g. secure.example.com. You must NOT enter the sftp:// or ssh:// protocol prefix. If you do, Akeeba Backup will try to remove it automatically and issue a warning about it.

Port

The TCP/IP port of your remote host's SFTP (SSH) server. It's usually 22. If unsure, please ask your host.

User name

The username you have to use to connect to the remote SFTP server. It must always be provided.

Password

The password you have to use to connect to the remote SFTP server.

Private key file (advanced)

Many (but not all) SSH/SFTP servers allow you to connect to them using cryptographic key files for user authentication. This method is far more secure than using a password. Passwords can feasibly be guessed because of their relatively short length and complexity. Cryptographic keys are nigh impossible to guess with current technology due to their complexity (on average, more than 100 times as complex as a typical password).

If you want to use this kind of authentication you will need to provide a set of two files, your public and private key files. In this field you have to enter the full filesystem path to your private key file. The private key file must be in RSA or DSA format and has to be configured to be accepted by your remote host. The exact configuration depends on your SSH/SFTP server and is beyond the scope of this documentation. If you are a curious geek we strongly advise you to search for "ssh certificate authentication" in your favourite search engine for more information.

If you are using an encrypted private key file, enter the passphrase in the Password field above. If it is not encrypted, which is a bad security practice, leave the Password field blank.

[Important]Important

If the libssh2 library that the SSH2 extension of PHP is using is compiled against GnuTLS (instead of OpenSSL) you will NOT be able to use encrypted private key files. This has to do with bugs / missing features of GnuTLS, not our code. If you can't get certificate authentication to work please try providing an unencrypted private key file and leave the Password field blank.

Public Key File (advanced)

If you are using the key file authentication method described above you will also have to supply the public key file. Enter here the full filesystem path to the public key file. The public key file must be in RSA or DSA format and, of course, unencrypted (as it's a public key).
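
Putting the last three options together, the minimal sketch below shows key file authentication with the SSH2 extension; the host, user name, paths and passphrase are hypothetical.

    <?php
    // Illustrative sketch: authenticate with a public / private key pair.
    $connection = ssh2_connect('secure.example.com', 22);

    ssh2_auth_pubkey_file(
        $connection,
        'myuser',
        '/home/myuser/.ssh/id_rsa.pub', // Public Key File
        '/home/myuser/.ssh/id_rsa',     // Private key file
        'my-passphrase'                 // passphrase; omit for unencrypted keys
    );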

Initial directory

The absolute filesystem path to your remote site's location where your archives will be stored. This is provided by your hosting company. Do not ask us to tell you what you should put in here because we can't possibly know. There is an easy way to find it, though. Connect to your target SFTP server with FileZilla. Navigate to the intended directory. Above the right-hand folder pane you will see a text box with a path. Copy this path and paste it into Akeeba Backup's setting.

Upload to Remote SFTP server over cURL
[Note]Note

This feature is available only to Akeeba Solo and Akeeba Backup Professional.

[Note]Note

This engine uses the PHP cURL extension. If your host has disabled the cURL extension but has enabled the SSH2 PHP extension you may want to use the Upload to Remote SFTP server engine instead which uses PHP's SSH2 extension.

Using this engine, you can upload your backup archives to any SFTP (Secure File Transfer Protocol) server. Please note that SFTP is the encrypted file transfer protocol provided by SSH servers. Even though the name is close, it has nothing to do with plain old FTP or FTP over SSL. Not all servers support this but for those which do this is the most secure file transfer option.

The difference between this engine and the DirectSFTP over cURL archiver engine is that this engine uploads backup archives to the server, whereas DirectSFTP over cURL uploads the uncompressed files of your site. DirectSFTP over cURL is designed for rapid migration; this engine is designed for easily moving your backup archives to an off-server location.

Your originating server (where you are backing up from) must a. have PHP's cURL extension installed and activated, b. have the cURL extension compiled with SFTP support and c. allow outbound TCP/IP connections to your target host's SSH port. Please note that some hosts provide the cURL extension without SFTP support. This feature will NOT work on these hosts. Moreover, some hosts apply a firewall policy which requires you to specify to which hosts your server can connect. In such a case you might need to allow communication to your remote host.
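
You can check for SFTP support in your host's cURL build, and see roughly what this engine does, with the minimal sketch below; the host, credentials and paths are hypothetical.

    <?php
    // Illustrative sketch: check for SFTP support, then upload one archive part.
    if (!in_array('sftp', curl_version()['protocols'], true)) {
        die('This cURL build was compiled without SFTP support.');
    }

    $localFile = '/path/to/site.jpa';
    $fp        = fopen($localFile, 'rb');

    $ch = curl_init('sftp://secure.example.com:22/home/myuser/backups/site.jpa');
    curl_setopt($ch, CURLOPT_USERPWD, 'myuser:mypassword');
    curl_setopt($ch, CURLOPT_UPLOAD, true);
    curl_setopt($ch, CURLOPT_INFILE, $fp);
    curl_setopt($ch, CURLOPT_INFILESIZE, filesize($localFile));

    curl_exec($ch);
    curl_close($ch);
    fclose($fp);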

Before you begin, you should know the limitations. SFTP does not allow resuming of uploads so the archive has to be transferred in a single step. PHP has a time limit restriction we can't overlook. The time required to upload a file to SFTP equals the size of the file divided by the available bandwidth. We want the time to upload a file to be less than PHP's time limit restriction to avoid timing out. Since the available bandwidth is finite and constant, the only thing we can reduce in order to avoid timeouts is the file size. To this end, you have to produce split archives, by setting the part size for archive splitting in ZIP's or JPA's engine configuration pane. The suggested values are between 10Mb and 20Mb. Most servers have a bandwidth cap of 20Mbits, which equals roughly 2Mb/sec (1 byte is 8 bits, plus there's some traffic overhead, lost packets, etc). With a time limit of 10 seconds, we can upload at most 2 Mb/sec * 10 sec = 20Mb without timing out. If you get timeouts during post-processing, lower the part size.

The available configuration options are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to the SFTP server.

Host name

The hostname of your remote (target) server, e.g. secure.example.com. You must NOT enter the sftp:// or ssh:// protocol prefix. If you do, Akeeba Backup will try to remove it automatically and issue a warning about it.

Port

The TCP/IP port of your remote host's SFTP (SSH) server. It's usually 22. If unsure, please ask your host.

User name

The username you have to use to connect to the remote SFTP server. It must always be provided.

Password

The password you have to use to connect to the remote SFTP server.

Private key file (advanced)

Many (but not all) SSH/SFTP servers allow you to connect to them using cryptographic key files for user authentication. This method is far more secure than using a password. Passwords can feasibly be guessed because of their relatively short length and complexity. Cryptographic keys are nigh impossible to guess with current technology due to their complexity (on average, more than 100 times as complex as a typical password).

If you want to use this kind of authentication you will need to provide a set of two files, your public and private key files. In this field you have to enter the full filesystem path to your private key file. The private key file must be in RSA or DSA format and has to be configured to be accepted by your remote host. The exact configuration depends on your SSH/SFTP server and is beyond the scope of this documentation. If you are a curious geek we strongly advise you to search for "ssh certificate authentication" in your favourite search engine for more information.

If you are using an encrypted private key file, enter the passphrase in the Password field above. If it is not encrypted, which is a bad security practice, leave the Password field blank.

[Important]Important

If cURL is compiled against GnuTLS (instead of OpenSSL) you will NOT be able to use encrypted private key files. This has to do with bugs / missing features of GnuTLS, not our code. If you can't get certificate authentication to work please try providing an unencrypted private key file and leave the Password field blank.

Public Key File (advanced)

If you are using the key file authentication method described above you will also have to supply the public key file. Enter here the full filesystem path to the public key file. The public key file must be in RSA or DSA format and, of course, unencrypted (as it's a public key). Some newer versions of cURL allow you to leave this blank, in which case they will derive the public key information from the private key file. We do not recommend this approach.
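
For completeness, the minimal sketch below shows key file authentication at the cURL level; all values are hypothetical and the application's actual implementation may differ.

    <?php
    // Illustrative sketch: SFTP upload authenticated with a key pair via cURL.
    $localFile = '/path/to/site.jpa';
    $fp        = fopen($localFile, 'rb');

    $ch = curl_init('sftp://secure.example.com:22/home/myuser/backups/site.jpa');
    curl_setopt($ch, CURLOPT_USERNAME, 'myuser');
    curl_setopt($ch, CURLOPT_SSH_AUTH_TYPES, CURLSSH_AUTH_PUBLICKEY);
    curl_setopt($ch, CURLOPT_SSH_PUBLIC_KEYFILE, '/home/myuser/.ssh/id_rsa.pub');
    curl_setopt($ch, CURLOPT_SSH_PRIVATE_KEYFILE, '/home/myuser/.ssh/id_rsa');
    curl_setopt($ch, CURLOPT_KEYPASSWD, 'my-passphrase'); // encrypted private keys only
    curl_setopt($ch, CURLOPT_UPLOAD, true);
    curl_setopt($ch, CURLOPT_INFILE, $fp);
    curl_setopt($ch, CURLOPT_INFILESIZE, filesize($localFile));

    curl_exec($ch);
    curl_close($ch);
    fclose($fp);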

Initial directory

The absolute filesystem path to your remote site's location where your archives will be stored. This is provided by your hosting company. Do not ask us to tell you what you should put in here because we can't possibly know. There is an easy way to find it, though. Connect to your target SFTP server with FileZilla. Navigate to the intended directory. Above the right-hand folder pane you will see a text box with a path. Copy this path and paste it into Akeeba Backup's setting.

Upload to SugarSync

Using this engine, you can upload your backup archives to the SugarSync cloud storage service. SugarSync has a free tier (with 5Gb of free space) and a paid tier. The application can work with either one.

Please note that the application can only upload files to Sync Folders; it cannot upload files directly to a Workspace (a single device). You have to set up your Sync Folders in SugarSync before using the application. If you have not created or specified any Sync Folder, the application will upload the backup archives to your Magic Briefcase, the default Sync Folder which syncs between all of your devices, including your mobile devices (iPhone, iPad, Android phones, ...).

First-time setup

Since Akeeba Backup 7.0 you need to perform an additional step the very first time you set up SugarSync: obtaining an Access Key ID and Secret Access Key, which are used together with your email and password to access SugarSync. SugarSync's API needs all four pieces of information (Access Key ID, Secret Access Key, Email and Password) to grant access to your files.

First, go to SugarSync's site and select the Developer Portal option in the footer of the site. If this is your first time there, select the Join our Program option. It is free of charge.

Then go to the Developer Console (it requires you to log into SugarSync). At the top of the page there is the Your Access Keys area. If you already have entries there, skip this paragraph. If you do not have any entries, click on Add Keys to create one.

[Note]Note

You can ignore the Your Apps section. In fact, creating an app is optional, makes authentication more complicated and does not offer any security or workflow advantage. Therefore, Akeeba Ltd chose not to implement support for SugarSync's Apps.

You need to copy the Access Key ID and its corresponding Private Access Key into your Akeeba Backup configuration, as explained below.

Upload to SugarSync

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Access Key ID

The Access Key ID you have created in SugarSync's Developer Console page.

Private Access Key

The Private Access Key that corresponds to the Access Key ID you have created in SugarSync's Developer Console page.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to SugarSync.

Email

The email used by your SugarSync account.

Password

The password used by your SugarSync account.

Directory

The directory inside SugarSync where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. You may use the same variables used in archive naming, e.g. [HOST] for the site's host name or [DATE] for the current date.

Please note that the first part of your directory should be the name of your shared folder. For example, if you have a shared folder named backups and you want to create a subdirectory inside it based on the host name of the server the application is installed on, you need to enter backups/[HOST] in the directory box. If a Sync Folder named "backups" is not found, a directory named "backups" will be created inside your Magic Briefcase folder. Yes, it's more complicated than, say, Dropbox, but that's also why SugarSync is more powerful.

Upload to WebDAV

Using this engine, you can upload your backup archives to any server which supports the WebDAV (Web Distributed Authoring and Versioning) protocol. Examples of storage services supporting WebDAV:

  • OwnCloud is a software solution that you can install on your own servers to provide a private cloud.

  • CloudDAV is a service which gives you WebDAV access to a plethora of cloud storage providers: Amazon S3, GMail, RackSpace CloudFiles, Microsoft OneDrive (formerly: SkyDrive), Windows Azure BLOB Storage, iCloud, LiveMesh, Box.com, FTP servers, Email (which, unlike the Send by email engine in the application, does support large files), Google Docs, Mezeo, Zimbra, FilesAnywhere, Dropbox, Google Storage, CloudMe, Microsoft SharePoint, Trend Micro, OpenStack Swift (supported by several providers), Google sites, HP cloud, Alfresco cloud, Open S3, Eucalyptus Walrus, Microsoft Office 365, EMC Atmos, iKoula - iKeepinCloud, PogoPlug, Ubuntu One, SugarSync, Hosting Solutions, BaseCamp, Huddle, IBM Files Cloud, Scality, Google Drive, Memset Memstore, DumpTruck, ThinkOn, Evernote, Cloudian, Copy.com, Salesforce. [TESTED with Amazon S3 as the storage provider]

  • Apache web server (when the optional WebDAV support is enabled – recommended for advanced users only).

  • 4Shared.

  • ADrive.

  • Amazon Cloud Drive.

  • Box.com.

  • CloudSafe.

  • DriveHQ.

  • DumpTruck.

  • FilesAnywhere.

  • MyDrive.

  • MyDisk.se.

  • PowerFolder.

  • OVH.net

  • Safecopy Backup.

  • Strato HiDrive.

  • Telekom Mediencenter.

  • Pretty much every storage provider which claims to support WebDAV

[Tip]Tip

You can find more information about WebDAV access for each of these providers at http://www.free-online-backup-services.com/features/webdav.html

[Note]Note

We have not thoroughly tested and do not guarantee that any of the above providers will work smoothly with the application unless you see the notice [TESTED] next to it.

Upload to WebDAV

The required settings for this engine are:

Process each part immediately

If you enable this, each backup part will be uploaded as soon as it's ready. This is useful if you are low on disk space (disk quota) when used in conjunction with Delete archive after processing. When using this feature we suggest having 10Mb plus the size of your part for split archives free in your account. The drawback with enabling this option is that if the upload fails, the backup fails. If you don't enable this option, the upload process will take place after the backup is complete and finalized. This ensures that if the upload process fails a valid backup will still be stored on your server. The drawback is that it requires more available disk space.

Delete archive after processing

If enabled, the archive files will be removed from your server after they are uploaded to the WebDAV server.

Username

The username you use to connect to your WebDAV server.

Password

The password you use to connect to your WebDAV server.

WebDAV base URL

The base URL of your WebDAV server's endpoint. It might be a directory such as http://www.example.com/mydav/ or even a script endpoint such as http://www.example.com/webdav.php. If unsure please ask your WebDAV provider for more information.

[Important]Important

If your base URL is a directory and not a script, you are advised to include the trailing forward slash at the end of the base URL. Otherwise connection problems may arise.

Directory

The directory inside the WebDAV server where your files will be stored. If you want to use subdirectories, you have to use a forward slash, e.g. /directory/subdirectory/subsubdirectory. You may use the same variables used in archive naming, e.g. [HOST] for the site's host name or [DATE] for the current date.
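
Conceptually, a WebDAV upload is a plain HTTP PUT to the base URL plus the directory plus the file name, with HTTP authentication. The minimal sketch below illustrates the idea; all values are hypothetical.

    <?php
    // Illustrative sketch: upload one archive part to a WebDAV server via HTTP PUT.
    $baseUrl   = 'http://www.example.com/mydav/'; // note the trailing slash
    $target    = $baseUrl . 'backups/site.jpa';
    $localFile = '/path/to/site.jpa';

    $fp = fopen($localFile, 'rb');

    $ch = curl_init($target);
    curl_setopt($ch, CURLOPT_USERPWD, 'myuser:mypassword');
    curl_setopt($ch, CURLOPT_UPLOAD, true); // makes cURL issue a PUT request
    curl_setopt($ch, CURLOPT_INFILE, $fp);
    curl_setopt($ch, CURLOPT_INFILESIZE, filesize($localFile));

    curl_exec($ch);
    curl_close($ch);
    fclose($fp);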

Upload to Box.net / Box.com

Even though there is no direct integration option, you can always use the Upload to WebDAV option to upload your backup archives to Box.com. You will need to use the following parameters:

Username

Your box.com email address

Password

Your box.com password

WebDAV base URL

https://dav.box.com/dav/

[Important]Important

Do not forget to add the trailing slash in the WebDAV base URL!

For more information please check the official Box.com page explaining the Box.com over WebDAV feature: https://support.box.com/hc/en-us/articles/200519748-Does-Box-support-WebDAV-

[Important]Important

Due to limitations in the Box.com implementation of WebDAV we strongly recommend using a Part Size for Split Archives smaller than 50Mb at all times.
