Rclone has become the Swiss Army knife of cloud storage for anyone who wants to stop struggling with mediocre official clients, custom scripts, and absurd provider limitations. With a single binary and a syntax very similar to rsync, you can manage backups, synchronization, mounts, and encryption across more than 70 services: Google Drive, OneDrive, Dropbox, S3, Backblaze B2, Wasabi, pCloud, MEGA, Proton Drive, Nextcloud, WebDAV, SFTP... and a long etcetera.
The beauty of Rclone is that it's not limited to "copying files to the cloud": it understands every backend, leverages their APIs for speed, and offers transparent encryption, FUSE mounting, backup scheduling, and cloud-to-cloud operations that never touch your disk. From a macOS laptop or a Raspberry Pi with a small disk to a multi-terabyte production server, you can build a robust, encrypted, and automated backup strategy if you understand its components.
What is Rclone and why is it more interesting than rsync for the cloud?
Rclone is an open-source command-line tool designed specifically for handling remote and cloud storage. Unlike rsync, which is geared toward local files or remote files over SSH, Rclone understands the APIs of services like Google Drive, S3, and Dropbox, manages their limits, quotas, and specific features, and offers high-level commands for listing, copying, synchronizing, mounting, and serving content.
It is often described as "rsync for the cloud", but it actually goes much further: it supports more than 70 providers and integrates client-side encryption (the crypt backend), a virtual file system (VFS) for stable mounts, a chunker module for splitting large files, and a backend abstraction layer that makes the same command behave the same on Google Drive, S3, B2, or an SFTP server.
Compared to rsync, Rclone shines in several aspects: it can take advantage of multithreaded transfers (--multi-thread-streams, together with --transfers / --checkers) and typically achieves speeds several times higher when the bottleneck is the cloud API. Furthermore, it understands concepts like trash, versions, daily quotas, and request limits, and provides provider-specific flags to optimize each backend without breaking anything.
At the internal architecture level, it's a single binary that includes a core (Rclone Core) and several layers: VFS for mounts and caching, Crypt for client-side encryption, Chunker for fragmenting large files, and a set of backends that implement the specific communication with each service (S3/R2, Google Drive, B2, WebDAV, SFTP, etc.). This modularity allows you to combine, for example, an S3 remote with a crypt layer and then mount the result via FUSE.
Supported storage services and typical use cases

The catalog of services supported by Rclone is enormous, but it helps to get a quick overview of the most common ones to organize backups properly.
For everyday users, the usual highlights are:
- Google Drive / Google Photos: for personal and business accounts, including shared drives (Team Drives).
- Microsoft OneDrive and SharePoint: both personal and business.
- Dropbox, Box, MEGA, pCloud, Proton Drive: widely used for personal backups, photos, documents and small repositories.
In professional and development environments, object storage is the norm:
- Amazon S3 and compatible (MinIO, Wasabi, Ceph, Oracle, etc.) for backup buckets, static and archived content.
- Google Cloud Storage, Azure Blob and Cloudflare R2 if you are already in those ecosystems.
- Backblaze B2 as a cheap alternative for large volumes of cold data.
For self-hosted and home servers, Rclone fits very well with:
- SFTP/FTP against other servers.
- WebDAV for Nextcloud / ownCloud.
- SMB / CIFS and HTTP when you need to integrate NAS and web servers.
A commonly used idea is to combine several clouds with the union backend. You create a union: remote that includes, for example, gdrive:, onedrive: and dropbox:, and you operate on it as if it were a single giant file system. Very useful for taking advantage of free accounts or distributing backups among different providers.
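As a reference, the resulting rclone.conf entry could look roughly like this (a minimal sketch; it assumes the three remotes already exist and that your Rclone version uses the upstreams key of the union backend):
[union]
type = union
upstreams = gdrive: onedrive: dropbox:
After that, rclone ls union: or rclone copy /data union:backup operate against the combined view.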
Requirements, basic installation and update
Rclone is light but not magical: on a small VPS or a Raspberry Pi with little RAM, it's important to know its requirements so you don't overwhelm it with aggressive settings or too many parallel transfers.
At the hardware and system level, as a reasonable reference:
- Minimum RAM: 512 MB (2 GB or more for mounts with aggressive VFS caching).
- CPU: 1 vCPU is enough, but with 2 or more you'll notice parallel transfers much more.
- Storage: the binary itself takes up little space, but the cache may need 1 GB or more depending on the parameters.
- Kernel: it works from version 3.10 onwards; with modern kernels (5.4+) and FUSE3 the mounts run more smoothly.
As for installation, you have several options depending on what you prefer: convenience, version control, or reproducibility.
Official script (Linux, recommended to stay up to date):
curl -fsSL https://rclone.org/install.sh | sudo bash
rclone version
Manual installation with .deb package on Debian/Ubuntu:
wget https://downloads.rclone.org/v1.71.0/rclone-v1.71.0-linux-amd64.deb
sudo dpkg -i rclone-v1.71.0-linux-amd64.deb
sudo apt -f install # if it asks for dependencies
On macOS, Homebrew is the most convenient option:
brew install rclone
rclone version --check
On Windows you can choose between installer, winget or Chocolatey:
- Downloading the ZIP, placing rclone.exe in C:\rclone\ and adding that folder to the PATH.
- With winget: winget install Rclone.Rclone
- With Chocolatey: choco install rclone, plus choco install winfsp if you want to mount drives.
In all cases, updating is as simple as running:
rclone selfupdate
First steps: the concept of “remote” and interactive configuration
The basic unit in Rclone is the "remote": a configuration entry that describes how to connect to a specific service. For example, gdrive: for a personal Google Drive, s3-backup: for an S3 bucket, or nombre_en: for an encrypted folder.
Everything is managed from the rclone config wizard, which saves the settings in ~/.config/rclone/rclone.conf in INI format. That file is gold: back it up without fail (and even better, encrypt it).
Quick example: creating a Google Drive remote on a computer with a browser:
rclone config
# n) New remote
# name> gdrive
# Storage> drive
# client_id> (empty to use rclone's default client)
# client_secret> (empty)
# scope> 1 (full access)
# service_account_file> (empty)
# Edit advanced config? n
# Use auto config? y (opens the browser, you log in and accept the permissions)
On a server without a graphical environment the story changes a little. When it asks whether to use auto config, choose n and it will show you a command like this:
rclone authorize "drive"
You run that command on your desktop computer, where you do have a browser, authorize the account, and copy the resulting JSON token to paste into the server when prompted (config_token> {...}). With that you have a working remote even if the server is headless.
A real-life example: unlimited and encrypted shared Google Drive
For a while, links circulated for creating "unlimited" shared drives on Google Drive associated with universities and similar institutions. Those who took advantage of that trick often use those drives as a dumping ground for large volumes of multimedia data sent from their server or Raspberry Pi.
The typical workflow for using one of those drives with Rclone is:
- Get the shared drive ID by opening it in the browser and copying the last part of the URL (something like 0AKXXD2qTbW50Uk9PVA).
- Create a normal Google Drive remote (Storage> drive) and, in the root_folder_id> field, paste that identifier.
- When Rclone asks whether it should treat it as a Team Drive, answer yes and choose the drive in question (the resulting config entry is sketched below).
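The entry that ends up in rclone.conf looks roughly like this (a sketch; the ID is the placeholder from the example above, and depending on your answers it lands in team_drive or root_folder_id):
[nombre]
type = drive
scope = drive
team_drive = 0AKXXD2qTbW50Uk9PVA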
Once you have access to that drive, the next logical step is to encrypt it, because you don't really know who else might be able to see that data on the other end. That's where the crypt backend comes in.
Creating an encrypted layer over that shared drive would look something like this:
- Again rclone config → New remote → name it, for example, nombre_en.
- Choose Storage > crypt.
- In remote>, put nombre: or nombre:carpeta/, which is the previous remote (the Drive one) pointing to a folder.
- Choose filename_encryption = standard and directory_name_encryption = true so that file and directory names are also obfuscated.
- Generate one or two strong passwords (password and salt), either manually or with the built-in generator.
From then on, everything you upload to nombre_en: will be encrypted, and on the Google Drive website you'll only see strange, unreadable names. Only Rclone with those keys can translate them back.
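For example, uploading through the encrypted remote and then listing the underlying one shows the difference (the paths are illustrative):
# Upload through the crypt layer
rclone copy ~/fotos nombre_en:fotos -P
# Listing the underlying Drive remote shows only obfuscated names
rclone lsf nombre:carpeta/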
Basic commands: list, copy, synchronize, and delete

Rclone's general syntax is quite consistent: rclone <subcommand> source destination [flags]. The trick is to internalize a few commands and a few key options.
To explore remotes and their content you have:
- rclone listremotes: displays all configured remotes.
- rclone ls remote:: lists files with their sizes.
- rclone lsd remote:: directories/buckets only.
- rclone tree remote: or rclone tree remote:folder -d to see the tree structure.
- rclone ncdu remote:: an ncdu-style interface to see how big everything is without getting lost.
Copying without deleting anything at the destination is done with copy:
# Local → cloud
rclone copy /home/usuario/fotos gdrive:fotos/verano -P
# Cloud → local
rclone copy gdrive:Documentos/ ~/Documentos/ -P
# Cloud → cloud (server-side when the backend allows it)
rclone copy gdrive:proyecto/ s3-backup:proyecto/ -P
Synchronizing (mirroring) is done with sync, and here you need to be careful, because it removes from the destination whatever is no longer at the source:
# One-way synchronization
rclone sync /srv/data onedrive:data -P
# Dry run without touching anything
rclone sync /srv/data onedrive:data --dry-run
For cleanup operations there are several useful commands:
- rclone delete remote:folder: deletes the files in a path but leaves the directories.
- rclone purge remote:folder: deletes the path and all its contents.
- rclone rmdirs remote: --leave-root: deletes empty directories.
- rclone cleanup gdrive:: empties the trash (for example, on Google Drive).
The most important global flags, the ones you'll use all the time, are:
- -P or --progress to see the progress.
- -v, -vv and --log-level to increase verbosity.
- --transfers N and --checkers N to control parallelism.
- --bwlimit to limit bandwidth, even by time slots ("08:00,1M 18:00,off").
- --include, --exclude and --filter-from to fine-tune what gets copied (see the sketch below).
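For instance, a hypothetical filters.txt used with --filter-from could look like this (rules are evaluated top to bottom, first match wins; the names are just examples):
# filters.txt — example rules
- *.tmp
- node_modules/**
+ fotos/**
+ documentos/**
- *
rclone copy /home/usuario gdrive:backup --filter-from filters.txt -P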
Mount the cloud as if it were another disk (FUSE / VFS)
One of Rclone's star features is mounting remotes as file systems. It's incredibly useful for Plex/Jellyfin, for working directly with documents you have in Drive, or for exposing an S3 bucket to an application that doesn't speak S3.
On Linux you need FUSE3 installed and user_allow_other enabled in /etc/fuse.conf so that other users can access the mount (via --allow-other). On Windows, it relies on WinFsp.
Example of mounting in Linux:
mkdir -p /mnt/gdrive
rclone mount gdrive: /mnt/gdrive \
--daemon \
--allow-other \
--vfs-cache-mode full \
--vfs-cache-max-size 20G \
--dir-cache-time 48h \
--log-file /var/log/rclone/gdrive.log \
--log-level INFO
The --vfs-cache-mode parameter makes all the difference in behavior:
- off: no cache; for simple reads only.
- minimal: just enough to make things work.
- writes: caches writes until they are uploaded.
- full: caches reads and writes; ideal for multimedia, office applications, etc., at the cost of more disk space.
If you want the mount to survive reboots, the usual practice is to create a dedicated systemd service. A commonly used pattern is a parameterized unit rclone-remote@.service that mounts any remote under /cloud/<name>, which you then activate with:
sudo systemctl enable --now rclone-remote@gdrive.service
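As a reference, a minimal sketch of what that parameterized unit could look like (it assumes mounts under /cloud/<name> and a system-wide rclone.conf readable by root):
# /etc/systemd/system/rclone-remote@.service — example template unit
[Unit]
Description=Rclone mount for %i
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStartPre=/usr/bin/mkdir -p /cloud/%i
ExecStart=/usr/bin/rclone mount %i: /cloud/%i \
  --allow-other \
  --vfs-cache-mode full \
  --vfs-cache-max-size 20G \
  --dir-cache-time 48h
ExecStop=/bin/fusermount3 -u /cloud/%i
Restart=on-failure

[Install]
WantedBy=multi-user.target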
On Windows, mount OneDrive or Drive as letter X:
rclone mount onedrive: X: --vfs-cache-mode full
Advanced encryption with the crypt backend and configuration security
If there's one thing to take seriously with Rclone, it's security: the rclone.conf file contains OAuth tokens, S3 access keys, encrypted passwords... and anyone who copies it can, in practice, access your clouds.
The crypt backend lets you encrypt file contents, file names, and directory names transparently. When configuring it, you choose:
- remote: the underlying backend (for example gdrive:encrypted).
- filename_encryption: off, standard or obfuscate.
- directory_name_encryption: true/false.
- password and password2 (optional salt).
The resulting configuration looks like this (summarized):
[gdrive-crypt]
type = crypt
remote = gdrive:encrypted/
password = ENCRYPTED_PASSWORD_HASH
password2 = ENCRYPTED_SALT_HASH
filename_encryption = standard
directory_name_encryption = true
Rclone also allows you to encrypt the configuration file itself, so that even if someone steals it, they won't be able to read the secrets without the master password:
rclone config password
# it will ask you for a password and, from then on,
# Rclone will ask for it on every run
To automate scripts without having to type the password in cron or systemd, you can use the RCLONE_CONFIG_PASS environment variable or a small script that exports it after reading it from a protected file (root-owned, 600 permissions) or from a secrets manager:
export RCLONE_CONFIG_PASS="very_long_password"
rclone lsd gdrive: --ask-password=false
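A hypothetical wrapper along these lines keeps the password out of crontab itself (it assumes the secret lives in /root/.rclone-pass with 600 permissions):
#!/usr/bin/env bash
# /usr/local/bin/rclone-wrapper.sh — example only
# Reads the config password from a root-only file and runs rclone with it
set -euo pipefail
export RCLONE_CONFIG_PASS="$(cat /root/.rclone-pass)"
exec /usr/bin/rclone "$@"
Your cron entries then call /usr/local/bin/rclone-wrapper.sh instead of rclone directly.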
Performance optimization: large files, many small files, and limits
By default, Rclone works reasonably well, but with a few tweaks you can get much more out of it, especially when moving terabytes of data or millions of small files.
For large files (VM images, compressed backups, long videos) it is important to increase the chunk size and the parallelism:
rclone copy backup.tar.gz gdrive:backups \
--drive-chunk-size=256M \
--transfers=8 \
--progress
For many small files (photos, logs, static sites) what matters more is the number of checkers and transfers, plus using --fast-list if RAM allows it:
rclone copy fotos/ onedrive:fotos/ \
--transfers=32 \
--checkers=16 \
--fast-list \
--progress
When bandwidth or API limits must be respected (like the famous 750 GB/day upload limit to Google Drive or limits on requests per second), you have tools like:
--bwlimit "08:00,1M 18:00,off"to go slowly during working hours and turn on the tap at night.--tpslimity--tpslimit-burstto contain 403 “rate limit exceeded” errors.
Automate backups: cron, systemd, and full scripts
Running rclone copy by hand is fine for testing, but the interesting part is scheduling recurring backups, with logs, version rotation, and notifications when something goes wrong.
On Linux, the easiest way is to use cron. For example, to run a document synchronization to Google Drive every night at 2:00:
crontab -e
0 2 * * * /usr/bin/rclone sync /home/user/documents gdrive:backup --log-file=/var/log/rclone-backup.log --log-level=INFO
When you want something more serious, with locking, retention, and notifications, it's worth having a dedicated script. A typical scheme, sketched after this list, includes:
- Variables with the source, destination, bandwidth limits, and retention days.
- Use of flock or a lockfile to prevent simultaneous instances.
- Use of --backup-dir and .bak suffixes for versioning.
- Cleaning up old backups with --min-age and --rmdirs.
- Optionally sending notifications to Slack/Discord/email when it fails.
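A minimal bash sketch combining those pieces (the paths, remotes, and retention period are examples you would adapt):
#!/usr/bin/env bash
# Example backup script: lock, versioned sync, retention cleanup
set -euo pipefail

SRC="/srv/data"                          # what to back up
DEST="gdrive:backup/data"                # main destination
OLD="gdrive:backup/old/$(date +%F)"      # replaced/deleted files go here
LOG="/var/log/rclone-backup.log"
LOCK="/var/run/rclone-backup.lock"
KEEP_DAYS="30"

# flock prevents two instances from running at the same time
exec 9>"$LOCK"
flock -n 9 || { echo "$(date) another backup is still running" >>"$LOG"; exit 1; }

rclone sync "$SRC" "$DEST" \
  --backup-dir "$OLD" \
  --log-file "$LOG" --log-level INFO \
  --bwlimit "08:00,1M 18:00,off"

# Retention: delete versioned copies older than KEEP_DAYS, then prune empty dirs
rclone delete gdrive:backup/old --min-age "${KEEP_DAYS}d" --log-file "$LOG"
rclone rmdirs gdrive:backup/old --leave-root --log-file "$LOG"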
If you prefer systemd to cron, you can define a rclone-backup.service that executes the script and a rclone-backup.timer that sets the cadence (for example, at 2:00 with a small random delay so runs don't always coincide).
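A rough sketch of that pair of units, assuming the script above is installed at /usr/local/bin/rclone-backup.sh:
# /etc/systemd/system/rclone-backup.service
[Unit]
Description=Rclone backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/rclone-backup.sh

# /etc/systemd/system/rclone-backup.timer
[Unit]
Description=Nightly rclone backup

[Timer]
OnCalendar=*-*-* 02:00:00
RandomizedDelaySec=15min
Persistent=true

[Install]
WantedBy=timers.target
You then enable it with sudo systemctl enable --now rclone-backup.timer.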
Graphical interfaces and usage from Android
Not everyone is comfortable with the command line, and for certain tasks (exploring content, checking that something is where it should be) a GUI can save time.
Rclone ships an official web GUI (experimental, but functional) that you start with:
rclone rcd --rc-web-gui --rc-user=admin --rc-pass=password
# then open http://localhost:5572 in the browser
In addition, there are third-party GUIs such as Rclone Browser or more modern projects like "Rclone UI". These tools let you drag and drop, schedule tasks from a window, and view progress in a friendlier way. On Linux, you can install them from repositories or use an AppImage; there are also Docker images that expose the interface on an HTTP port.
On Android, the trick is to reuse your rclone.conf file. Many Rclone client apps let you import it (or place it in an rclone/ folder in internal storage) and, from there, access all the defined remotes just like on the server. Ideal for streaming encrypted multimedia content from your phone.
Troubleshooting common problems and diagnosis
Working with cloud APIs means dealing with authentication errors and service limits. Knowing how to read the message and having the right flag at hand saves a lot of trouble.
Authentication errors ("failed to make oauth client" or expired tokens) are usually fixed with:
- rclone config reconnect remote: to renew the credentials.
- In extreme cases, deleting the remote and configuring it from scratch.
403 rate limit errors, especially on Google Drive, are reduced by lowering the aggressiveness:
rclone copy source: dest: \
--transfers 2 --checkers 4 \
--tpslimit 2 --tpslimit-burst 0
If a FUSE mount returns “permission denied”, check:
- That in /etc/fuse.conf the user_allow_other option is uncommented.
- That you are mounting with --allow-other and as the appropriate user.
When you suspect files are missing after a sync, the healthy approach is:
- Having tested beforehand with --dry-run.
- Using rclone check source destination --one-way and reviewing the combined report.
- Checking whether --exclude/--include patterns filtered out more than they should have.
For fine-grained debugging, -vv and the flags --dump headers / --dump bodies show the HTTP requests Rclone makes, while rclone backend features remote: or rclone test help you see exactly what each backend supports and how the connection performs.
Ultimately, Rclone lets you build anything from a modest backup of your Documents folder in Dropbox to a multi-cloud replication system with encryption, persistent mounts, and Prometheus monitoring. The learning curve can be a little intimidating at first, but once you master rclone config, copy, sync, mount and crypt, the rest is adding layers: automation with cron or systemd, advanced filtering, detailed logs, and minor performance tweaks to adapt it to your network and cloud environments. Once you fit all those pieces together, it stops being "just another console tool" and becomes the silent pillar that keeps your data safe, encrypted, and redundant without you having to think about it every day.