If you work with containers, sooner or later you'll get to the less glamorous part: deploying a Docker container on a remote server without it all turning into a chaotic mess of commands, ports, certificates, and scripts. The good news is that the Docker ecosystem (Plesk, remote Docker Engine, Docker Desktop with WSL2, Portainer, GitHub Actions, etc.) already has solutions for almost all the typical problems; you just have to put them together intelligently.
In this article you will see, in considerable detail, how the different scenarios for using Docker remotely work: from using a Linux or Windows server as a "build machine" or runtime node, to managing it from Plesk or VS Code, or even automating deployments with Docker Compose and CI/CD. The idea is that you can replicate the entire workflow (build, upload, run, expose ports, manage volumes, and control containers) without having Docker running directly on your main machine.
Requirements and compatibility for using Docker on remote servers
Before deploying anything, you need to be clear about where Docker can run in a supported manner and with what limitations. In Plesk-type environments, for example, there are very specific operating system and architecture requirements.
Plesk supports Docker on a select list of modern Linux systems: CentOS 7, Red Hat Enterprise Linux 7, Debian 10/11/12, Ubuntu 18.04/20.04/22.04/24.04, AlmaLinux 8.x/9.x, Rocky Linux 8.x, and also in Virtuozzo 7 environments starting with Update 1 Hotfix 1 (7.0.1-686). A key point: Docker in Plesk only works on x64 systems, so if you're dealing with unusual or 32-bit architectures, forget it.
There is an important detail that often goes unnoticed: Docker cannot be used if Plesk itself is deployed inside a Docker container. In other words, Plesk must run on a "real" host (physical or virtual) to manage containers, not the other way around. Furthermore, in Virtuozzo 7, CentOS 7-based containers come with a firewall enabled by default, forcing the Plesk administrator to manually configure firewall rules to open the ports that Plesk needs and those that the containers will use.
In the Windows world, things are different. With Plesk for Windows you can use Docker installed on a remote machine, without having Docker on the server where Plesk runs. For that, you have to use the remote Docker service and, very importantly, have an additional license (Remote Docker is included in packages like Hosting Pack, Power Pack, or Developer Pack, or can be purchased separately). This allows you to keep the Windows server "clean" and centralize containers on another node.
One last point about compatibility: Docker containers managed from Plesk cannot be migrated or cloned directly. However, you can back up the data they use through volumes or snapshots. So the usual strategy is: persistent data in volumes, a reproducible image, and, if necessary, snapshots for specific states.
Installation and deployment of a remote-ready Docker server
The basis of any remote deployment is a server with Docker correctly installed and configured. You can go the classic route (installing Docker CE on Ubuntu, Debian, etc.) or use solutions that come pre-packaged with Docker.
On platforms like Cloudbuilder, for example, creating a server with Docker pre-installed is basically a simple process in the panel: you choose a server image that already includes Docker on the "Applications" tab, select it in the setup wizard, and in a few minutes you'll have a host ready to receive containers. If it doesn't appear at first, you can search for "Docker" in the image search engine and select the application.
If what you want is a more advanced scenario, especially when working with Windows Server containers and Kubernetes (AKS/EKS/GKE), you can set up your own remote "build machine" with Docker Engine. Windows Server 2019, for example, already includes a license for Docker Engine Enterprise, which you can install via PowerShell using the DockerMsftProvider and the Containers feature. This approach is ideal if your laptop is running low on resources or if you need process isolation instead of Hyper-V isolation, which is common when you want your images to match the Windows nodes of a Kubernetes cluster.
The general flow would be: you start a Windows Server 2019 with the appropriate build (preferably in Server Core mode to save resources), install Docker Engine Enterprise, and then configure it to accept secure remote connections via TLS, so that you can build images and run containers from your local machine or from a pipeline, without touching that server directly except for the initial configuration.
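As a rough sketch of that installation step (check Microsoft's current documentation, since provider and package names can change over time):

```powershell
# Install the Microsoft package provider that distributes Docker Engine Enterprise
Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

# Install Docker itself (this also enables the Containers feature)
Install-Package -Name docker -ProviderName DockerMsftProvider -Force

# Reboot so the Containers feature and the Docker service start cleanly
Restart-Computer -Force
```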
Searching, selecting, and managing images in Docker (local and remote)
Once you have a Docker host, the next step is to manage the images you will use for your containers. This involves both the host's local repository and Docker Hub or other registries.
In interfaces like Plesk, you can use the search box to locate images. The system searches the enabled repositories, which are usually the local repository (images already downloaded and stored on the server with Docker) and Docker Hub. If an image is already stored locally, you will see indications such as "(local)" next to the version; otherwise, it will be downloaded from the remote registry.
It is common for a single application to have multiple tagged versions. To launch a specific version, simply select the appropriate tag when running the container. From Plesk, when choosing an image, you can pick the "Image version" in a drop-down menu or, if you prefer, simply choose the latest version. Additionally, you can access the documentation and description on Docker Hub from the image card (except for local images, where this doesn't apply).
Over time, the local repository becomes a catch-all, so it's important to clean up obsolete images. For example, from Plesk you can go to the Docker > Images section, search for a specific image, or click on the link under a product name to view all local tags and the space they occupy. From there you select the ones you no longer need and delete them, freeing up disk space on the host.
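If you manage the host over SSH rather than through a panel, the same cleanup can be done with the Docker CLI (a quick sketch; the image name is illustrative):

```bash
# List local images with their tags and sizes
docker images

# Remove a specific tag you no longer need (illustrative name)
docker rmi nginx:1.25

# Remove every image not referenced by any container
docker image prune -a
```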
Creation and detailed configuration of containers on a remote host

When running a container on a remote server, ideally you shouldn't just stick to a bare docker run, but rather properly control its configuration: memory, ports, volumes, environment variables, and restart policies.
In a panel like Plesk, the typical workflow would be to go to Docker > Containers > Run Container, find the image, select the version, and then, in the next step, adjust its settings. This is where you define, for example, the environment variables, the ports it will expose, and the volumes it will mount. When you click Run, the container will be created and listed in the Containers tab; from there you can open its console logs to verify that it is working correctly.
There are several advanced options that are worth keeping track of:
- Memory limit. By default, Docker containers can use all the available RAM on the host. If you don't want a container to overload the server, enable the memory limit option and define a reasonable value in MB for its workload.
- Automatic start. If you disable "Automatic startup after system restart", your containers will not start automatically when the host reboots, and the sites that depend on them will remain down until you bring them up manually. In production environments, it's usually a good idea to keep autostart enabled.
- Port allocation. By default, the automatic port assignment option maps the container's internal port to a random port on the host (for example, 32768). If you want control over that port, disable automatic assignment and configure a "Manual assignment". If this option does not appear, it usually means that the image does not expose any ports.
- Port security. When using manual assignment, Docker usually binds that port by default only to 127.0.0.1 (localhost), making the service inaccessible from the internet. This is perfect if you're going to put a reverse proxy in front of it or only need internal access. If you uncheck the option that blocks external access, the port will be bound to all interfaces and the application will be reachable from outside using the host's IP address plus the chosen port (a docker run equivalent of these options is sketched right after this list).
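Outside a panel, the same options map onto docker run flags. A minimal sketch, with illustrative image, port, and path names:

```bash
# --memory caps RAM usage, --restart brings the container back after reboots,
# the 127.0.0.1 prefix binds the published port to localhost only (e.g. behind
# a reverse proxy), and -v mounts a host directory as a persistent volume.
docker run -d \
  --name myapp \
  --memory 512m \
  --restart unless-stopped \
  -p 127.0.0.1:8080:80 \
  -e APP_ENV=production \
  -v /srv/myapp/data:/var/lib/myapp \
  myimage:latest
```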
The container settings tab also allows you to perform actions such as changing the container name, adjusting environment variables or volume mappings, reviewing logs and resource consumption, recreating the container with another image version, saving the current configuration as a new image, downloading a snapshot, or completely removing the container.
Persistent data management: volumes and backups
One of the critical points when deploying Docker remotely is how to persist data without tying it to the container's lifecycle. This is where Docker volumes come in, acting as host folders mounted inside the container.
A volume is nothing more than a directory on the host server mounted into the container's file system. This ensures that your data (databases, user-uploaded files, important logs, etc.) remains even if you stop or delete the container. This way, if you need to recreate the container or change the image version, your data stays intact.
When configuring a volume, you have to specify, on the one hand, the absolute path on the host where the data will reside (Host field) and, on the other hand, the absolute path within the container where the application expects to find them (Container field). If you need more than one volume, simply add new entries. It's also advisable to review the official Docker documentation to fully understand the differences between Docker-managed volumes and bind mounts on system paths.
Keep in mind that, although Plesk or other tools may not allow you to "migrate" the container as such, volumes can be copied or backed up like any other folder on the server, or even captured with file-system-level snapshots. This shifts the backup strategy's focus to the volumes rather than the container itself, which is much easier to rebuild.
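A minimal sketch of such a backup, assuming a bind mount at /srv/myapp/data and a named volume called myapp_data (both names are illustrative):

```bash
# Back up a bind-mounted host directory as a dated archive
tar czf /backups/myapp-data-$(date +%F).tar.gz -C /srv/myapp data

# Back up a named Docker volume through a throwaway container
docker run --rm \
  -v myapp_data:/from:ro \
  -v /backups:/to \
  alpine tar czf /to/myapp_data-$(date +%F).tar.gz -C /from .
```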
Network configuration and reverse proxy for remote containers
For a Docker application deployed on a remote server to be accessible from outside, it is not enough to expose ports in the container: it is necessary to coordinate host ports, firewall and, in many cases, a reverse proxy such as Nginx or Apache.
In environments where you use, for example, Cloudbuilder, if your application listens on port 3030 and you have a 3030:3030 mapping in docker-compose, you will need to open that port in the server's firewall policy. This is done by creating a custom rule and associating it with the server. Once opened, you can access the application from your browser using a URL like https://SERVER_IP:3030.
If you work with Plesk and Nginx, it's very common to configure reverse proxy rules to direct standard HTTP/HTTPS traffic (port 80 or 443) to the internal port where the container is listening. These rules end up in the domain's web server configuration, for example in /var/www/vhosts/system/$domain/conf/nginx.conf. This way you can serve a website contained in Docker as if it were just another site on the server, with its own domain, TLS certificate and so on, while the container listens on an internal port not directly exposed to the Internet.
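To give an idea of what such a rule looks like, here is an illustrative snippet (the upstream port 32768 is an assumption standing in for whatever localhost port the container is bound to, and in Plesk this kind of directive is usually added through the domain's additional nginx directives rather than by editing generated files by hand):

```nginx
location / {
    # Forward requests for this domain to the container bound on localhost
    proxy_pass http://127.0.0.1:32768;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```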
These configurations usually work well even if the server is behind NAT, provided your firewall ports and network mappings are configured consistently. In complex scenarios (multiple containers, multiple hosts, etc.), it's common to use a generic reverse proxy that acts as an HTTP router and communicates internally with the various Docker services through their internal ports or a dedicated Docker network.
Using Docker Compose to orchestrate deployments on remote servers
When your applications start having more than one service (frontend, backend, database, cache, etc.), working only with docker run gets awkward. That's where Docker Compose comes in, allowing you to define a complete service stack in a single file and bring it up at once on the remote server.
A simple, typical example would be a Node.js application with Express. First, you create your project (for example, in an "app" folder), then initialize it with npm init, install Express with npm install express, and write an index.js that starts the server on port 3030. Once you have the app running locally with node index, it's time to dockerize it.
To do this you create a Dockerfile, for example in "app/Dockerfile", where you define the base image (for example FROM node:12), set the WORKDIR, copy the code, run npm install, and expose port 3030. It's a good idea to add a .dockerignore to avoid uploading large folders like node_modules to the build context.
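Put together, that Dockerfile might look roughly like this (a sketch following the steps above; the exact commands depend on your project):

```dockerfile
FROM node:12
WORKDIR /usr/src/app

# Install dependencies first so the build cache is reused when only code changes
COPY package*.json ./
RUN npm install

# Copy the application code and expose the port the app listens on
COPY . .
EXPOSE 3030
CMD ["node", "index.js"]
```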
Then you define a docker-compose.yml file at the project's root, where you describe the services. In a simple app, you might have something like an "express" service that builds from ./app, maps port 3030:3030, and defines the startup command. In more serious projects, containers are also added for the frontend, the backend, the database (for example MongoDB) and any other component, each with its own configuration of volumes, networks, and variables.
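A minimal docker-compose.yml for that single-service case could look like this (the service name and port follow the example above; adjust them to your project):

```yaml
version: "3"
services:
  express:
    # Build the image from the app folder that contains the Dockerfile
    build: ./app
    ports:
      - "3030:3030"
    restart: unless-stopped
```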
To deploy this stack on a remote server, the usual procedure is:
- Connect via SSH to the server that already has Docker installed.
- Clone the repository with the project (or upload the files via SCP, although git is usually more convenient).
- Make sure the docker-compose binary is installed, as it doesn't come bundled with Docker in many distributions. On Linux, for example, you can download it from GitHub with curl, save it as /usr/local/bin/docker-compose, and mark it as executable. It's advisable to check the official "Install Docker Compose" page for the recommended version.
- From the folder containing the docker-compose.yml, run docker-compose up (perhaps with -d so that it runs in the background). The first run will take longer, because it has to download images and resolve dependencies; subsequent runs will be much faster (see the command sketch right after this list).
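In command form, those last two steps are roughly as follows (the Compose release number and project path are illustrative; check the official install page for the current version):

```bash
# Download a specific docker-compose release and make it executable
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

# Bring the stack up in the background from the project directory
cd /path/to/project
docker-compose up -d
```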
You can also use panels like Plesk to deploy Docker Compose stacks without touching SSH directly. From Docker > Stacks > Add Stack, you give the project a name and choose whether to paste the contents of the Compose file into an editor, upload it from your machine, or select a file already present in a domain's web space. The stack takes care of declaring and creating the containers, whose artifacts will be stored in the website's root directory.
Automated remote deployments with GitHub Actions and docker-compose
If you work with CI/CD, it makes sense to automate the deployment of your containers to a remote server every time you push to certain branches (for example, development or main). Docker Compose fits very well into this type of workflow.
A fairly common pattern is to have your GitHub Actions workflow connect to the remote Ubuntu server via SSH. Until now, many people do something like this: pull images from Docker Hub, stop and delete the running containers, and then launch a docker run for each one. By migrating to docker-compose, the workflow can be greatly simplified.
The most straightforward approach is that, once the SSH connection is established, the job does a cd to the repository on the server (or to the directory where you keep your docker-compose.yml) and runs the docker-compose commands you need, for example docker-compose pull to update the images and docker-compose up -d --remove-orphans to recreate the services according to the changes in the Compose file.
It's a perfectly valid strategy and, for many projects, more than enough and quite robust. The key is to ensure that the remote server has the docker and docker-compose binaries correctly installed, and that the user connecting via SSH has permission to run those commands (often membership in the docker group or a properly configured sudoers entry). From the GitHub Actions side, the rest is simply a matter of using a trusted SSH action and managing your secrets (server IP, username, private key, etc.) through the repository's secrets.
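As a hedged sketch of such a workflow, using appleboy/ssh-action as one example of a widely used SSH action (the branch, path, and secret names are assumptions you would adapt):

```yaml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@master   # pin a specific release in production
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            cd /home/deploy/myproject
            docker-compose pull
            docker-compose up -d --remove-orphans
```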
Remote Docker from Windows without Hyper-V: WSL2, Remote Docker Engine, and Portainer
If you're on Windows and can't or don't want to enable Hyper-V (for example, because it conflicts with other virtualization tools), you have several ways to use containers hosted on another server as if they were local, in the style of what LXD offers.
One option is to lean on Docker Desktop with the WSL2 backend. If you can use WSL2 (Windows Subsystem for Linux v2) but not classic Hyper-V, Docker Desktop allows you to set up a Linux-based development environment where containers run inside a WSL2 distribution (Ubuntu, for example) while you operate from Windows. The typical workflow is:
- Install WSL2 and a Linux distribution (Ubuntu 22.04, 24.04, etc.).
- Install Docker Desktop and, in its settings, enable the WSL2-based engine in the General section.
- In “WSL Integration”, choose the WSL2 distributions where you want to integrate Docker.
- From the distro, check docker --version and try docker run hello-world.
From there you can work with VS Code using the WSL, Dev Containers, and Docker extensions to develop directly in remote containers (actually hosted on WSL2), opening folders in the container, debugging, etc. It's very convenient for development, although it's not exactly the same as pointing to an external remote server.
If what you want is to use a truly remote Docker Engine, hosted on another server (Linux or Windows Server), you can configure it to accept TLS connections over TCP (typically on port 2376). To do this, you generate X.509 certificates (CA, server certificate, and client certificate), configure the server's daemon.json to enable TLS, specify the paths to the certificates, and open the TCP host where the daemon will listen. Then, on the client, you install the certificates in ~/.docker (or in the Windows user profile) and configure environment variables such as DOCKER_HOST and DOCKER_TLS_VERIFY or, in more recent versions, create a docker context pointing to that remote host. Once that context is active, any docker command you launch on your local machine actually runs against the remote server.
Another very useful tool in this scenario is Portainer, which provides a web interface for managing Docker containers on one or more servers. To install it, you first create a data volume for Portainer and then start its container, mapping ports 8000 and 9000 and mounting /var/run/docker.sock, which allows it to communicate with the host's Docker daemon. From there, you access it through port 9000 (remember to open it in your firewall), create an administrator user, and configure both local and remote connections. Portainer can communicate with other Docker nodes using the Docker API or the Portainer Agent, allowing you to manage multiple Docker hosts from a single web console.
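A minimal sketch of that installation, matching the ports mentioned above (newer Portainer releases also expose HTTPS on 9443):

```bash
# Persistent volume for Portainer's own data
docker volume create portainer_data

# Run Portainer CE with access to the local Docker daemon
docker run -d \
  --name portainer \
  --restart=always \
  -p 8000:8000 \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```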
Advanced configuration of remote Docker Engine with TLS and contexts
When you want your development machine to use a remote server as its main Docker engine, it is recommended to protect that communication with TLS rather than leaving the daemon wide open over TCP. This involves working with X.509 certificates and some fine-tuning on both the server and the client.
On a Windows Server with Docker Engine, for example, you can automate certificate generation with a PowerShell script that installs openssl, creates a CA, and issues a server certificate whose subjectAltName includes the DNS name and the necessary IPs (public, internal, and localhost), as well as client certificates for mutual authentication. The resulting files (usually ca.pem, server-cert.pem, server-key.pem, cert.pem and key.pem) are placed in an accessible folder, for example C:\ProgramData\docker\config\.
In the server's daemon.json you configure keys such as "tls": true, "tlsverify": true, "tlscacert", "tlscert", "tlskey" and the list of hosts, including tcp://0.0.0.0:2376 alongside the native Windows named pipe. Then restart the Docker service and verify that everything still works. Remember that the standard encrypted port is 2376, while 2375 is reserved for unencrypted connections (highly discouraged on anything internet-facing).
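Such a daemon.json might look roughly like this (the paths follow the Windows example above; adjust them to wherever your certificates actually live):

```json
{
  "hosts": ["tcp://0.0.0.0:2376", "npipe://"],
  "tls": true,
  "tlsverify": true,
  "tlscacert": "C:\\ProgramData\\docker\\config\\ca.pem",
  "tlscert": "C:\\ProgramData\\docker\\config\\server-cert.pem",
  "tlskey": "C:\\ProgramData\\docker\\config\\server-key.pem"
}
```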
Finally, on the client machine, you copy the certificates to your Docker directory (~/.docker or similar) and configure either the environment variables (DOCKER_HOST, DOCKER_TLS_VERIFY) or a context with docker context create, specifying the host, CA, cert, and key. Once the remote context is active, running docker version will show that the client and server have different versions and architectures (for example, a Linux/amd64 client and a Windows/amd64 server with Docker Engine Enterprise), but your commands are applied to the remote host as if it were local.
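On the client side, both approaches look roughly like this (the hostname and certificate paths are illustrative):

```bash
# Option 1: point the CLI at the remote, TLS-protected daemon with environment variables
export DOCKER_HOST=tcp://docker.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker   # must contain ca.pem, cert.pem and key.pem

# Option 2: create a named context and switch to it
docker context create remote-server \
  --docker "host=tcp://docker.example.com:2376,ca=$HOME/.docker/ca.pem,cert=$HOME/.docker/cert.pem,key=$HOME/.docker/key.pem"
docker context use remote-server
docker version   # client and server details now come from the remote host
```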
If you previously used Docker previews for WSL, you may have old contexts such as "wsl" that are no longer used. You can check this with docker context ls and, if necessary, remove obsolete contexts with docker context rm to avoid connection errors with old pipes.
Taken together, this whole collection of pieces (Plesk, remote Docker Engine, Docker Desktop with WSL2, docker-compose, Portainer, and GitHub Actions) lets you set up workflows in which you build and deploy containers on remote servers in a secure, replicable, and reasonably convenient way, separating development resources, execution nodes, administration panels, and orchestrators according to what best suits each case. Share the guide so that more users can learn how to deploy Docker containers remotely.