
If you come from programming or classical computing and now you're struggling with homelabs, Docker, Icecast, or AzuraCast, it's no wonder your head is spinning. Between ports, IPs, SSL, Windows, Linux, and containers, it can seem like you need a whole new degree just to set up a simple radio server.
The good news is that once you understand what containers in Windows actually are, how they differ from virtual machines, and in which cases they make sense, everything starts to fall into place. You can have several applications that, from the outside, appear to listen on the same port, each in its own container, with working SSL certificates, and without needing to fill your house with Raspberry Pis.
What is a container (for real) and why isn't it a virtual machine?
A software container is, essentially, an isolated, lightweight package that bundles an application along with everything it needs to run: libraries, runtime, configuration, and the user-mode parts of the operating system. This package runs on top of the host operating system's kernel instead of carrying an entire operating system inside it.
In a virtual machine, on the other hand, you have a full guest operating system with its own kernel, drivers, and services, running on top of a hypervisor like Hyper-V, VMware, or VirtualBox. Each VM believes it has its own hardware: virtual CPUs, RAM, disks, network cards, etc. This provides very strong isolation, but also consumes more resources and takes longer to boot.
With containers, the host operating system (for example, Windows Server 2019 or 2022, or a Linux distribution) shares its kernel with all containers. Each container sees its own virtual filesystem, process space, and logical network configuration, yet underneath everything runs through the same kernel.
That trick of sharing the kernel makes a container much lighter than a VM: it takes up less disk space, requires less memory, and boots in seconds (or less). That's why you can run dozens or hundreds of containers where before you could only manage a few virtual machines.
In summary, while VMs virtualize hardware and build an entire operating system on top of it, containers virtualize the operating system and isolate only the application and its user-mode environment.
The container ecosystem in Windows: what Microsoft offers
Microsoft has been investing heavily in containers for years, both for Windows and Linux. It hasn't stopped at "Docker works on Windows and that's it": it has built a whole ecosystem around them: official images, Visual Studio integration, Azure support, and orchestration tools.
On the local development side, you can use Docker Desktop on Windows 10/11 to run Windows and Linux containers on your own PC. Docker Desktop leverages the container functionality built into Windows and, when needed, a small VM or WSL2 for Linux containers, but all of that is transparent to you.
If you work in a server environment, Windows Server 2016, 2019, 2022, and 2025 let you run containers natively. With them, you can build serious solutions: classic .NET applications, backend services, APIs, microservices, and so on, packaged as images and deployed as containers.
For the complete development cycle, Visual Studio and Visual Studio Code integrate native support for Docker, Docker Compose, Kubernetes, and Helm. This lets you compile, debug, build images, and publish them to a registry with a couple of clicks or directly from the editor, without constantly switching between tools. If you want to compare environments and tools, check out this guide on IDE and development tools.
You can push the images you build to Docker Hub (if you don't mind them being public) or to Azure Container Registry (ACR) if you want a private registry within your organization or cloud environment. Your development, testing, and production environments can then pull the images from there and deploy them as needed.
How a Windows container actually works
A Windows container relies on the host kernel, but it doesn't attach to it haphazardly. The system gives it an isolated view of resources: a virtualized file system, its own registry entries, processes, network, and, if you want, persistent storage mounted from outside.
The files and libraries the application needs in user mode are packaged into a base image. On top of that base image, additional layers are stacked: specific dependencies, configuration, your application code… The result of that stack of layers is the final container image, the template from which you start one or more containers.
One key point: images are immutable. When you create a container from an image, the changes your application makes (temporary files, logs, etc.) go into a writable layer on top. If you discard the container, that layer is lost unless you've mounted a persistent volume or external storage, such as an Azure disk or an Azure Files share.
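As a sketch of how a persistent volume survives the writable layer, assuming a placeholder image name (`myapp`) and mount path (`C:\data`):

```shell
# Create a named volume managed by Docker
docker volume create appdata

# Start a container with the volume mounted at C:\data (placeholder image)
docker run -d --name myapp -v appdata:C:\data myapp:latest

# Removing the container discards its writable layer...
docker rm -f myapp

# ...but a new container sees the same files in C:\data via the volume
docker run -d --name myapp -v appdata:C:\data myapp:latest
```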
This layered system allows you to reuse images between applications. For example, the .NET team publishes pre-built .NET Core images (based on Nano Server), and you only add your code and configuration. This saves you from installing runtimes each time, and the shared layers are downloaded only once.
For isolation, Windows offers two modes: process isolation, where containers share the host kernel directly, and Hyper-V isolation, where each container runs inside a lightweight utility VM with its own kernel. The first is lighter; the second offers added security and compatibility.
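On a Windows host, Docker exposes this choice through the `--isolation` flag. A quick sketch (the Server Core tag shown is one of the published tags and must match your host's Windows version):

```shell
# Process isolation: shares the host kernel (lighter, faster)
docker run --rm --isolation=process mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver

# Hyper-V isolation: runs inside a utility VM with its own kernel (stronger isolation)
docker run --rm --isolation=hyperv mcr.microsoft.com/windows/servercore:ltsc2022 cmd /c ver
```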
Base Windows images and container types
Microsoft offers several official Windows base images on which to build your custom images. Each one targets different scenarios, sizes, and compatibility needs.
The “Windows” image includes virtually all system APIs and services (except for some server roles). It is the most complete, appropriate if you need maximum compatibility with applications that use many operating system functions.
The “Windows Server” image is geared towards server scenarios and includes the full suite of Windows Server APIs and services. It's ideal for enterprise applications that were already designed for that environment.
“Windows Server Core” is a lighter version. It exposes a subset of the Windows Server APIs and supports the full .NET Framework. It includes most, but not all, server roles, making it a good foundation for typical server applications that don't require a graphical interface.
“Nano Server” is the most minimal and optimized image. It's designed for .NET Core and specific server roles, and its small size makes it very attractive for containers that need to start quickly and consume few resources.
Thanks to the layered nature of images, you don't always have to start from one of these "pure" base images. You can use, for example, an official .NET Core or ASP.NET Core image that already includes the runtime, and then just add your application. This reduces configuration work and also improves Docker caching, because you share layers with other images.
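A minimal Dockerfile along those lines might look like this (the exact image tag and the `MyRadioApi.dll` name are illustrative assumptions):

```dockerfile
# Start from an official ASP.NET Core runtime image on Nano Server
FROM mcr.microsoft.com/dotnet/aspnet:8.0-nanoserver-ltsc2022
WORKDIR /app

# Copy the already-published application output into the image
COPY ./publish/ .

EXPOSE 8080
ENTRYPOINT ["dotnet", "MyRadioApi.dll"]
```

Because the runtime layers come from the official image, only your application layers are unique to this build.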
Containers for developers and administrators
For the development team, containers are pure gold: they let you boot environments identical to production in a matter of seconds, without messing up the laptop's operating system and without fighting over library versions or dependencies.
Instead of the typical phrase "it works on my machine," the developer starts a container with the same image as on the production server. That image includes the exact versions of runtimes, frameworks and configuration that the application needs, so many problems of "this DLL is different here" or "the Java version does not match" disappear.
Containers also make collaborative work easier. Sharing an environment is as simple as passing along a Dockerfile or the name of an image in a registry; any team member can start the same service in seconds, without following lengthy installation manuals.
For IT professionals and system administrators, containers let you build standardized infrastructure for development, QA, and production. Each environment is defined by the same images and orchestration files, reducing surprises and manual configuration errors.
Additionally, you can use containers interactively to run, for example, multiple versions of the same command-line tool on the same server without conflicts. This is really useful for testing, migrations, or compatibility with legacy software, and for tasks such as creating Bash scripts in Windows.
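For example, taking Node.js as the tool in question, two versions can coexist on the same host with nothing installed outside the containers:

```shell
# Each version lives in its own container; the host stays clean
docker run -it --rm node:18 node --version
docker run -it --rm node:20 node --version
```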
Key differences between Windows and Linux containers
Although conceptually similar, there are important differences between Windows and Linux containers. Both share the host's kernel, but a Windows kernel and a Linux kernel are not the same and don't expose the same APIs, so each host can natively run only containers of its own operating system type.
On a Linux host, you can only run Linux containers natively. On a Windows host, you can run Windows containers natively and, using techniques like Hyper-V or WSL2, also Linux containers, although in that case there is actually an additional layer that acts as an intermediary.
Windows has two isolation modes: process and Hyper-V. Process isolation is very similar to Linux's: the container shares the kernel directly, and its main process shows up on the host as just another process. If you look at the process list with PowerShell, you'll see that the container's PID matches a process on the host.
In Hyper-V mode, each container runs inside a utility VM with its own isolated kernel. From the host, you no longer see the application process directly, but rather the VM worker process (vmwp.exe on Windows). This is more secure and offers greater compatibility with some applications, but it consumes slightly more resources.
There are also specific limitations in Windows containers: not everything can be containerized. For example, services like Microsoft DTC (Distributed Transaction Coordinator), client applications with traditional graphical interfaces like Office, and certain infrastructure roles such as DHCP, DNS, domain controllers, NTP, or print and file servers are not supported inside standard containers.
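You can observe this from the host; a sketch, assuming a Server Core image matching your host version:

```shell
# Process isolation: start a long-running process inside the container
docker run -d --isolation=process --name demo mcr.microsoft.com/windows/servercore:ltsc2022 ping -t localhost

# docker top lists the container's processes with their host-visible PIDs;
# Get-Process in PowerShell on the host shows a ping.exe with a matching PID
docker top demo
```

With `--isolation=hyperv` instead, the host's process list would show only the VM worker process, not ping.exe itself.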
Advantages of using containers (also on Windows)
The list of advantages of containers is long, and applies to both Linux and Windows. The first is isolation: each container is an independent unit, which reduces conflicts between applications and limits the damage if something breaks or is compromised.
The second is portability: a container encapsulates the application with its dependencies and configuration, so you can move it between machines, data centers, or public clouds without reconfiguring everything from scratch. The "build once, run anywhere" mantra makes perfect sense here.
Another big advantage is resource efficiency: since multiple containers share the same kernel, RAM and disk consumption per instance is much lower than for a virtual machine. You can run many more applications on the same physical server, which translates into cost savings.
In development, containers are a brutal accelerator: they create reproducible, automatable environments, very much in line with DevOps and CI/CD practices. Defining the image in a Dockerfile and versioning it in Git lets you control exactly what's in production and how it was built.
Furthermore, maintainability improves: updating an application means building a new image and deploying it. If something goes wrong, you can roll back without drama, simply by pointing the deployment at the previous image tag.
Safety and risks in containers
Container security is a serious matter: it's not just about "isolating it a bit" and calling it a day. The entire chain needs to be protected, from the base image you use to the runtime where the container executes. To strengthen host protection, review tools and apps to improve security.
One of the most common risks is using images with vulnerabilities or even with malware. That's why it's important to scan images (both your own and third-party) with vulnerability analysis tools before uploading or deploying them.
Another danger is exposure of sensitive data: passwords, API keys, or certificates embedded in the image or in uncontrolled environment variables can leak critical information if the image is published to a public registry or someone gains access to the system.
Runtime configuration also needs care: excessive privileges, unrestricted host volume mounts, overly open network capabilities, and so on. A misconfigured container can be used as an entry point to compromise the host or the rest of the infrastructure.
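As a sketch of runtime hardening with standard `docker run` flags (shown for a Linux container; the image name and env file are placeholders):

```shell
# Cap resources, make the root filesystem read-only,
# and keep secrets out of the image and the command line
docker run -d \
  --read-only \
  --memory 256m --cpus 0.5 \
  --env-file ./secrets.env \
  myapp:1.0
```

The env file keeps credentials out of the image layers and out of your shell history, though a proper secrets manager is still preferable in production.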
To mitigate all of this, scanning tools, static and dynamic code analysis, supply chain security policies, and orchestration platform controls (such as Kubernetes) are used to define resource limits, network policies, and access rules.
Containers or virtual machines: when is each one appropriate?
Choosing between containers and virtual machines isn't a black-and-white issue. The two technologies are complementary and, in fact, in many environments they are combined: VMs as a base, with containers on top for applications.
VMs are the logical choice when you need total isolation, when you must run different operating systems (for example, Linux on a Windows host without an intermediary layer), or when the application requires very low-level access to specific hardware or drivers.
Containers, on the other hand, shine when the priority is efficiency, speed, and elasticity: they start in seconds, scale easily, and consume fewer resources, which is perfect for microservices, APIs, web servers, and modern applications in general.
In the cloud, providers typically run containers on background virtual machines. For example, Azure Kubernetes Service (AKS) deploys nodes on Azure VMs, and containers run on those VMs. This gives you the flexibility of both worlds: strong isolation at the node level and lightweight application-level performance.
In many cases, the practical decision is to mix: use VMs for critical infrastructure services or tightly coupled to the operating system, and containers for application layers that benefit from scalability and portability.
Orchestration: why Kubernetes and company are essential
While you only have two or three containers, managing them manually with `docker run`, `docker stop`, or `docker logs` isn't a problem. The problem arises when your application consists of dozens or hundreds of containers, with replicas, load balancing, updates, and monitoring.
That's where container orchestrators like Kubernetes come in; Kubernetes has become a key component of any modern container-based infrastructure. Its mission is to manage containers at scale and in production.
Typical orchestrator functions include mass deployment of containers, scheduling workloads onto cluster nodes, health monitoring (if one container fails, another takes over), failover between nodes, and automatic scaling with load.
They also handle networking: exposing services to the outside, providing internal service discovery, implementing firewall rules between pods, and so on. And they coordinate application updates (e.g., rolling deployments) to avoid service outages.
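For instance, a minimal Kubernetes Deployment that keeps three replicas of a service running might look like this (the image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: radio-api
spec:
  replicas: 3                  # Kubernetes keeps three copies alive at all times
  selector:
    matchLabels:
      app: radio-api
  template:
    metadata:
      labels:
        app: radio-api
    spec:
      containers:
        - name: radio-api
          image: myregistry/radio-api:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

If a pod dies, the Deployment's controller replaces it automatically; updating the image tag triggers a rolling update.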
In the Microsoft world, the central component is Azure Kubernetes Service (AKS), which offers managed Kubernetes both in Azure and on-premises through Azure Arc or Azure Stack. Other platforms, such as Red Hat OpenShift, also provide growing support for Windows containers, expanding the options for hybrid environments.
Containers in the cloud and as a service
The major cloud providers have assembled a whole catalog of container services so you don't have to manage everything from scratch. At the infrastructure (IaaS) and platform (PaaS) levels, you can find everything from image registries to fully managed Kubernetes clusters.
Amazon Web Services offers Amazon ECS (Elastic Container Service) and Amazon EKS (Elastic Kubernetes Service). ECS is an AWS-proprietary service; EKS, on the other hand, gives you managed Kubernetes, which is very useful if you want to stick to the de facto industry standard.
In Microsoft Azure, in addition to AKS, you have Azure Container Registry to store and version your container images privately. This fits perfectly with CI/CD pipelines based on Azure DevOps or GitHub Actions.
Google Cloud Platform offers Google Kubernetes Engine (GKE) as its primary managed Kubernetes solution. It also includes App Engine for running web and mobile applications without directly managing containers, although similar mechanisms are at play.
Besides these giants, many other IaaS and PaaS providers offer variations of "containers as a service." The key is that you focus on your application's image and its configuration, and the provider takes care of nodes, system patches, scaling, and even part of the infrastructure security.
Tools for creating and managing containers
The most popular tool for working with containers is, without a doubt, Docker. It introduced a standard image format, a runtime, and a surrounding ecosystem that greatly simplified container adoption, even for people who weren't systems experts.
At the heart of Docker is Docker Engine, the component responsible for creating, running, and managing containers on the host. On top of that, the Dockerfile is the text file where you describe how to build your image: which base to use, which packages to install, which ports to expose, and which command to run at startup.
The resulting container image is a logical file containing all the necessary components for the application: code, runtime, dependencies, and part of the operating system. From this image, you can launch one or more containers, which are the live instances that run on the host.
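The whole cycle, from Dockerfile to image to running container to registry, boils down to a few commands (names and the registry address are placeholder assumptions):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Start a container from it, mapping host port 8080 to container port 80
docker run -d -p 8080:80 myapp:1.0

# Tag and publish the image so other environments can pull it
docker tag myapp:1.0 myregistry.example.com/myapp:1.0
docker push myregistry.example.com/myapp:1.0
```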
To share and distribute images, Docker Hub acts as a massive public registry, with thousands of official and community images. Organizations often combine it with private registries, such as ACR or self-hosted ones, to better control what gets deployed to production.
Besides Docker and Kubernetes, other tools have emerged: Podman (daemon-free and compatible with the Docker CLI), containerd (the runtime that Docker uses underneath), OpenShift as an enterprise platform based on Kubernetes, HashiCorp's Nomad for orchestrating workloads, Docker Swarm as a simpler orchestrator, and solutions like LXD or Vagrant that cover related scenarios.
Practical applications: from Netflix to your homelab
Containers aren't just for big companies. Globally, companies like Netflix use them to scale their streaming platform, banks like JPMorgan Chase leverage them for online banking services, and hospitals like the Mayo Clinic apply them in patient management systems.
In the education sector, universities like Harvard use containers for online learning platforms, ensuring consistent environments for students spread across the globe. And in public administration, even agencies like the U.S. Department of Defense use containers in national security applications.
But getting down to earth, in a homelab or personal project, containers allow you to set up services like Icecast, Azuracast, web servers, databases, or monitoring panels on a single machine, without overlapping ports or dependencies.
Instead of dedicating a Raspberry Pi per service, you can set up several containers on the same host and use a reverse proxy (e.g., containerized Nginx or Traefik) that receives HTTPS traffic on port 443 and distributes it internally to your different services based on the domain or route.
Regarding SSL, the key point is to understand that the encryption ends at some point in the chain: this can be in the container running the service or in a reverse proxy in front of it. In both cases, the container sees "normal" HTTP traffic to its internal port, even though everything from the outside is encrypted.
On the network, each container has its own internal IP address within the Docker network and its own internal port. From the outside, the host exposes one or more ports and maps them to the containers' internal ports. This explains why you can have multiple containers all listening on internal port 80, while on the host you open, for example, 8080, 8081, and 8082, one for each.
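In Docker Compose terms, the idea looks like this (image names and ports are illustrative assumptions; each service keeps its own internal port, and the host exposes a distinct external one per service):

```yaml
services:
  icecast:
    image: example/icecast        # placeholder image
    ports:
      - "8081:8000"               # host 8081 -> container 8000
  azuracast:
    image: example/azuracast      # placeholder image
    ports:
      - "8082:80"                 # host 8082 -> container 80
  proxy:
    image: nginx:alpine           # reverse proxy terminating HTTPS
    ports:
      - "443:443"
```

With the proxy in place, outside clients only ever talk HTTPS to port 443, and nginx forwards plain HTTP to each service over the internal Docker network.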
In this context, containers in Windows make a lot of sense when you want to take advantage of your current Windows machine (laptop, desktop, server) to host multiple services without setting up a zoo of physical devices, maintaining order, isolation and relatively simple management.
Ultimately, understanding how containers work in Windows, the role of base images, how they integrate with the network, and their advantages over virtual machines allows you to make smarter decisions: from choosing whether your next .NET app will be containerized or run in a VM, to knowing how to set up an Icecast with SSL on your ThinkPad without burning through ports or resources.
