If you have a PC or server with multiple network cards and want to get the most out of them, you've probably heard of NIC Teaming in Windows 11. This technique has been used for years in professional environments, but it is increasingly common to see it in home labs, hand-built NAS devices, or small home servers that move a lot of data over the network.
The problem is that in Windows 11 many of these features are hidden or outright cut down. Unlike Windows Server, it's not as simple as opening a graphical wizard and being done. You need to understand exactly what NIC Teaming is, what types exist (classic LBFO, SET, SDN…), what limitations apply on client systems, and how to set it up correctly using PowerShell and a properly configured switch.
What is NIC Teaming and why is it relevant in Windows 11?
When we talk about NIC Teaming, we're referring to the ability to combine multiple physical network cards into a single logical adapter. From Windows' perspective, you no longer have two or three independent interfaces, but rather a single "virtual" adapter on which you configure the IP address, subnet mask, gateway, and DNS.
This approach means traffic no longer depends on a single card, but on all the NICs grouped in the team. The goal is twofold: to increase total bandwidth (link aggregation) and to gain fault tolerance, so that if a cable, a switch port, or an adapter fails, the rest continue to function without you losing connectivity.
A simple way to visualize this is to imagine you have several water pipes in parallel and you join them into a single pipe of larger diameter: the total flow rate increases, and a puncture in one of the tubes won't leave you dry. Something similar happens in a network: you combine the capacity of several links and minimize the impact of an individual failure.
Physical servers very typically have multiple 1 Gbps or 10 Gbps Ethernet ports. Instead of leaving some unused or dedicating them to different tasks, the most logical approach is to aggregate them into a single NIC team that handles load balancing and high availability. This is especially useful for file servers, virtualization environments, home NAS devices, or machines used as backup targets and media centers.
In Windows Server, NIC Teaming support has been mature for several versions and can be managed both from Server Manager and through PowerShell. In Windows 10 and Windows 11 the story changes: there is no built-in graphical wizard for classic teaming and many LBFO features are designed for servers, yet there are cmdlets and alternative solutions that let you achieve very similar results if you know where to look.
Differences between classic NIC Teaming (LBFO), SET, and SDN
To clarify what can and cannot be done in Windows 11, it helps to separate several concepts that are sometimes used interchangeably: NIC Teaming (LBFO), Switch Embedded Teaming (SET), and Software-Defined Networking (SDN). They all aim to improve performance and availability, but they don't work the same way, nor are they designed for the same scenarios.
Classic NIC Teaming, also known as LBFO (Load Balancing and Failover), was the technology Microsoft popularized in earlier versions of Windows Server. With LBFO, you create a NIC team independent of the Hyper-V virtual switch and expose one or more virtual adapters to the host or to virtual machines, handling load balancing and failover between the physical cards.
With the arrival of Windows Server 2016 and Microsoft's strong commitment to software-defined networking (SDN), Switch Embedded Teaming (SET) appeared. In this model, the teaming logic is integrated directly into the Hyper-V virtual switch. Instead of creating a team externally and then plugging it into the vSwitch, in SET the aggregated physical adapters feed the vSwitch itself, reducing intermediate layers and improving integration with the SDN stack.
SET was designed with advanced virtualization environments in mind, where technologies such as RDMA (Remote Direct Memory Access), NVGRE, VXLAN, NFV, or network controllers come into play. In these scenarios, the goal is to lower latency, maximize performance, and control traffic from many virtual machines within a highly flexible network architecture.
On workstations and client systems like Windows 10 and Windows 11, the situation is more down-to-earth: you don't usually run a large-scale Hyper-V SDN deployment, but you may well have multiple Ethernet cards that you want to group to communicate with a managed switch, an advanced router, or a NAS that supports link aggregation. Here the options involve using the available cmdlets (for example, New-NetSwitchTeam) to create a basic team, or resorting to solutions that restore some of the LBFO capabilities in editions where they are no longer officially exposed.
Real-world example: Setting up NIC Teaming on a NUC with Windows 11
A very illustrative example is a high-end NUC Extreme, which you can find second-hand at a very good price and which comes with interesting extras: generous SSD storage, the option of installing an ITX GPU and, what matters here, two 1 Gbps Ethernet interfaces. This type of machine is perfect as a mini home or lab server.
Out of the box, those two RJ45 ports function as independent 1 Gbps interfaces. Even if you connect both to the same switch, Windows will continue to see them as separate paths, so you won't gain bandwidth just by plugging in two cables. For the system to use them together, they need to be grouped into a NIC team.
Before touching Windows, it is essential to prepare the managed switch where you will connect the machine. Many switches let you create a "trunk" or link aggregation group (for example, using LACP) with two or more ports. The idea is that these ports behave as a single, higher-capacity logical link, so the NUC and the switch can exchange traffic as if they had a "double pipeline".
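The exact syntax depends entirely on your switch vendor, but as a rough sketch, on a Cisco IOS-style managed switch an LACP aggregation group spanning two ports might look like this (port and channel-group numbers are hypothetical; check your own switch's documentation):

```
! Put ports 1 and 2 into an LACP (802.3ad) aggregation group
interface range GigabitEthernet1/0/1 - 2
 channel-group 1 mode active    ! "active" = negotiate via LACP
 exit
! The logical Port-channel interface now represents both links
interface Port-channel1
 description Trunk to NUC with two teamed NICs
```

On other vendors the same concept goes by names such as "LAG", "trunk group", or "bond"; the key point is that both switch ports must belong to the same aggregation group before you create the team on the Windows side.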
Once the switch trunk is configured, it's time for the Windows 11 part. The first step is to open a PowerShell session with administrator permissions. From there, the key command to get started is Get-NetAdapter, which lists all the network adapters present in the system, both physical and virtual.
In a typical setup, you'll see something like a Wi-Fi interface (for example, Intel Wi-Fi 6), two wired Ethernet NICs (perhaps named LAN01 and LAN02), a Bluetooth interface, and maybe an additional adapter such as a 1 Gbps USB Ethernet dongle. What you need to note down are the exact names of the cards you want to include in the team, because those are the names you will use in the creation command.
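As a quick sketch (adapter names like LAN01 are examples and will differ on your machine), you can narrow the listing down to physical adapters and the properties that matter here:

```powershell
# List only physical adapters, showing name, description, status and speed
Get-NetAdapter -Physical |
    Select-Object Name, InterfaceDescription, Status, LinkSpeed |
    Format-Table -AutoSize

# Optionally rename an adapter so the team creation command is easier to read
Rename-NetAdapter -Name "Ethernet" -NewName "LAN01"
```

Renaming is purely cosmetic, but it makes the later commands far less error-prone than typing the default "Ethernet 2"-style names.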

Creating a Switch Team with PowerShell in Windows 11
Once the interfaces that will form the team have been identified (for example, LAN01 and LAN02), the next step is to create the team itself. Windows 11 provides the New-NetSwitchTeam cmdlet for this purpose, which lets you define a team of adapters for switching and link aggregation scenarios.
The basic command would be something like New-NetSwitchTeam -Name "LANTEAMING" -TeamMembers "LAN01","LAN02". This groups the two physical NICs under a new logical adapter, which appears in the system as if it were an additional card. From that point on, IP properties are configured on LANTEAMING and not directly on LAN01 or LAN02.
To verify that the team has been created correctly, you can run Get-NetSwitchTeam. The result should show a team named LANTEAMING and a member list that includes LAN01 and LAN02. It's also a good idea to open the Windows Network Connections panel to confirm that the new logical adapter appears among the available options.
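Putting the creation and verification steps together, a minimal sketch (run from an elevated PowerShell session, using the example names from above) would be:

```powershell
# Create a switch team from the two physical NICs
New-NetSwitchTeam -Name "LANTEAMING" -TeamMembers "LAN01","LAN02"

# Confirm the team exists and check which members it contains
Get-NetSwitchTeam
Get-NetSwitchTeamMember -Team "LANTEAMING"
```

Expect a brief connectivity drop while the logical adapter comes up, so avoid running this over a remote session that depends on LAN01 or LAN02.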
If you run ipconfig after creating the team, you will see that the IP address is no longer associated with each physical NIC, but with the LANTEAMING adapter. That is, the network configuration (IP, mask, gateway, and DNS) is assigned to the logical team, while the physical cards become "legs" that support that virtual adapter.
In practice, the new team adapter usually obtains an IP address via DHCP, and often retains the same IP address that one of the network cards (for example, LAN01) had before the team was formed, provided the process is done without drastically interrupting the connection. In any case, you can easily configure a static IP address, with its subnet mask, gateway, and DNS servers, on the LANTEAMING adapter just as you would with a conventional NIC.
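For example, assigning a static configuration to the team adapter might look like this (the addresses below are placeholders for your own network):

```powershell
# Assign a static IPv4 address and default gateway to the team adapter
New-NetIPAddress -InterfaceAlias "LANTEAMING" `
    -IPAddress 192.168.1.50 -PrefixLength 24 `
    -DefaultGateway 192.168.1.1

# Point DNS resolution at the router and a public resolver
Set-DnsClientServerAddress -InterfaceAlias "LANTEAMING" `
    -ServerAddresses "192.168.1.1","1.1.1.1"
```

If the adapter was previously on DHCP, you may need to clear the old lease first (for example with Remove-NetIPAddress) before the static address takes effect cleanly.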
A clear advantage of this setup is that if one interface fails (for example, if you disconnect a cable), the machine still has network access thanks to the other. You lose aggregate bandwidth but keep connectivity, which amounts to a high availability bonus against isolated failures of cabling, switch ports, or network adapters.
Real benefits: aggregated bandwidth and fault tolerance
It's often said, as a simplification, that NIC Teaming "doubles the speed," but the reality is more nuanced. When you aggregate two properly configured 1 Gbps links, what you actually get is a total capacity of up to 2 Gbps distributed across multiple connections, not a single 2 Gbps flow to a single client.
In everyday use, this means that if multiple devices access your NUC, NAS, or Windows 11 server simultaneously, the system can distribute the different sessions among the NICs that make up the team. The sum of all flows can approach 2 Gbps, provided the switch supports link aggregation and the load balancing algorithm does its job well.
The other major advantage is high availability. With multiple network cards, cables, and switch ports, the likelihood of a complete network outage is greatly reduced. If one component fails, the team continues to provide service at lower speed but without a complete disconnection. For servers with connected users or virtual machines running on top of them, this is vital.
Even in an advanced home environment, where the server centralizes backups, multimedia, or test environments, a network outage in the middle of a long backup or a streaming session can cause quite a bit of trouble. With NIC Teaming, minor hardware issues are mitigated and become much less noticeable.
However, to truly notice a sustained performance improvement, the rest of your infrastructure has to keep pace: you need a managed switch with support for LACP or another aggregation method, a router or firewall capable of handling that bandwidth and, if you use a NAS or disk arrays, devices that also have multiple ports with bonding or teaming support.
NIC Teaming in Windows 10 and Windows 11: features and PowerShell
In Windows 10 and Windows 11, enabling NIC Teaming is not as user-friendly as in Windows Server, where the Server Manager guides you with a graphical wizard. On client systems, the usual process involves PowerShell with elevated privileges and the use of cmdlets such as Get-NetAdapter and New-NetSwitchTeam.
The sequence in Windows 10 is very similar to the NUC example: first, Get-NetAdapter is run to see which interfaces are available, and then a command like New-NetSwitchTeam -Name "LAN" -TeamMembers "LAN1","LAN2" is launched, adjusting the names to match those displayed on your machine. After creation, you can verify the result with Get-NetSwitchTeam and see the new logical adapter in the connections panel.
In Windows 11 the philosophy is similar, but it's important to keep in mind that Microsoft has been reducing official support for LBFO on client machines. Even so, many people still use New-NetSwitchTeam to set up basic teams when the drivers allow it, and in some cases they resort to scripts or external tools to re-enable functions that are no longer exposed by default.
A very curious example is that of a user who set up a NAS and media server with Windows 11 IoT Enterprise LTSC 2024 and discovered that he couldn't use LBFO/NIC Teaming as he did in Windows 10. He also couldn't switch to Windows Server, because some applications in his lab weren't compatible with that edition. After considerable research, relying on unofficial documentation, he created a one-stop installation solution that restores LBFO capabilities in Windows 11.
This solution, published in a GitHub repository (hifihedgehog/Windows11LBFO), has been shown to work on Windows 11 Pro 24H2 and Windows 11 IoT Enterprise LTSC 2024, both on physical hardware and in virtual machines. Thanks to it, it's once again possible to use LACP link aggregation between advanced routers, managed switches, and home servers, with multiple clients transferring data simultaneously without overloading a single link.
If you want to try something like this, it's recommended that you thoroughly review the repository documentation, check the requirements, and report any issues on GitHub so the author can refine the process. It's still an advanced and unofficial solution, but it opens the door to restoring the behavior of Classic NIC Teaming in Windows 11 for demanding home and laboratory environments.
Network scenarios with NIC Teaming and SET in Windows Server
Although we're focusing on Windows 11 here, understanding how Windows Server handles NIC Teaming helps paint a more complete picture of where modern networking is heading. Windows Server 2016 and later define officially supported scenarios involving NIC Teaming, SET, and SDN.
In the field of software-defined networking (SDN), there is the Network Controller, which allows the deployment and management of a multi-node instance capable of managing network policy through a northbound REST API. From there, virtual networks based on Hyper-V Network Virtualization can be created and controlled, using encapsulations such as NVGRE or VXLAN to build flexible topologies.
Within Network Function Virtualization (NFV) scenarios, Windows Server supports the deployment of software load balancers for both north-south traffic (data center entry/exit) and east-west traffic (between virtual machines), Layer 3 gateways, site-to-site IPsec VPN (IKEv2), GRE gateways, and transit routing via BGP with M+N redundancy. In all these cases, having multiple physical NICs aggregated via SET or NIC Teaming helps provide sufficient bandwidth and redundancy.
Regarding the network platform, Windows Server allows the use of converged NICs that combine RDMA and Ethernet traffic on a single adapter, the creation of low-latency paths with Packet Direct enabled on the Hyper-V virtual switch, and SET configurations that distribute SMB Direct and RDMA flows between a maximum of two physical adapters. Here, teaming is fully integrated with the virtualization features.
If we look specifically at the Hyper-V virtual switch, supported scenarios include creating a vSwitch with an RDMA vNIC, a vSwitch with vNICs that leverage SET and RDMA, creating SET teams within the switch itself, and managing those teams via PowerShell. In this context, NIC Teaming is no longer an optional extra, but a central component of many modern virtualization architectures.
Related services: DNS, IPAM and other network components
NIC Teaming does not work in isolation; it usually coexists with network services such as DNS, DHCP, or IPAM, especially in Windows Server deployments. Understanding which scenarios these services support helps fit the role of NIC teams into the bigger picture.
For example, on DNS servers running Windows Server you can define policies based on geographic location to direct traffic to different addresses depending on the origin of the query, configure split-brain DNS to give different internal and external answers, apply filters to queries, use policies to load balance applications through DNS, and generate smart answers based on the time of day.
In addition, DNS supports zone transfer policies, integration with Active Directory Domain Services (AD DS) via integrated zones, response rate limiting to protect against certain attacks, and DNS-based Authentication of Named Entities (DANE). All of this relies on the server having a stable network with redundant routes and adequate bandwidth, thanks to teaming.
In the field of IPAM (IP Address Management), Windows Server lets you detect and manage DNS and DHCP servers, as well as IP address ranges, across multiple federated Active Directory forests. You can centralize the management of zones and records, define highly granular role-based access controls, and delegate the administration of certain properties to specific user groups. Furthermore, many of these tasks can be automated using IPAM-specific PowerShell cmdlets.
When these services (DNS, DHCP, IPAM) run on hosts with multiple network cards aggregated using NIC Teaming or SET, they gain resilience and performance. A temporary outage of one link does not bring down the service, and bandwidth aggregation helps absorb peaks in queries or address renewals without throttling a single network port.
Creating, expanding, and removing NIC teams
The NIC Teaming scenarios supported in Windows Server and, more cautiously, in certain client systems cover not only the creation of teams but also their modification and deletion in a controlled manner. This lets you adapt the network topology without rebuilding everything from scratch every time your needs change.
When creating a NIC team in a compatible configuration, adapters of the same type (for example, two 1 Gbps Ethernet cards) are selected and added under a team name, choosing the appropriate operating mode based on what the switch supports (LACP, static, switch-independent, etc.). In addition to New-NetSwitchTeam, there are cmdlets to adjust advanced parameters, check member status, and view usage statistics.
If at any point you need more capacity, you can add additional physical adapters to an existing team, provided the combination is compatible with the configured teaming mode. This allows, for example, starting with two NICs and later adding a third to gain more bandwidth or redundancy, as long as your switch and available ports allow it.
It is also possible to do the opposite: remove team members, whether because you want to use them for other purposes or because you're reorganizing the network. The team will continue to function with fewer members, albeit with less bandwidth and lower fault tolerance. If you reduce it to a single NIC, you effectively lose the benefits of teaming, although you keep the centralized IP configuration.
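Both operations are single cmdlets. As a sketch, using the hypothetical team and adapter names from the earlier example, growing and then shrinking a team looks like this:

```powershell
# Add a third physical NIC to an existing team
Add-NetSwitchTeamMember -Team "LANTEAMING" -Name "LAN03"

# Later, take one member back out of the team
Remove-NetSwitchTeamMember -Team "LANTEAMING" -Name "LAN02"

# Check which members remain
Get-NetSwitchTeamMember -Team "LANTEAMING"
```

Remember to mirror any membership change on the switch side: a port removed from the team should also leave the aggregation group, or the switch may keep hashing traffic toward a leg that no longer answers.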
Finally, you always have the option of completely removing a NIC team when you no longer need it. When you delete the team, Windows dismantles the logical adapter and returns the physical cards to their original state, without the IP address that was associated with the old team. This step needs to be planned carefully because it involves a major change in the host's network topology and can leave services unreachable if you don't reassign addresses properly.
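Deleting the team from the earlier example is a single command, best run from a local console rather than a remote session, since connectivity through the team adapter will drop:

```powershell
# Dissolve the team; LAN01 and LAN02 become independent adapters again
Remove-NetSwitchTeam -Name "LANTEAMING"
```

Afterwards, reconfigure IP settings on whichever physical NIC you intend to keep using, and undo the aggregation group on the switch as well.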
All this flexibility, both in Windows 11 with PowerShell and specific solutions, and in Windows Server with SET and SDN, is designed so that we can get the most out of the available network hardware, gaining performance, stability, and resilience when something as simple as a cable or a port decides to fail at the worst possible moment.
Thanks to the possibilities offered by NIC Teaming, LBFO, SET, and the associated network services, it is now relatively easy to build anything from a home lab to a professional environment where traffic is distributed efficiently, hardware failures are handled without drama, and the network infrastructure adapts as the project grows and becomes more complex.
