Create a local driver repository and deploy it to multiple computers

  • Centralizing drivers in a local repository with well-organized .inf files allows for automated deployments and updates across many machines without duplicating images.
  • Driver and image management tools (HII, DISM, AOMEI, etc.) generate databases and images that link hardware to the correct driver and facilitate mass network deployment.
  • Visual Studio and Git allow you to manage driver projects, packages, and deployment scripts across multiple repositories, keeping versions and changes under control.
  • Well-designed storage repositories in hypervisors such as Citrix Hypervisor ensure that system images, driver images, and VDIs are stored and served securely and with good performance.


When you manage dozens or hundreds of computers, keeping drivers under control makes all the difference between a stable environment and a nightmare of issues. Manually installing drivers, model by model, is impractical in any moderately large organization. The smart approach is to set up a local driver repository, integrate it with your deployment tools, and automate as much as possible.

At the same time, it is increasingly common for deployments, code, and tools to coexist in the same workflow: Git repositories, Visual Studio solutions, system images, storage repositories in hypervisors such as Citrix Hypervisor/XenServer, and so on. All of this intersects with driver management and the need to deploy images and drivers to multiple machines across the network.

What is a local driver repository and what problems does it solve?

A local driver repository is, basically, a centralized location in your infrastructure (typically a shared folder on a server) where you store all the driver packages you'll use to deploy, maintain, and update your systems. This folder is usually exposed as a UNC share (\\server\resource) and, in many scenarios, also as a directory accessible via HTTP(S) through a web server.

This driver repository serves as the single source of truth for your imaging task sequences, deployment scripts, Hardware Independent Imaging (HII) mechanisms, and update processes. Instead of embedding specific drivers in each system image (which multiplies the number of images to maintain), you work with images that are as generic as possible and delegate to the repository the logic of which driver to apply to each device.

Endpoint management solutions often include the concept of preferred or replica content servers: there you decide where the driver repository for each location is physically hosted. Normally each large office has a nearby server so that computers don't have to cross a saturated WAN to download drivers.


Basic requirements for your driver repository to function properly

To prevent the setup from becoming a headache, your repository must meet a series of minimum technical requirements. First, ensure the UNC path is stable, consistent, and properly published in your management tool. That same location, or a synchronized replica, must also be reachable via URL if your product downloads drivers over HTTP(S) during pre-installation or from WinPE.

One of the critical points is that each driver must include its .inf file. This file describes the supported hardware, installation paths, dependencies, and parameters that Plug and Play (PnP) or the HII tool uses to match a physical device with the correct driver. Simply having executables or generic installers isn't enough; without an .inf file, automation becomes significantly more complex.
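To make the role of the .inf concrete, here is a hypothetical skeleton showing the sections PnP and an indexer actually read (illustrative only — vendor, device IDs, and file names are made up, and a real driver INF contains many more entries):

```ini
; Hypothetical .inf skeleton (not a working driver)

[Version]
Signature   = "$WINDOWS NT$"
Class       = Net
ClassGuid   = {4d36e972-e325-11ce-bfc1-08002be10318}
Provider    = %Vendor%
CatalogFile = example.cat          ; signature catalog for the package
DriverVer   = 01/15/2024,10.0.0.1

[Manufacturer]
%Vendor% = Models, NTamd64

[Models.NTamd64]
; Hardware ID -> install section: this is the line that lets PnP and
; the repository indexer match a physical device to this driver.
%DeviceDesc% = Install_Section, PCI\VEN_8086&DEV_15F3

[Strings]
Vendor     = "Example Corp"
DeviceDesc = "Example Gigabit Network Adapter"
```

A bare setup.exe carries none of this metadata, which is why installer-only packages defeat automated matching.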

At an internal organizational level, it is highly recommended that you maintain a logical structure of subfolders (by manufacturer, model, device type, or a reasonable mix: /Dell/Laptops, /HP/Desktop, /Network, /Audio, etc.). Although many driver management consoles scan recursively without relying on this structure, when it comes time to clean up or perform maintenance, you'll save yourself many hours and errors.

How is the driver database generated on the repository?

Once the packages are copied to the repository, the driver management tool that ships with your provisioning suite comes into play (often under names like "HII Driver Management"). From that console, you convert the plain shared folder into an indexed repository usable by task sequences.

The usual flow involves a wizard accessible from menus similar to Tools > Provisioning > Driver Management. There you select the UNC path of the repository, verify that the associated URL is correct, and start the library generation process. The system traverses the folder tree, reads each .inf, and creates a database (drivers.db3 or equivalent) that links hardware IDs to specific drivers.

When it finishes, the tool itself shows you the number of files processed and how many of them have been recognized as valid drivers. If you detect a significant difference between what you expected and what it has indexed, it's likely that .inf files are missing, there are poorly packaged files, or you're including installers that don't expose the driver in a standard way.

Each time you add new packages to the repository, you will need to repeat the library generation process. The system re-analyzes all the content, updating the database without losing previous entries. It's advisable to fold this step into your maintenance routine whenever you introduce new drivers or major updates.
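As a rough sketch of what that library generation does under the hood — walking the tree, reading each .inf, and recording hardware-ID-to-driver mappings — here is a minimal Python indexer. The regex and schema are deliberate simplifications (real tools parse the INF [Models] sections properly); paths and the drivers.db3 name mirror the description above:

```python
import os
import re
import sqlite3

def build_driver_index(repo_root: str, db_path: str) -> int:
    """Walk the repository tree, read every .inf file, and record
    hardware-ID -> .inf mappings in a small SQLite database, loosely
    mimicking what an HII console stores in drivers.db3."""
    # Simplified: real hardware IDs come from the INF [Models] sections.
    hwid_pattern = re.compile(r"(?:PCI|USB)\\[A-Za-z0-9&_]+")
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS drivers (hwid TEXT, inf_path TEXT)")
    indexed = 0
    for dirpath, _dirs, files in os.walk(repo_root):
        for name in files:
            if not name.lower().endswith(".inf"):
                continue  # bare installers without an .inf are ignored
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                text = fh.read()
            for match in hwid_pattern.finditer(text):
                con.execute("INSERT INTO drivers VALUES (?, ?)",
                            (match.group(0), path))
                indexed += 1
    con.commit()
    con.close()
    return indexed
```

Comparing the returned count against the number of packages you copied in is exactly the sanity check the console's "files processed / valid drivers" summary gives you.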

Maintaining and updating an existing driver repository


A driver repository is a living, constantly changing element. New models are released, manufacturers fix bugs, security updates are published, and drivers in general become outdated. That's why it's crucial to define a procedure for keeping the driver database reasonably up to date.

The typical cycle consists of downloading the latest packages from official websites, verifying that they include well-structured .inf files, and copying them to the appropriate subfolders within the central repository. Once this is done, return to the driver management console and relaunch the library build task to incorporate the new additions.

If you keep your repository organized by manufacturer and model, you'll find it much easier to remove old versions, keep only the recommended drivers, or segment them by hardware generation. Although you can dump "everything" into the same folder and let the tool manage it, in practice this usually results in an unmanageable repository with multiple duplicate versions and a higher risk that Plug and Play will prefer a driver other than the one you wanted.

Using the repository in image task sequences

The most common way to leverage a local driver repository is to integrate it with operating system deployment task sequences. The idea is to work with a Windows image that is as hardware-independent as possible and let the "deploy driver package" tasks apply the appropriate drivers for each computer.

In many environments, a single imaging sequence is configured for the entire organization, to which several driver deployment steps are added. Each step is guarded by a WMI query or a model filter, so it only runs on the computers it's actually intended for. This way, a single workflow covers laptops, desktops, and workstations from various brands without the need to maintain multiple parallel sequences.
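The gating logic of those steps can be pictured with a toy model — here a Python lookup table stands in for the per-step conditions (a real step evaluates a WMI query such as SELECT Model FROM Win32_ComputerSystem; the model names and UNC paths below are made up):

```python
# Hypothetical model filters mapped to driver package paths; a real
# task sequence attaches a WMI condition to each deployment step.
DRIVER_STEPS = [
    ("Latitude 5440", r"\\server\drivers\Dell\Laptops\Latitude5440"),
    ("OptiPlex 7010", r"\\server\drivers\Dell\Desktops\OptiPlex7010"),
    ("EliteBook 840", r"\\server\drivers\HP\Laptops\EliteBook840"),
]

def packages_for(model: str) -> list[str]:
    """Return the package paths whose model filter matches, i.e. the
    deployment steps that would actually run on this computer."""
    return [path for pattern, path in DRIVER_STEPS if pattern in model]
```

A machine reporting "Dell Latitude 5440" triggers only the Latitude step; an unlisted model triggers none, which is exactly why one sequence can serve the whole fleet.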

These steps draw directly on the driver repository and its associated database. The sequence determines which package to install, and the provisioning tool locates the necessary drivers in the repository, downloading them via the UNC path or HTTP(S) depending on whether the computer is in the pre-installation phase, in WinPE, or already running the operating system.

Allow users to update drivers from a portal

A recurring question in IT teams is whether it's possible to prepare a task sequence consisting only of driver package deployment steps and publish it on a Software Center-type portal, so that users can run it themselves when they need to update their drivers.

On a technical level, if your tool supports launching on-demand sequences and you respect permissions and policies, the setup is viable. You would define a sequence without destructive steps (no reformatting or reinstalling the system), containing only driver update actions based on the packages you already have cataloged in the repository.

The steps would continue using WMI queries or model filters so that each machine only runs the packages it's meant to. The difference is the trigger point: instead of orchestrating them from the console as part of an image project, you publish the sequence in a kind of application catalog that the user runs at will or when IT indicates it in an update campaign.

However, it's wise to be cautious: updating drivers on production machines can reveal incompatibilities that don't appear in a clean installation. The sensible thing is to test each package thoroughly, restrict this type of sequence to highly controlled models or advanced users, and clearly document when and how it should be run.

Driver projects and driver packages in Visual Studio

On the development side, installing drivers on end-user devices usually goes hand in hand with driver projects in Visual Studio. Here it's useful to distinguish between two concepts: the driver project itself and the driver package project.

A driver project is the one that generates the driver binary (typically a kernel-mode .sys file or another type of driver component) and often the corresponding INF file. These projects are created with the Windows driver development tools, and Visual Studio offers specific templates and wizards for this purpose.

The driver package project acts as an installation container: it groups one or more driver binaries and all related files into a single "package" used to distribute, install, and debug the driver on remote computers. When you build such a project, Visual Studio generates the structure needed for the driver to be deployed in a standard way.

When you create a driver solution from a modern template, Visual Studio normally generates two projects automatically: one for the driver and one for the package. If for any reason your solution doesn't yet have the package, you can add it manually via the new project option, selecting the "Windows Driver Installation Package" template and checking the option to add it to the existing solution.

If your solution already has a driver package, you can modify it to reference other projects in the same solution. From Solution Explorer, open the package project, go to the References node, and add or remove references to the driver projects you want to include in the final package. This way, a single solution can contain multiple drivers and associated packages, a common practice in samples like the classic "Toaster Sample Driver".

Multiple Git repositories and driver workflow in Visual Studio

In environments where development and systems teams work together, it is very common for driver automation scripts, deployment tools, and backend code to be distributed across several Git repositories. Visual Studio 2022 (from version 17.4 onwards) facilitates this scenario by allowing you to work with up to 25 active repositories at the same time in a single IDE instance.


This means you can open a complex solution that combines frontend, APIs, libraries, deployment scripts, documentation, and utilities, with each repository managed from the integrated Git views. The "Git Changes" and "Git Repository" windows clearly separate changes by repository and let you stage, commit, merge, rebase, rename branches, and perform other common operations without losing sight of which repository you're working in.

Visual Studio also copes well with multiple GitHub accounts or a mix of corporate and personal repositories. Each repository's Git configuration remembers which account was used, greatly simplifying access to different remotes depending on the project. Furthermore, you can create branches simultaneously in several related repositories, which is very useful when preparing a new version of a deployment tool that affects multiple components.

As for how to load those repositories, you can either work with a solution (.sln) that groups projects from different repositories, or open a root folder containing several subdirectories, each with its own .git directory. In both cases, Visual Studio detects the repositories and activates them transparently, showing you a unified but organized view of the status of each one.

Strategies for organizing forks and remote repositories

When you get into the dynamic of collaborating on external projects and maintaining forks, the question arises of whether it's better to clone each fork into a different directory (~/src/user1/project, ~/src/me/project) or to work with a single code tree and several configured remotes. Both strategies make sense depending on the volume of changes and the type of collaboration.

If you opt for a single directory, you usually have an "origin" remote pointing to your fork and an additional remote (conventionally called "upstream") pointing to the original repository. You then create branches that track each remote as needed. It's a more compact approach that reduces local code duplication, but it requires care over which remote each branch pushes to.
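A minimal sketch of the single-directory setup, driving Git from Python's subprocess module (the URLs are placeholders, and "upstream" is only a naming convention):

```python
import subprocess
import tempfile

def run(args, cwd):
    """Run a git command in cwd and fail loudly if it errors."""
    return subprocess.run(args, cwd=cwd, check=True,
                          capture_output=True, text=True)

repo = tempfile.mkdtemp()
run(["git", "init", "-q"], repo)
# "origin" points at your fork, "upstream" at the original project.
run(["git", "remote", "add", "origin",
     "https://example.com/me/project.git"], repo)
run(["git", "remote", "add", "upstream",
     "https://example.com/user1/project.git"], repo)
remotes = run(["git", "remote"], repo).stdout.split()
```

From here, `git fetch upstream` followed by branching off the upstream branch keeps your work based on the original project, while pushes default to your fork.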

Windows installation and image deployment on multiple computers simultaneously

Beyond the driver repository, many organizations need to install Windows on multiple computers simultaneously with the same system configuration, applications, and drivers. The practical way to do this is to create a reference image and deploy it over the LAN, instead of reinstalling machines one by one.

This is done with image backup and deployment tools such as AOMEI Backupper and AOMEI Image Deploy. The idea is simple: you prepare a "master" machine with the operating system, drivers, and software you need, generalize the system (for example, with Sysprep, to remove machine-specific identifiers and avoid SID conflicts), and then create a complete system or disk image that is stored on a share or NAS accessible to all destination computers.

AOMEI Image Deploy lets you boot client computers over the network using PXE, provided the network card supports it and a DHCP server is available (or you enable one through the tool). On the server you create a bootable WinPE environment; clients connect, the central console detects each machine's IP, and once confirmed, you select the image file, the destination disks, and how many machines to deploy simultaneously.

This solution greatly simplifies massive "bare metal" installation: simply prepare the source machine properly, create the image, and let the tool replicate that configuration on dozens of machines. Furthermore, the more advanced AOMEI editions offer "Universal Restore," meaning you can deploy the same image on different hardware, with the necessary drivers adjusted for a smooth boot.

The concept of image deployment and its advantages in the enterprise

Image deployment consists of customizing an operating system with your applications, drivers, and settings on one computer and capturing an image of that state, which is then automatically distributed to the rest of the computers. It is, in practice, controlled cloning of the source machine.

The advantages are clear: time and effort savings, standardized configuration, and the ability to rapidly provision newly acquired systems. Instead of spending hours installing and configuring each workstation, you deploy a standard image in parallel to many clients and then make only the fine adjustments that don't make sense to automate.

Prerequisites for deploying images over a network with AOMEI Image Deploy

For the deployment to work without surprises, you need a fully operational Windows server computer and one or more client machines where the image will be restored. It's important that the server and clients are on the same network segment of the LAN and that the client NICs support PXE booting.

In the clients' BIOS you will need to configure network boot as the first option, ensure that the target disks have the same logical numbering (ideally, leave only the target disk connected), and verify that the server's Windows Recovery Environment (Windows RE) is complete. If the server runs a system earlier than Windows 7 or lacks WinRE, you will need to install the Windows AIK/ADK and turn to recovery utilities such as Windows Boot Recovery Toolkit.

The system or disk image is created with AOMEI Backupper, saving it to a NAS or shared folder accessible from the same LAN. Subsequently, from AOMEI Image Deploy, the WinPE support is generated, the necessary services are started, the clients are booted via the network, they are detected, the destination disks are assigned, and the deployment is launched, with the possibility of viewing the progress of each computer and automatically shutting down or restarting upon completion.

Design and management of driver packages in Windows

When we talk about low-level "drivers" in Windows, it helps to understand exactly what a driver package is. It's not just the .sys file, but the entire set of files required to install the device: INF files, binaries, signature catalogs, auxiliary DLLs, etc. Windows allows you to add these packages to an image before, during, or after system installation.

In offline servicing mode, using DISM, you can mount a Windows or Windows PE image and add, remove, or list driver packages without booting the operating system. Drivers distributed as .cab files with the "Designed for Windows" logo typically require expanding the .cab before installation; those embedded in non-standard installers can only be applied to online systems, often via custom commands in answer files.
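As an illustration of the offline path, this small helper just assembles the DISM command line for injecting a driver folder into a mounted image (the mount point and repository path are examples; the command itself only does anything on Windows, from an elevated prompt):

```python
def dism_add_drivers(image_mount: str, driver_dir: str) -> list[str]:
    """Build the DISM call that adds every .inf found under driver_dir
    (thanks to /Recurse) to the image mounted at image_mount."""
    return [
        "dism",
        f"/Image:{image_mount}",
        "/Add-Driver",
        f"/Driver:{driver_dir}",
        "/Recurse",
    ]

cmd = dism_add_drivers(r"C:\mount\windows", r"\\server\drivers\Dell")
# subprocess.run(cmd, check=True)  # uncomment on an elevated Windows prompt
```

The /Recurse switch is the double-edged sword discussed later: it picks up every .inf under the path, so point it only at the subfolders you actually want in the image.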

During unattended Windows installation, using Setup and an unattended answer file, you can define local or network paths where the .inf packages are located. Depending on the configuration pass (windowsPE, offlineServicing, etc.), these packages are integrated into the driver store before the first boot, so the system has the drivers it needs to start or to bring up critical components such as storage and networking.

Once the system is up and running, you can use PnPUtil to add or remove packages on the fly, or rely on scripts and answer files that run the installation in audit mode. This approach is useful when you want to maintain a very simple base image and add only the essential drivers for the specific hardware where it's deployed.

Driver ranking, digital signatures, and folder management

One typical problem is that a driver imports successfully into the driver store, but when the system starts, PnP decides to install a different driver it considers "better." This is due to the ranking performed by the PnP manager, which follows a priority order based on signature, PnP ID match, date, and driver version.

This implies that a signed driver with only a compatible-ID match can override an unsigned driver that matches the hardware at the exact-ID level. It's also possible for an older version to prevail if it has a signature or Plug and Play match that the system considers superior. That's why it's so important to check which versions you're importing and how they behave alongside Windows's generic drivers.
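The effect can be illustrated with a deliberately oversimplified scoring model (the real ranking algorithm has many more tiers; the point is only that the signature component dominates the score, and lower scores win):

```python
def rank(signed: bool, exact_id_match: bool) -> int:
    """Toy PnP-style rank: lower is better. The signature penalty is
    deliberately much larger than the ID-match penalty, which is why a
    signed compatible-ID driver can beat an unsigned exact-ID one."""
    signature_penalty = 0 if signed else 0x8000  # unsigned: huge penalty
    match_penalty = 0 if exact_id_match else 1   # compatible ID: tiny penalty
    return signature_penalty + match_penalty

signed_compatible = rank(signed=True, exact_id_match=False)
unsigned_exact = rank(signed=False, exact_id_match=True)
```

Here signed_compatible ends up with the lower (better) rank, so PnP installs the signed driver even though the unsigned one matched the hardware ID exactly — the behavior described above.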

Regarding security, driver packages should be digitally signed. Kernel-mode boot-start binaries (.sys files critical for accessing the system disk) typically require embedded signatures to prevent them from being lost during updates. Signed Plug and Play drivers include a catalog file (.cat) containing the hashes of all the package's files, and it is this signature that Windows verifies to allow installation.

In the source folder, it's advisable to separate packages into distinct directories by driver or category to avoid filename collisions when adding many .inf files. Windows internally renames them to OemX.inf after installation, but conflicts can occur at the source if multiple packages include the same filename. If you use answer files or DISM with /Recurse on a path, every .inf in the subfolders will be added to the store, so you must carefully control each directory's contents to avoid bloating the image with unnecessary drivers.

Storage repositories and considerations in Citrix Hypervisor/XenServer

In virtualized environments, driver and system image management often relies on storage repositories (SR) where virtual disks, templates, ISOs, and other files are stored. Citrix Hypervisor (formerly XenServer) offers a wide range of SR types: local LVM, EXT3/EXT4, NFS, SMB, GFS2, iSCSI, HBA, ISO, udev, etc., each with its own VDI size limits and performance characteristics.

From XenCenter you can use wizards like "New Storage Repository" or CLI commands like sr-create to define SRs on local disks, iSCSI LUNs, Fibre Channel arrays, or NFS/SMB shares. Each SR type has its own parameters (device, server, serverpath, SCSIid, provider, targetIQN, nfsversion, etc.) and its own caveats: maximum VDI sizes, block size requirements (usually 512 bytes, requiring emulation on native 4K disks), thin provisioning support, snapshot restrictions, and metrics, among others.
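For example, assembling the CLI call for an NFS SR might look like this — a sketch only: the name, server, and export path are placeholders, shared=true reflects common practice for pool-wide NFS SRs, and other SR types take different device-config keys:

```python
def xe_sr_create_nfs(name_label: str, server: str, serverpath: str) -> list[str]:
    """Build an 'xe sr-create' invocation for an NFS SR using the
    device-config keys mentioned above."""
    return [
        "xe", "sr-create",
        f"name-label={name_label}",
        "type=nfs",
        "content-type=user",
        "shared=true",
        f"device-config:server={server}",
        f"device-config:serverpath={serverpath}",
    ]

cmd = xe_sr_create_nfs("Driver image store", "nfs01.example.local", "/export/sr")
# Run this on a pool host; here we only construct the argument list.
```

Building the argument list in a script like this keeps SR creation repeatable across pools instead of clicking through the wizard each time.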


SRs based on local LVM or on HBA offer stable write performance and fast cloning and snapshot operations, in exchange for certain overhead and limitations. EXT3/EXT4 SRs allow thin provisioning on local storage but can degrade performance under intensive operations. For shared storage, NFS and SMB provide VHD-format VDIs with thin provisioning, ideal for live migration and for booting VMs on any host in the pool, although available space must be monitored carefully to prevent writes from failing and machines from crashing when it reaches 100%.

The GFS2 type over shared block storage provides thin provisioning in clustered environments, allowing VDIs of up to 16 TiB and better space efficiency through shared base images and numerous snapshots. In return, it introduces restrictions such as a maximum number of GFS2 SRs, the requirement for clustering and multipathing, lack of support for certain features (CHAP, trim, some storage migration methods, etc.), and the obligation to keep SR usage below certain thresholds to avoid severely degrading performance.

In the case of SRs based on ISO files (NFS or CIFS), they are used to build centralized CD/DVD image libraries from which you can serve system installers, tools, and distributions. Here too there are specific parameters (location, type, version, username, cifspassword_secret, etc.), and SMB 3.0 is recommended over 1.0 for security and robustness.

In all these environments, the recommendation is to use dedicated storage networks (ideally with aggregated links and redundant switches), constantly monitor free space, and avoid manually touching the contents of SR directories on the file server, as Citrix Hypervisor assumes full control over them and any external modification can corrupt VDIs or metadata.

As you can see, setting up a local driver repository, integrating it with deployment sequences, combining it with imaging solutions like AOMEI, and properly managing code and storage repositories is a matter of making several pieces fit together.

When everything is well designed —centralized and indexed drivers, ready-made standard images, healthy shared storage, and rigorous version control in Git and Visual Studio— deploying and maintaining many computers ceases to be an exercise in survival and becomes a repeatable, traceable, and much more manageable process. Share this guide so other users can learn about the topic.