Persistent memory (PMEM) has become one of those concepts that comes up whenever performance, in-memory databases, or advanced virtualization are discussed. It is not just "another" storage technology: it introduces a new level in the memory hierarchy, bridging the gap between RAM and traditional storage.
In the lines that follow we will break all of this down in detail, explaining what PMEM is, how it works, and what modes of use it offers. The article also covers what platforms such as Windows, Azure, VMware vSphere, Lenovo, Red Hat, Oracle Exadata, and SQL Server support, and what real advantages and disadvantages these modules bring. The goal is that, by the end, you have a clear idea of when it makes sense to use them and what practical implications they have for hardware, the operating system, and applications.
What is persistent memory (PMEM) and why does it matter?
When we talk about persistent memory, we are referring to a type of non-volatile memory mounted on standard DIMM modules, installed in the same slots as traditional DRAM. Unlike RAM, data is not lost when the server shuts down or during an unexpected power outage, yet access is much faster than with an SSD or NVMe drive.
In practice, PMEM sits between DRAM and storage in the memory hierarchy: it is somewhat slower than DRAM but offers latencies in the nanosecond range and a much lower cost per gigabyte, with capacities of 128, 256, or 512 GB per module. Compared with SSDs and HDDs, it provides much faster access and connects directly to the memory bus.
These modules are also known as NVDIMMs or, in the case of Intel Optane, DCPMMs (Data Center Persistent Memory Modules). They can be used as system memory (Memory Mode), as very low latency storage (App Direct mode), or in a mixed mode that combines both approaches.
The main point is that the information survives restarts, shutdowns, and failures. This opens the door to in-memory databases that do not have to be rebuilt from scratch at startup, ultrafast persistent caches, and architectures where DRAM acts as a super-fast cache while PMEM provides additional, resilient capacity.
Basic concepts and architecture of persistent memory
PMEM relies on non-volatile memory (NVM) technologies that are directly accessible by the CPU, without going through the normal disk I/O path. This means data can be addressed byte by byte, like memory, rather than only in blocks, which simplifies certain workloads and drastically reduces latency.
In a classic architecture we only have a large pool of DRAM and, beneath it, a storage subsystem of SSDs or HDDs. PMEM introduces tiered memory architectures, in which DRAM acts as a high-performance cache and PMEM as a higher-capacity, slower but persistent layer, which in turn sits on top of even more massive flash and disk storage.
From the server's point of view, these modules reside in standard DIMM slots close to the CPU, so the physical path to the data is minimized. To maximize bandwidth, manufacturers allow the creation of interleaved sets, in which several PMEM modules are logically aggregated to form a continuous address space or region.
In environments such as vSphere, Windows Server, or Linux, PMEM can present itself as a logical disk, as a namespace, or directly as a byte-addressable memory region, depending on the operating mode and the capabilities of the operating system and hypervisor.

Access modes: block, DAX and internal operation
PMEM can be exploited in two main ways: as a traditional block device or as direct access memory (DAX). In block mode, the operating system manages it like a disk: access happens in sectors or blocks, passing through the storage stack and the file system.
Direct access (DAX) mode instead exposes the device as byte-addressable memory. This lets applications map files directly into their address space and read or write them as if they were RAM, but with persistence. The approach strips out intermediate layers and reduces latency to a minimum, although it also carries additional risks if the software is not designed with atomic writes and fault handling in mind.
Windows, for example, allows DAX on specific NTFS volumes, while ReFS is typically used in block-access configurations. In Linux environments, DAX is supported on file systems prepared for it and on namespaces configured in that mode.
When PMEM is used as storage in Azure Stack HCI or Windows Server, it is commonly assigned automatically to the high-performance cache tier, while slower media (SSD, HDD) provide capacity. This fits well with Storage Spaces Direct scenarios, where PMEM acts as a hot data tier with latencies of tens of microseconds.
Regions, PmemDisk and Block Translation Table (BTT)
On platforms like Windows Server, PMEM is grouped into regions: sets of one or more modules interleaved to create a continuous address space. These regions are typically configured in the server's BIOS or firmware, using 2-way or 4-way interleaving to distribute addresses across the physical modules and improve performance.
From these regions, logical PMEM disks can be created, known in Windows as PmemDisks. A PmemDisk is simply a contiguous range of non-volatile memory that the system sees as a disk partition or a LUN. It can then be initialized, partitioned, and formatted with NTFS or ReFS, with or without DAX enabled, as needed.
Each persistent memory module has its own Label Storage Area (LSA), where configuration metadata is stored: which regions each module belongs to, how they are interleaved, which PmemDisks exist, and so on. Tools such as the PowerShell cmdlets Get-PmemDisk, Get-PmemPhysicalDevice, or Get-PmemUnusedRegion let you inspect this configuration.
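As a minimal sketch, those cmdlets can be run from an elevated PowerShell session to inspect the current layout; the output naturally depends on the installed hardware:

```powershell
# List the physical PMEM modules, with health and firmware details
Get-PmemPhysicalDevice

# List the logical PMEM disks already provisioned
Get-PmemDisk

# Show interleaved regions not yet backing any PmemDisk
Get-PmemUnusedRegion
```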
One tricky point about PMEM is that, unlike many SSDs, it does not, by itself, protect against incomplete (torn) sector writes in the event of a power outage or sudden system failure. The Block Translation Table (BTT) was introduced to reduce this risk: it provides atomic sector update semantics for persistent memory devices.
The BTT lets applications keep seeing the device as reliable block storage, preventing old and new data from being mixed after a power outage. In general, it is recommended to enable BTT when PMEM is used in block mode (especially for database log files), and to consider disabling it only in DAX contexts with large pages, where its performance impact could be significant.
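A hedged example of how such a disk could be created with the documented AtomicityType parameter of New-PmemDisk, on a server that actually has unused PMEM regions:

```powershell
# Create a PmemDisk from each unused region, enabling BTT for
# atomic sector update semantics (recommended for block-mode use)
Get-PmemUnusedRegion | New-PmemDisk -AtomicityType BlockTranslationTable

# Alternatively, create one without BTT for DAX-oriented volumes:
# Get-PmemUnusedRegion | New-PmemDisk -AtomicityType None
```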
Supported hardware and operating modes (App Direct, memory, mixed)
Within the persistent memory ecosystem, we find solutions such as NVDIMM-N and, especially, Intel Optane DC Persistent Memory (DCPMM). Windows Server 2019 and later support both, although with nuances depending on whether they are used in memory mode or App Direct (persistent) mode.
In App Direct mode, PMEM is presented to the operating system as low-latency persistent storage. This is the recommended mode for server workloads where PMEM will serve as a cache, as storage for in-memory databases, or as an ultra-fast disk in hyperconverged clusters.
In Memory Mode, on the other hand, the system treats PMEM as if it were slower RAM, while the DRAM acts as a hot cache. This expands the host's total memory capacity without filling every slot with expensive DRAM modules. The trade-off is that, in this mode, the data is not persistent for practical purposes.
Mixed mode lets you assign a percentage of the PMEM capacity to Memory Mode and the rest to App Direct mode. For example, you could reserve 65% for memory and 35% for persistent use. Manufacturers like Intel and Lenovo often recommend testing different allocations, since needs vary greatly with the application's I/O profile.
These configurations are usually defined from the BIOS or platform management tools (such as Lenovo XClarity Essentials OneCLI or equivalent utilities from other manufacturers), and are only applied after a reboot, at which point the firmware reconfigures the PMEM regions and namespaces.
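For Intel Optane modules, a mixed-mode goal along these lines can also be proposed with the ipmctl utility; the 65/35 split is simply the example figure from above, not a recommendation:

```powershell
# Propose a goal: 65% of PMEM capacity as volatile system memory,
# the remaining 35% as App Direct (applied by firmware on next reboot)
ipmctl create -goal MemoryMode=65

# After the reboot, review the resulting provisioning and regions
ipmctl show -memoryresources
ipmctl show -region
```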
PMEM configuration on Windows Server and Azure Local
In Windows environments, PMEM administration relies on specific PowerShell cmdlets and vendor utilities such as Intel's ipmctl. The basic workflow involves checking for unused regions, creating logical PMEM disks, deciding whether they will use BTT, and then initializing and formatting them.
To create a PmemDisk that uses BTT, you can generate a special type of VHD with the .vhdpmem extension using the New-VHD cmdlet, specifying the BTT address abstraction type and a fixed size. Alternatively, you can convert an existing VHD without BTT into one with BTT using Convert-VHD, and regenerate its namespace identifier with Set-VHD to avoid conflicts when attaching both to the same virtual machine.
Once the disks are created, Initialize-Disk, New-Partition, and Format-Volume are used to prepare the NTFS volumes, optionally enabling DAX mode and setting an appropriate allocation unit size (e.g., 2 MB) to align with the needs of high-performance applications.
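Sketched with hypothetical paths and sizes, that flow might look as follows (the BTT address abstraction and the disk identifier reset are the documented mechanisms; adjust names to your environment):

```powershell
# Create a fixed-size persistent-memory VHD with BTT enabled
New-VHD -Path 'D:\VHD\log.vhdpmem' -Fixed -SizeBytes 16GB `
    -AddressAbstractionType BTT

# Convert an existing VHD without BTT into one with BTT...
Convert-VHD -Path 'D:\VHD\data.vhdpmem' `
    -DestinationPath 'D:\VHD\data-btt.vhdpmem' `
    -VHDType Fixed -AddressAbstractionType BTT

# ...and regenerate its namespace identifier so both copies
# can be attached to the same virtual machine without conflicts
Set-VHD -Path 'D:\VHD\data-btt.vhdpmem' -ResetDiskIdentifier
```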
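A possible sequence, assuming the new PmemDisk surfaces as disk number 2 and drive letter P: is free (both are placeholders):

```powershell
# Bring the new PmemDisk online and format it as a DAX-enabled
# NTFS volume with a 2 MB allocation unit
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -DriveLetter P
Format-Volume -DriveLetter P -FileSystem NTFS `
    -AllocationUnitSize 2MB -IsDAX $true
```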
Azure Local and Windows Server 2019 natively integrate PMEM with Storage Spaces Direct. The PMEM is used as a read/write cache tier or as a small dedicated area for data that particularly benefits from low latency. For most configurations, the system automatically decides which devices to use as cache and which as capacity.
If a module fails, the PMEM disks must be reprovisioned: you can remove existing disks with Remove-PmemDisk, recreate them from the freed regions with Get-PmemUnusedRegion | New-PmemDisk, and, if necessary, completely clear the labels with Initialize-PmemPhysicalDevice, accepting the data loss and starting from scratch.
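A reprovisioning sketch along those lines; note that every step below destroys data and assumes recent backups exist:

```powershell
# Remove the affected logical disk(s) -- this destroys their contents
Get-PmemDisk | Remove-PmemDisk

# Recreate disks from the regions that are now unused
Get-PmemUnusedRegion | New-PmemDisk

# Last resort: wipe the label storage area of every module
Get-PmemPhysicalDevice | Initialize-PmemPhysicalDevice
```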
PMEM in VMware vSphere: vPMem, vPMemDisk, and high availability scenarios

VMware vSphere supports persistent memory starting with version 6.7, integrating it so that virtual machines can consume it either as a virtual NVDIMM (vPMem) or as an ultra-fast virtual disk (vPMemDisk), even in vSAN-based clusters, although vSAN itself does not use PMEM as a backend.
When the host is configured in App Direct mode, the PMEM appears as a special type of local datastore. A vPMem device exposes the PMEM to the guest operating system as a byte-addressable virtual NVDIMM, ideal for systems that natively support persistent memory and can use it as ultra-low-latency storage.
A vPMemDisk, on the other hand, presents itself to the guest as a virtual SCSI disk: classic block storage whose data physically resides in the PMEM datastore. This allows even older operating systems, which do not understand NVDIMMs, to benefit from the speed without major changes.
In a cluster, vSphere reserves PMEM capacity for virtual machines when the disk is created or the virtual NVDIMM is allocated. Total PMEM consumption cannot exceed the aggregate cluster capacity, and admission control comes into play just as with other critical resources, especially in scenarios with vSphere HA.
Migrating virtual machines that use PMEM requires certain considerations: a VM with vPMem can only be moved to hosts that have PMEM resources, while a VM with vPMemDisk can be migrated to a host without PMEM if the disk is moved to a conventional datastore using Storage vMotion.
Namespace, region, and security management on server platforms
On Lenovo and other manufacturers' servers, PMEM configuration is done from system setup utilities and remote management tools like OneCLI. There you choose the memory mode (memory, mixed, App Direct), adjust the percentage of capacity that acts as system memory, and manage aspects such as interleaving between modules connected to the same processor.
Once the distribution is defined, the system automatically generates the App Direct regions, which must then be mapped to namespaces in the operating system. In Windows, the PMEM PowerShell cmdlets are used for this; in Linux, utilities like ndctl; and in VMware, the namespaces are created automatically after a reboot, detecting the regions configured by the firmware.
Beyond the basic configuration, enterprise platforms offer options for security and secure deletion: it is possible to set passphrases on the modules and run Secure Erase operations that delete all stored data, including encrypted data, which is highly recommended before returning hardware or recycling nodes.
Tools like OneCLI allow you to launch these operations from the operating system, for example with specific commands to perform a secure erase without a passphrase, or to adjust the warning thresholds for when the PMEM's internal reserve of spare cells runs low, a sign that it is advisable to back up the data and plan a replacement.
When the percentage of spare cells drops below a configurable limit, the firmware issues warnings so the administrator can run specific diagnostics (e.g., with Lenovo XClarity Provisioning Manager) and assess whether to replace the module before its internal spare capacity reaches zero.
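On the Linux side, a hedged ndctl sketch of that namespace mapping (region names depend on the firmware-defined layout):

```shell
# List the App Direct regions exposed by the firmware
ndctl list --regions

# Create an fsdax namespace on region0: a DAX-capable block device
ndctl create-namespace --region=region0 --mode=fsdax

# Or a sector-mode namespace, which adds BTT for atomic sector writes
ndctl create-namespace --region=region0 --mode=sector
```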
PMEM in Red Hat Enterprise Linux and NVDIMM as storage
Red Hat Enterprise Linux supports NVDIMMs as storage-class memory, integrating them both as pmem-type persistent memory and in modes where they act as block devices, even for installing the operating system itself.
In this context, NVDIMMs combine storage durability with latency and bandwidth close to DRAM. They are especially useful for downtime-sensitive applications that need very short restart times, or for databases and analytics engines that benefit from keeping structures in memory across restarts.
Installing RHEL on NVDIMM devices requires preparing the namespaces and labels beforehand, ensuring the system recognizes the modules as devices suitable for hosting boot and data partitions. From there, management is very similar to that of other storage subsystems, with specific tools for viewing regions and assigning them particular uses.
In general, Red Hat's philosophy is to treat PMEM as just another layer in its storage architecture, giving the administrator the option to choose whether to use it as support for critical volumes, as storage for in-memory databases, or as support for low-latency applications with direct access.
PMEM in Oracle Exadata X8M and cloud migration
Oracle Exadata X8M and later generations use persistent memory as an active storage tier above the Smart Flash Cache and the storage cell disks. Thanks to the combination of PMEM and RDMA over Converged Ethernet (RoCE), data access is achieved with latencies of under 19 microseconds.
In this architecture, the PMEM acts as an ultra-hot tier, where the most critical data and metadata for read operations and transaction commits are placed. Below it, the intelligent flash cache acts as the hot tier, and finally the disks make up the high-capacity cold tier.
The performance benefits show up clearly in AWR statistics, where service times for wait events such as single-block physical reads or log file sync are measured in microseconds. Oracle also exposes specific counters for read and write hits to the PMEM cache in views such as V$SYSSTAT and in AWR reports.
When migrating an Exadata environment of this type to AWS, keep in mind that EC2 does not currently offer native PMEM. As compensation, you can use instances with large amounts of memory to expand the SGA, together with file systems such as Amazon FSx for OpenZFS, which delivers more than one million IOPS with latencies of a few hundred microseconds, sufficient for many demanding workloads although not reaching the levels of local PMEM.
Using PMEM with SQL Server on Windows
SQL Server 2016 and, especially, SQL Server 2019 incorporate multiple in-memory features that benefit directly from persistent memory, both to accelerate data and log files and to reduce recovery times.
The setup begins with creating PMEM namespaces or disks using vendor tools (such as ipmctl in the case of Intel Optane) and the PowerShell cmdlets already mentioned (Get-PmemDisk, Get-PmemPhysicalDevice, Get-PmemUnusedRegion). Once the regions and PmemDisks are defined, NTFS volumes are initialized and formatted, with DAX support where appropriate.
In this context, the combination of BTT and DAX is especially important. From a support perspective, it is recommended that the transaction log be hosted on devices with BTT enabled to guarantee sector write atomicity, while for volumes with large pages and direct access, disabling BTT may be preferable to reduce overhead.
SQL Server benefits from PMEM by reducing latency when accessing critical data and improving the performance of I/O-intensive operations. In addition, file alignment checks (for example, with `fsutil dax queryFileAlignment`) help ensure that file offsets and sizes meet DAX requirements, maximizing performance.
When a PMEM module is replaced on a server hosting databases, the PMEM disks must be reprovisioned: delete the logical disks, recreate the regions, initialize the physical devices, and reconfigure the volumes, always relying on recent backups, as these operations involve data loss.
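A quick way to run those checks, using a placeholder drive letter and file path:

```powershell
# Inspect the volume's flags to confirm it is mounted in DAX mode
fsutil fsinfo volumeinfo P:\

# Check whether a data file's extents meet DAX alignment requirements
fsutil dax queryFileAlignment P:\SQLData\mydb.mdf
```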
Real-world use cases and benefits of persistent memory
PMEM has already been applied in a good number of business scenarios where latency and persistence are key. Among the most common are in-memory databases such as SAP HANA, big data analytics engines such as Hadoop or Spark, and high-performance virtualization platforms.
Other interesting use cases include genome sequencing, where rapid access to huge datasets accelerates analysis; training machine learning and AI models, where loading large volumes from traditional storage can be a bottleneck; real-time processing of IoT data; cybersecurity threat analysis; professional video editing and rendering; and video games seeking to minimize loading times.
The most obvious advantages are improved performance and reduced latency, but also better scalability: several terabytes of low-latency addressable memory per server can be achieved by combining DRAM and PMEM, something economically unfeasible with DRAM alone.
Furthermore, being non-volatile, PMEM offers very attractive data persistence for environments where power failures or restarts should not require recomputing or reloading large amounts of information from scratch, improving RTO and RPO targets.
In terms of cost, PMEM has historically positioned itself between DRAM and flash storage: more expensive than NVMe and 3D NAND SSDs, but significantly cheaper per gigabyte than high-capacity DRAM. In well-designed environments, this balance has allowed a lower total cost of ownership by requiring less DRAM and making better use of cheaper storage.
Limitations, compatibility and future of PMEM
It is not all advantages: persistent memory brings challenges of compatibility, cost, and complexity. Not all processors and motherboards support PMEM modules, and the firmware and operating system need to understand the operating modes, regions, and namespaces.
Furthermore, although its price has come down relative to DRAM, PMEM remains more expensive than traditional storage, so replacing the entire SSD and HDD layer with it is not viable. The sensible approach is to use it strategically, for the most latency-sensitive data tiers, or to expand system memory for workloads that truly benefit from it.
At the ecosystem level, the discontinuation of the Intel Optane line has raised doubts about the future evolution of persistent memory. However, the problems it set out to solve (extreme latencies, the memory-storage gap) remain, and we are likely to see other approaches, such as memory tiering and new generations of NVM with similar characteristics.
Meanwhile, many software technologies are already prepared to exploit PMEM when it becomes available: operating systems with DAX support, hypervisors capable of presenting vPMem and vPMemDisk, databases and in-memory cache engines optimized to work with non-volatile memory, and hardware platforms that integrate management, security, and secure erase utilities.
Persistent memory has consolidated its position as a key component of modern high-performance architectures, especially when combined with DRAM, NVMe, and low-latency networks. Although the market is still evolving and has seen its ups and downs, the concept and its use cases continue to pave the way for increasingly integrated memory and storage models.