M.2 SSD Caching Performance Boost
Using an M.2 SSD as a Cache: A Cost-Effective Way to Improve Drive Performance and Productivity
SSD caching is designed to improve the performance of systems that still rely on older hard drives, in a way that is both easy to configure and cost effective. A relatively cheap, small SSD holds a copy of the most commonly accessed data; because an SSD is much faster than a traditional hard drive, reads of cached data complete far sooner.
While SSD caching reduces load times for commonly used programs, there is one limit to the benefit. When data is already in your device's RAM, SSD caching cannot improve load times, because RAM is faster than even the fastest SSD currently on the market.
The main advantage of an SSD cache appears when you boot into Windows or run a program for the first time after a shutdown or reboot. Data in RAM is cleared every time you power down the computer, whereas data in the SSD cache persists.
With SSD caching set up and properly configured, a program only needs to run once: its data is then stored in the cache drive for future access, a process that also occurs on NAS devices. At the chipset level, SSD caching is only possible with compatible chipsets, but some motherboard manufacturers also ship software that provides SSD caching.
SSD Caching in Action
To understand how SSD caching works, look at what happens behind the scenes as your device searches for the data it needs. When a program first runs, its main .exe, DLLs and other required files are read from the hard drive and loaded into different temporary storage locations. The basic hierarchy starts with the CPU cache, then RAM, then the hard drive.
As you move down the hierarchy of storage locations, the devices get slower. Ideally, your most frequently accessed data should therefore sit as high up the list as possible: reading data from the CPU cache is faster than reading the same data from RAM, which in turn is faster than reading from a hard drive.
SSD caching adds an extra tier between your device's RAM and its hard drive. Since the SSD is faster than the hard drive, the system gains one more place to look for data before falling back to the much slower storage device.
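The tiered lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not how a real controller works: the tier names, the sample filename, and the use of dicts as stand-ins for hardware are all assumptions made for the example.

```python
# Illustrative sketch of the lookup hierarchy: check each tier from fastest
# to slowest, and promote a hit into the faster tiers for next time.
# Dicts stand in for hardware; "report.docx" is a made-up example file.

TIERS = [
    ("CPU cache", {}),                               # fastest, smallest
    ("RAM", {}),
    ("SSD cache", {}),
    ("HDD", {"report.docx": b"file contents"}),      # slowest; authoritative copy
]

def read(filename):
    """Walk the tiers in order of speed; populate faster tiers on a hit."""
    for i, (name, store) in enumerate(TIERS):
        if filename in store:
            # Copy the data into every faster tier so the next read is quicker.
            for _, faster in TIERS[:i]:
                faster[filename] = store[filename]
            return name, store[filename]
    raise FileNotFoundError(filename)

tier, _ = read("report.docx")   # first read falls all the way through
print(tier)                     # -> HDD
tier, _ = read("report.docx")   # second read is served from the top tier
print(tier)                     # -> CPU cache
```

The second call returning from the CPU cache is exactly the "one more place to look" effect: after a single slow read, every later read is served from a faster tier.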
Unfortunately, not all of your data can fit in the relatively small CPU cache, RAM or SSD cache. The exact algorithm that decides what gets cached is typically proprietary, but files larger than a few MB are generally not stored in an SSD cache. Two common policies for managing data in an SSD cache are:
- Least frequently used (LFU) caching, where the data with the fewest accesses is evicted from the cache first
- Least recently used (LRU) caching, where recently accessed data is kept and the data untouched for longest is evicted first
Flash-based caches come in several form factors: Non-Volatile Memory Express (NVMe), Serial Attached SCSI (SAS), Dual In-line Memory Module (DIMM), and PCI Express (PCIe). SSD cache software applications work with the drive hardware to boost the performance of virtual machines (VMs) and applications. Such software extends the caching features of Linux and Windows and is available from third-party software, storage, OS, and VM vendors.
SSD Caching Types
System manufacturers implement several types of SSD caching. In write-through SSD caching, the system writes data to the cache and the primary storage device at the same time, and the data is not available from the SSD cache until the host confirms the write operation. This type of caching is cheaper for manufacturers since no extra data protection is required.
The second type is write-back SSD caching, where the host confirms each data I/O block as soon as it is written to the SSD cache, before the data reaches primary storage. The data is therefore available in the SSD cache before it is even written to primary storage, giving low latency for both read and write operations.
Write-around SSD caching writes data directly to the device's primary storage, bypassing the SSD cache. The cache therefore needs a warm-up period, populating itself as the storage device responds to subsequent read requests. Initial data requests are slower, but this approach helps keep infrequently accessed data from flooding the cache.
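The three write policies can be contrasted in a short Python sketch. This is purely illustrative: real controllers operate on disk blocks with acknowledgments to the host, while here plain dicts stand in for the SSD cache and the primary storage, and the function names are assumptions for the example.

```python
# Illustrative sketch of the three SSD write policies described above.

def write_through(cache, primary, key, value):
    """Write to the cache and primary storage together; acknowledge only
    after both writes complete."""
    cache[key] = value
    primary[key] = value   # in a real system, the host is acked after this

def write_back(cache, primary, key, value):
    """Acknowledge as soon as the cache holds the block; primary storage is
    updated later by a background flush (not shown)."""
    cache[key] = value     # data is readable from the cache immediately

def write_around(cache, primary, key, value):
    """Bypass the cache entirely; the cache warms up on later reads."""
    primary[key] = value
```

After a write-through, the block exists in both places; after a write-back, only in the cache (until flushed); after a write-around, only in primary storage, which is why that policy needs a warm-up period before reads benefit.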
SSD Cache on a Synology NAS
Synology SSD Cache exploits the superior random-access performance of SSDs to boost read and write speeds without the overhead of adding more disks. Statistically, only a small portion of data is needed for most read operations; storing that frequently used data in the SSD cache creates a read buffer and also helps reduce the total cost of ownership.
For random write operations, the SSD read/write cache accelerates the performance of iSCSI LUNs and volumes. This reduces random write latency and greatly limits the impact, if any, that other data transfers have on performance.
Synology's SSD cache technology is implemented on XS, XS+ and a few Plus series devices. The SSD cache can be attached to an iSCSI LUN (at the block level) or to a single storage volume, greatly enhancing the overall performance of your NAS.
SSD Caching on QNAP NAS
QNAP implements an SSD cache based on disk I/O read caching. As applications access the system's hard drive or drives, the data they read is copied to the SSD; when the same data is required again, it is read from the SSD cache instead of the HDD.
Because an SSD has no moving mechanical parts, it delivers high-speed data transfer. So when QNAP NAS applications issue random read requests, the SSD cache steps in and significantly improves access speeds.
QNAP's SSD cache uses LRU as the default algorithm, which offers a higher hit rate but requires more CPU resources. Once the cache is full, LRU removes the least recently accessed items first.
The other algorithm available is FIFO. It requires fewer CPU resources but delivers a lower hit rate: once the cache is full, the oldest data in the cache is discarded first, regardless of how often it is accessed.
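The FIFO policy can be sketched in Python to show how it differs from LRU. This is an illustration of the general FIFO technique, not QNAP's actual implementation; the class name and capacity are assumptions for the example.

```python
from collections import OrderedDict

class FIFOCache:
    """First in, first out: evict in insertion order, ignoring later accesses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        return self.data.get(key)   # a hit does NOT change eviction order

    def put(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # discard the oldest insertion
        self.data[key] = value
```

With capacity 2, putting "a" and "b", reading "a", then putting "c" still evicts "a", because FIFO only tracks insertion order. An LRU cache in the same scenario would evict "b" instead, which is why LRU achieves a higher hit rate at the cost of the CPU bookkeeping needed to track every access.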
To attain maximum efficiency, QNAP's Qtier technology combines the speed of your SSDs with the capacity of your HDDs in a single NAS system. It automatically moves data between tiers based on access frequency, using a 12 Gb/s SAS controller. The result is better system performance even under mixed, complex workloads and applications, while still providing high-capacity storage for your cold data.
Conclusion – Improved Performance for Demanding Environments
The SSD cache feature on QNAP and Synology NAS can accelerate IOPS performance by up to 10 times while cutting latency for your storage volumes to a third. SSD caching is an excellent fit for IOPS-demanding applications such as virtualization and databases, significantly improving the quality of your workflow.