Storage Tiers, a new feature in Windows Server 2012 R2, allow SSD and hard drive storage to be used within the same storage pool. If you haven’t already read Jose Barreto’s step-by-step post on this subject, it is a great source for links about Storage Tiers as well as a fantastic place to find examples of how to use PowerShell cmdlets to implement Storage Tiers with Storage Spaces. In this episode, I’m going to show you how to implement Storage Tiers using mostly the UI.
If you’re not familiar with Storage Tiers, the idea is to mix Solid State Drive (SSD) storage with conventional hard disks (HDD). Storage Tiers then keeps the more frequently accessed data on the SSD media, with both types of media used as block-based storage for the same virtual disk: the best of both types of storage. That’s a pretty high-level summary…and a pretty awesome concept. Previously, in my basement lab I had two different pools: one for each type of storage.
If you implement tiers using PowerShell, some calculations may be required…and it can look a bit complicated if you’re just trying it out. Granted, there are quite a few screenshots below and this is a lengthy post, but the process using the UI is fairly easy. I made one diversion into PowerShell to show how to define MediaTypes for storage devices if they’re not detected automatically. The technique I use for that is very similar to Jose’s example, but it is another variation to show that you’re not limited to just one approach.
If you’ve read my recent post about expanding a storage pool, you may have a better understanding of how Storage Spaces uses columns. When you configure Storage Tiers through the UI, the wizard uses the default number of columns. With some quick and easy PowerShell during the creation process, you can change the column defaults for a specific storage pool.
Remember: If you have difficulty reading any of the screenshots below, you can obtain a full size image by clicking on them.
Creating Tiered Storage
1. The first step involves attaching the devices you intend to use. You must have at least one SSD and one HDD attached. For this example, I chose 4 SSD devices and nine 1 TB hard drives. This is indeed an odd arrangement, but I’ve chosen it with a purpose: to show the layout of a defined virtual disk, and to show that Storage Spaces will use what it can from this arrangement and leave the remaining space for other uses. In this example, I’ve connected the devices and can see them within Server Manager.
Figure 1: Server Manager view of attached disks
2. Next, drop into Storage Pools within Server Manager to see the Primordial pool of available disks.
Figure 2: The Primordial Pool
3. Right-click the Primordial pool, and create a new pool.
Figure 3: Create a new pool
4. Give the new pool a name.
Figure 4: Naming the pool
5. In my example, I’m choosing 2 of the disks to be hot spares. The only reason I chose the specific devices below is because they’re the top drives in each external eSATA cabinet and are easy for me to keep track of this way. 😉 In this section of the wizard, you will also choose the devices to include in the pool. You may click the checkbox on the line with column labels to select all available devices should you so choose.
Figure 5: Assign device uses
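If you prefer PowerShell for this step, hot spares can also be designated after the pool is created. A sketch, assuming a pool named Pool1 and an example disk name (substitute your own):

```powershell
# View the physical disks in the pool and their current usage
Get-StoragePool -FriendlyName Pool1 | Get-PhysicalDisk |
    Select-Object FriendlyName, Usage, MediaType

# Designate a specific disk as a hot spare (disk name is an example)
Set-PhysicalDisk -FriendlyName PhysicalDisk5 -Usage HotSpare
```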
Figure 6: Manually setting MediaType with PowerShell
Notice that the SSD devices were detected as SSD media. However, in this case the physical hard drives show as Unknown. If yours are not detected correctly, as in this example, you can set the MediaType manually using PowerShell. We will proceed for now and correct this later; leaving the majority of the devices as Unknown will result in an error in a later step. Next, confirm your choices and proceed.
Figure 7: Confirm Selections
Figure 8: Successful Pool Creation
6. If you proceed from here and attempt to create a virtual disk, you may receive the following status message. Also note that the new option to create tiered storage is grayed out. This is because the devices in the pool don’t currently meet the minimum requirements due to the Unknown MediaType of my physical disk storage.
Figure 9: Can’t proceed if storage type needs to be defined
7. The above problem is an easy fix. If your storage was properly detected as HDD, you can skip this step. Otherwise, open a PowerShell prompt and use commands like those in the example below:
Figure 10: Assigning MediaType to Unknown disks.
In PowerShell, the disks show as Unspecified. Since all of my physical disks in the pool show as Unspecified, I’m simply using the Where-Object cmdlet (which can be abbreviated with a question mark) to filter the results and act only on the devices that need definition…setting them to HDD.
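A minimal sketch of that filter, assuming a pool named Pool1 (substitute your own pool name):

```powershell
# Find the pool's disks that were not detected correctly and mark them as HDD
Get-StoragePool -FriendlyName Pool1 | Get-PhysicalDisk |
    Where-Object MediaType -eq "Unspecified" |
    Set-PhysicalDisk -MediaType HDD

# Verify that every disk in the pool now shows SSD or HDD
Get-StoragePool -FriendlyName Pool1 | Get-PhysicalDisk |
    Select-Object FriendlyName, MediaType
```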
8. This is also a good time to override the default number of columns to be used within this particular storage pool. By default, as you’ll note from the first command below, the pool is defined to use automatic selection. I am planning to use mirroring on the virtual disk to be created and want to use two columns. If I were going to create a simple volume, I would want 4 columns, as I have 4 SSD drives. You may be thinking that I’m crazy for mirroring SSDs, but SSDs are not exempt from failure.
Figure 11: Changing resiliency defaults for a storage pool
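The commands look roughly like the following; Pool1 is a placeholder for your own pool name:

```powershell
# Show the current resiliency defaults for the pool (columns are Auto by default)
Get-StoragePool -FriendlyName Pool1 | Get-ResiliencySetting |
    Select-Object Name, NumberOfColumns, NumberOfDataCopies

# Set the Mirror resiliency setting for this pool to default to two columns
Set-ResiliencySetting -StoragePool (Get-StoragePool -FriendlyName Pool1) `
    -Name Mirror -NumberOfColumnsDefault 2
```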
9. Getting back to the UI…now is a good time to refresh Server Manager. The UI needs to be refreshed to pick up the changes just made, or subsequent steps may yield errors.
Figure 12: Refresh Server Manager
10. Next, we will create the virtual disk. You don’t have to create the tiers first in PowerShell because the UI will do this for you by using the available checkbox to enable tiering.
Figure 13: Create the virtual disk
Figure 14: Use the checkbox to enable Storage Tiers
11. Select the layout for the storage. In my example, I want to use mirroring.
Figure 15: Choose resiliency level
12. Select two-way mirror.
Figure 16: Two-way mirror
13. When using tiers, you must use fixed provisioning.
Figure 17: Fixed Provisioning
14. Here you get to view the size of the tiers for the virtual disk. Both tiers will be put together to make the resulting virtual disk. In this example, I’m going with the maximum for each.
Figure 18: Selecting SSD and HDD tier size
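If you want to see the same maximum sizes from PowerShell, Get-StorageTierSupportedSize reports the supported range for a tier under a given resiliency setting. The tier name below is an assumption (the wizard names tiers for you; check yours with Get-StorageTier):

```powershell
# Check the supported size range for the SSD tier with mirror resiliency
Get-StorageTierSupportedSize -FriendlyName SSDTier -ResiliencySettingName Mirror |
    Select-Object TierSizeMin, TierSizeMax, TierSizeDivisor
```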
15. Reviewing the selections below, you see that out of 9 TB of available capacity, the resulting virtual disk is only 3.6 TB. Remember that we used all the available SSD space (which was smaller than the available HDD space to begin with), and that by choosing mirroring we’re really using 7.2 TB for a 3.6 TB volume. Any space not used will remain available in the pool. The maximum size of 3.6 TB for this virtual disk is due to the overall disk layout.
Figure 19: Confirm selections
Figure 20: Completion of virtual disk creation
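For reference, the whole wizard sequence above could be done in PowerShell instead. This is a sketch with assumed names and sizes (Pool1, TieredSpace, and the tier sizes are examples), not an exact replay of my configuration:

```powershell
# Create the two tiers in the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName Pool1 `
    -FriendlyName SSDTier -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName Pool1 `
    -FriendlyName HDDTier -MediaType HDD

# Create a two-way mirrored, tiered virtual disk (tiers require fixed provisioning)
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName TieredSpace `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 3.5TB `
    -ResiliencySettingName Mirror -ProvisioningType Fixed
```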
16. The next step is to create a volume on the virtual disk. Note that because we’re using tiers, you must use all available space on the virtual disk just created.
Figure 21: Create a volume
Figure 22: Choosing max size in New Volume Wizard
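The volume-creation steps can likewise be scripted. A sketch, assuming a tiered virtual disk named TieredSpace (substitute the name of your virtual disk):

```powershell
# Initialize the new virtual disk, create a max-size partition, and format it
Get-VirtualDisk -FriendlyName TieredSpace | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "TieredData"
```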
17. Assign a drive letter for the volume.
Figure 23: Assign drive letter.
18. Specify a volume label if you choose.
Figure 24: Volume Label
19. Confirm choices.
Figure 25: Final Confirmation
Figure 26: Last step may take a while to complete
20. Once everything has completed, if you look at the storage pool again, you will see that space remains available in the pool, even though we chose the maximum size for the virtual disk and volume. Based on the configuration options I chose for the virtual disk, Storage Spaces created the largest virtual disk it could given the available disk layout and the columns needed. Essentially, the space on the two remaining 1 TB drives is left over for whatever I might want to use it for.
Figure 27: Remaining Space
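You can confirm the remaining capacity from PowerShell as well (Pool1 is an example pool name):

```powershell
# Compare total pool capacity to what has been allocated
Get-StoragePool -FriendlyName Pool1 |
    Select-Object FriendlyName, Size, AllocatedSize,
        @{Name="FreeSpace"; Expression={$_.Size - $_.AllocatedSize}}
```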
21. After the wizard completes, the volume may be used for data. You may or may not know that there are scheduled tasks associated with Storage Tiers. Initially they are not enabled, but after the first tiered storage space is created, they are enabled automatically. After successful completion, those tasks should appear enabled as follows.
Figure 28: Task Scheduler jobs.
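If you’d rather check from PowerShell than from Task Scheduler, something like the following should list the tiering task (the task path shown is the one I’ve seen on Windows Server 2012 R2; verify it on your system):

```powershell
# List the Storage Tiers Management tasks and their state
Get-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" |
    Select-Object TaskName, State
```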
I hope this helps to illustrate how to create tiered storage with Storage Spaces using Windows Server 2012 R2. Having the option to use different classes of storage within the same virtual disk is a great feature to have as an option for your storage needs.
For More Information about Storage Spaces:
Prior Posts about Storage Spaces
Jose Barreto’s Storage Tiering Step-By-Step with PowerShell
TechNet Reference for Get-VirtualDisk
TechNet Reference for Get-StoragePool
Until next time!
But with Windows Server 2012 R2, scheduled for release later this year, Microsoft has improved Storage Spaces by adding the ability to use SSDs for automated tiering and/or write-back caching. It might be because I've seen a half-dozen "software-defined storage" packages in recent years and have become more open to the idea, but I find Storage Spaces a bit more impressive now. Let's take a look at the new functionality.
Microsoft beefed up Storage Spaces’ strengths, including the software RAID engine, which, like a Compellent or 3Par array, works by spreading the data stripes and parity information across all the drives in the storage space. The company added double parity using a Microsoft Research-developed algorithm that rebuilds with less I/O than standard Reed-Solomon RAID-6. Microsoft also added support for new SAS JBODs with expanders and enclosure services, making Windows Server aware of enclosure events and temperatures.
The automated tiering feature uses the SSDs and HDDs in a Storage Space as a pool. It then tracks the access frequency of data chunks at a 1-Mbyte granularity so it can dynamically promote and demote chunks based on their I/O “temperature.” Tiering is a scheduled Windows task that by default runs at 1 a.m. every day. Administrators can adjust the schedule and set limits on the amount of the SSD tier that can be replaced in each scheduled tiering adjustment.
In addition to the automatic tiering, administrators can pin high-I/O or otherwise latency-sensitive files, like VDI “gold images,” to the SSD tier without having to dedicate volumes for them. The tiering traffic is only sent to disks when the queue for that disk is one or less, limiting the performance impact of a tiering operation. However, like any other daily tiering system, it will be most effective with repeatable workloads.
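Pinning is exposed through the Set-FileStorageTier cmdlet. A sketch, with a hypothetical file path and tier name:

```powershell
# Pin a latency-sensitive file (for example, a VDI gold image) to the SSD tier
Set-FileStorageTier -FilePath "D:\VDI\GoldImage.vhdx" `
    -DesiredStorageTierFriendlyName "SSDTier"
```

The pin takes full effect the next time the tiering optimization task runs.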
The write-back caching feature creates a separate write-back cache for each accelerated volume from available SSDs in the Storage Spaces pool. The cache is intended to be stored on an SSD that’s shared in an external SAS JBOD, making the cache persistent across server failures in a Windows cluster, including Hyper-V and clusters using Cluster Shared Volumes (CSV). Microsoft has been careful to promote the write-back cache as a solution for short write bursts, which leads us to believe that they’re only supporting a limited write-cache size in the initial release.
Conspicuously missing is any form of read caching. When we asked the Microsoft folks about this, they pointed at the various RAM caches in Windows, but those are very small compared to a modern SSD read cache. I found it interesting that all the ISVs--such as FlashSoft, VeloBit and Intel--decided to build read caches, but Microsoft chose to do the trickier write-back caching and automated tiering.
Storage Spaces is intended as a replacement for traditional SAN storage, with the Windows Server directly addressing disk drives, SSDs and HDDs. While it may be theoretically possible to expose SSD and HDD logical drives from a SAN array to a Windows server, and to use Storage Spaces to create a tiered pool from those LUNs, this configuration is not supported by Microsoft. That leaves space in the marketplace for caching ISVs as an acceleration tool for SAN arrays.
Storage Spaces may turn out to be a perfectly good way to build a server cluster and storage. I’m afraid that, as good as it may be, it has two strikes against it:
1. It’s for shared SAS JBODs, not pooled internal server disks, which is the fashionable new software-defined storage model. Those SAS enclosures still cost some money.
2. Longtime storage administrators think of software RAID as a technology they’ve outgrown. Unless it’s in a shiny new SDS technology wrapper, they’re not going to trust it. Frankly, for these folks, the new version of Windows isn’t that shiny of a wrapper.
Luckily, there are as many Microsoft loyalists as there are haters, and some of them will fire up Storage Spaces. I’d like to hear about your Storage Spaces experience--good, bad and especially ugly. Share your thoughts in the comment section below.