
This topic describes how to add servers or drives to Storage Spaces Direct.

Adding servers

Adding servers, often called scaling out, adds storage capacity and can improve storage performance and unlock better storage efficiency. If your deployment is hyper-converged, adding servers also provides more compute resources for your workload.

Typical deployments are simple to scale out by adding servers. There are just two steps:

  1. Run the cluster validation wizard using the Failover Cluster snap-in or with the Test-Cluster cmdlet in PowerShell (run as Administrator). Include the new server <NewNode> you wish to add.

     Test-Cluster -Node <Node>, <Node>, <Node>, <NewNode> -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"

     This confirms that the new server is running Windows Server 2016 Datacenter Edition, has joined the same Active Directory Domain Services domain as the existing servers, has all the required roles and features, and has networking properly configured.

     [!IMPORTANT] If you are re-using drives that contain old data or metadata you no longer need, clear them using Disk Management or the Reset-PhysicalDisk cmdlet. If old data or metadata is detected, the drives aren't pooled.

  2. Run the following cmdlet on the cluster to finish adding the server:

     Add-ClusterNode -Name <NewNode>

[!NOTE] Automatic pooling depends on you having only one pool. If you've circumvented the standard configuration to create multiple pools, you will need to add new drives to your preferred pool yourself using Add-PhysicalDisk.

From 2 to 3 servers: unlocking three-way mirroring

With two servers, you can only create two-way mirrored volumes (compare with distributed RAID-1). With three servers, you can create three-way mirrored volumes for better fault tolerance. We recommend using three-way mirroring whenever possible.

Two-way mirrored volumes cannot be upgraded in-place to three-way mirroring. Instead, you can create a new volume and migrate (copy, such as by using Storage Replica) your data to it, and then remove the old volume.
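For example, a minimal sketch of that migration in PowerShell, assuming an existing two-way mirrored volume named OldVolume, the default S2D* pool, and a plain file copy with Robocopy instead of Storage Replica (names, sizes, and paths are placeholders):

# Create the replacement volume with three-way mirroring (PhysicalDiskRedundancy = 2).
New-Volume -FriendlyName "NewVolume" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB -PhysicalDiskRedundancy 2

# Copy the data across; Storage Replica is another option for an online migration.
Robocopy.exe C:\ClusterStorage\OldVolume C:\ClusterStorage\NewVolume /E /COPYALL /DCOPY:DAT

# After verifying the copy, remove the old two-way mirrored volume.
Remove-VirtualDisk -FriendlyName "OldVolume"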

To begin creating three-way mirrored volumes, you have several good options. You can use whichever you prefer.

Option 1

Specify PhysicalDiskRedundancy = 2 on each new volume upon creation.

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -PhysicalDiskRedundancy 2

Option 2

Instead, you can set PhysicalDiskRedundancyDefault = 2 on the pool's ResiliencySetting object named Mirror. Then, any new mirrored volumes will automatically use three-way mirroring even if you don't specify it.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Mirror | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size>

Option 3

Set PhysicalDiskRedundancy = 2 on the StorageTier template called Capacity, and then create volumes by referencing the tier.

Set-StorageTier -FriendlyName Capacity -PhysicalDiskRedundancy 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Capacity -StorageTierSizes <Size>

From 3 to 4 servers: unlocking dual parity

With four servers, you can use dual parity, also commonly called erasure coding (compare to distributed RAID-6). This provides the same fault tolerance as three-way mirroring, but with better storage efficiency. To learn more, see Fault tolerance and storage efficiency.

If you're coming from a smaller deployment, you have several good options to begin creating dual parity volumes. You can use whichever you prefer.

Option 1

Specify PhysicalDiskRedundancy = 2 and ResiliencySettingName = Parity on each new volume upon creation.

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -PhysicalDiskRedundancy 2 -ResiliencySettingName Parity

Option 2

Set PhysicalDiskRedundancyDefault = 2 on the pool's ResiliencySetting object named Parity. Then, any new parity volumes will automatically use dual parity even if you don't specify it.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Parity | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -ResiliencySettingName Parity

With four servers, you can also begin using mirror-accelerated parity, where an individual volume is part mirror and part parity.

For this, you will need to update your StorageTier templates to have both Performance and Capacity tiers, as they would be created if you had first run Enable-ClusterS2D at four servers. Specifically, both tiers should have the MediaType of your capacity devices (such as SSD or HDD) and PhysicalDiskRedundancy = 2. The Performance tier should be ResiliencySettingName = Mirror, and the Capacity tier should be ResiliencySettingName = Parity.

Option 3

You may find it easiest to simply remove the existing tier template and create the two new ones. This will not affect any pre-existing volumes which were created by referencing the tier template: it's just a template.

Remove-StorageTier -FriendlyName Capacity

New-StorageTier -StoragePoolFriendlyName S2D* -MediaType HDD -PhysicalDiskRedundancy 2 -ResiliencySettingName Mirror -FriendlyName Performance
New-StorageTier -StoragePoolFriendlyName S2D* -MediaType HDD -PhysicalDiskRedundancy 2 -ResiliencySettingName Parity -FriendlyName Capacity

That's it! You are now ready to create mirror-accelerated parity volumes by referencing these tier templates.

Example

New-Volume -FriendlyName "Sir-Mix-A-Lot" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes <Size, Size>

Beyond 4 servers: greater parity efficiency

As you scale beyond four servers, new volumes can benefit from ever-greater parity encoding efficiency. For example, between six and seven servers, efficiency improves from 50.0% to 66.7% as it becomes possible to use Reed-Solomon 4+2 (rather than 2+2). There are no steps you need to take to begin enjoying this new efficiency; the best possible encoding is determined automatically each time you create a volume.

However, any pre-existing volumes will not be "converted" to the new, wider encoding. One good reason is that to do so would require a massive calculation affecting literally every single bit in the entire deployment. If you would like pre-existing data to become encoded at the higher efficiency, you can migrate it to new volume(s).
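To check which resiliency and encoding width an existing volume received, a hedged query of its virtual disk properties (property names as exposed by the Storage module):

Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, PhysicalDiskRedundancy, NumberOfDataCopies, NumberOfColumns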

For more details, see Fault tolerance and storage efficiency.

Adding servers when using chassis or rack fault tolerance

If your deployment uses chassis or rack fault tolerance, you must specify the chassis or rack of new servers before adding them to the cluster. This tells Storage Spaces Direct how best to distribute data to maximize fault tolerance.

  1. Create a temporary fault domain for the node by opening an elevated PowerShell session and then using the following command, where <NewNode> is the name of the new cluster node:

     New-ClusterFaultDomain -Type Node -Name <NewNode>

  2. Move this temporary fault-domain into the chassis or rack where the new server is located in the real world, as specified by <ParentName>:

     Set-ClusterFaultDomain -Name <NewNode> -Parent <ParentName>

     For more information, see Fault domain awareness in Windows Server 2016.

  3. Add the server to the cluster as described in Adding servers. When the new server joins the cluster, it's automatically associated (using its name) with the placeholder fault domain.

Adding drives

Adding drives, also known as scaling up, adds storage capacity and can improve performance. If you have available slots, you can add drives to each server to expand your storage capacity without adding servers. You can add cache drives or capacity drives independently at any time.

[!IMPORTANT] We strongly recommend that all servers have identical storage configurations.

To scale up, connect the drives and verify that Windows discovers them. They should appear in the output of the Get-PhysicalDisk cmdlet in PowerShell with their CanPool property set to True. If they show as CanPool = False, you can see why by checking their CannotPoolReason property.

Get-PhysicalDisk | Select SerialNumber, CanPool, CannotPoolReason

Within a short time, eligible drives will automatically be claimed by Storage Spaces Direct, added to the storage pool, and volumes will automatically be redistributed evenly across all the drives. At this point, you're finished and ready to extend your volumes or create new ones.

If the drives don't appear, manually scan for hardware changes. This can be done using Device Manager, under the Action menu. If they contain old data or metadata, consider reformatting them. This can be done using Disk Management or with the Reset-PhysicalDisk cmdlet.
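For example, a minimal sketch for clearing drives that won't pool because of leftover metadata; the serial number is a placeholder, and Reset-PhysicalDisk is destructive, so confirm each drive first:

# List drives that cannot be pooled and the reason why.
Get-PhysicalDisk | Where-Object CanPool -Eq $false | Select-Object FriendlyName, SerialNumber, CannotPoolReason

# Wipe only the drives you have confirmed are safe to clear.
Get-PhysicalDisk -SerialNumber <SerialNumber> | Reset-PhysicalDisk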

[!NOTE] Automatic pooling depends on you having only one pool. If you've circumvented the standard configuration to create multiple pools, you will need to add new drives to your preferred pool yourself using Add-PhysicalDisk.

Optimizing drive usage after adding drives or servers

Over time, as drives are added or removed, the distribution of data among the drives in the pool can become uneven. In some cases, this can result in certain drives becoming full while other drives in the pool have much lower consumption.

To help keep drive allocation even across the pool, Storage Spaces Direct automatically optimizes drive usage after you add drives or servers to the pool (this is a manual process for Storage Spaces systems that use Shared SAS enclosures). Optimization starts 15 minutes after you add a new drive to the pool. Pool optimization runs as a low-priority background operation, so it can take hours or days to complete, especially if you're using large hard drives.

Optimization uses two jobs - one called Optimize and one called Rebalance - and you can monitor their progress with the following command:

Get-StorageJob

You can manually optimize a storage pool with the Optimize-StoragePool cmdlet. Here's an example:

Get-StoragePool <PoolName> | Optimize-StoragePool

This article describes minimum hardware requirements for Storage Spaces Direct. For hardware requirements on Azure Stack HCI, our operating system designed for hyperconverged deployments with a connection to the cloud, see Before you deploy Azure Stack HCI: Determine hardware requirements.

For production, Microsoft recommends purchasing a validated hardware/software solution from our partners, which include deployment tools and procedures. These solutions are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. For hardware solutions, visit the Azure Stack HCI solutions website.

 Tip

Want to evaluate Storage Spaces Direct but don't have hardware? Use Hyper-V or Azure virtual machines as described in Using Storage Spaces Direct in guest virtual machine clusters.

Base requirements

Systems, components, devices, and drivers must be certified for the operating system you're using in the Windows Server Catalog. In addition, we recommend that servers and network adapters have the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium additional qualifications (AQs). There are over 1,000 components with the SDDC AQs.

The fully configured cluster (servers, networking, and storage) must pass all cluster validation tests per the wizard in Failover Cluster Manager or with the Test-Cluster cmdlet in PowerShell.
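For example, a hedged validation run from PowerShell (node names are placeholders):

Test-Cluster -Node Server01, Server02, Server03, Server04 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"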

In addition, the following requirements apply:

Servers

  • Minimum of 2 servers, maximum of 16 servers
  • Recommended that all servers be the same manufacturer and model

CPU

  • Intel Nehalem or later compatible processor; or
  • AMD EPYC or later compatible processor

Memory

  • Memory for Windows Server, VMs, and other apps or workloads; plus
  • 4 GB of RAM per terabyte (TB) of cache drive capacity on each server, for Storage Spaces Direct metadata
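For example (capacities chosen only for illustration): a server with four 1.6 TB NVMe cache drives has 6.4 TB of cache capacity, so it should reserve roughly 4 GB × 6.4 ≈ 26 GB of RAM for Storage Spaces Direct metadata, in addition to the memory needed for the OS, VMs, and other workloads.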

Boot

  • Any boot device supported by Windows Server, which now includes SATADOM
  • RAID 1 mirror is not required, but is supported for boot
  • Recommended: 200 GB minimum size

Networking

Storage Spaces Direct requires a reliable high bandwidth, low latency network connection between each node.

Minimum interconnect for small scale 2-3 node

  • 10 Gbps network interface card (NIC), or faster
  • Two or more network connections from each node recommended for redundancy and performance

Recommended interconnect for high performance, at scale, or deployments of 4+

  • NICs that are remote-direct memory access (RDMA) capable, iWARP (recommended) or RoCE
  • Two or more network connections from each node recommended for redundancy and performance
  • 25 Gbps NIC or faster

Switched or switchless node interconnects

  • Switched: Network switches must be properly configured to handle the bandwidth and networking type. If using RDMA that implements the RoCE protocol, network device and switch configuration is even more important.
  • Switchless: Nodes can be interconnected using direct connections, avoiding using a switch. It's required that every node has a direct connection with every other node of the cluster.
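For example, a 3-node switchless cluster needs 3 direct links and a 4-node cluster needs 6, since a full mesh of n nodes requires n(n-1)/2 connections.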

Drives

Storage Spaces Direct works with direct-attached SATA, SAS, NVMe, or persistent memory (PMem) drives that are physically attached to just one server each. For more help choosing drives, see the Choosing drives and Understand and deploy persistent memory articles.

  • SATA, SAS, persistent memory, and NVMe (M.2, U.2, and Add-In-Card) drives are all supported
  • 512n, 512e, and 4K native drives are all supported
  • Solid-state drives must provide power-loss protection
  • Same number and types of drives in every server – see Drive symmetry considerations
  • Cache devices must be 32 GB or larger
  • Persistent memory devices are used in block storage mode
  • When using persistent memory devices as cache devices, you must use NVMe or SSD capacity devices (you can't use HDDs)
  • If you're using HDDs to provide storage capacity, you must use storage bus caching. Storage bus caching isn't required when using all-flash deployments
  • NVMe driver is the Microsoft-provided one included in Windows (stornvme.sys)
  • Recommended: Number of capacity drives is a whole multiple of the number of cache drives
  • Recommended: Cache drives should have high write endurance: at least 3 drive-writes-per-day (DWPD) or at least 4 terabytes written (TBW) per day – see Understanding drive writes per day (DWPD), terabytes written (TBW), and the minimum recommended for Storage Spaces Direct

 Note

When using all flash drives for storage capacity, the benefits of storage pool caching will be limited. Learn more about the storage pool cache.

Here's how drives can be connected for Storage Spaces Direct:

  • Direct-attached SATA drives
  • Direct-attached NVMe drives
  • SAS host-bus adapter (HBA) with SAS drives
  • SAS host-bus adapter (HBA) with SATA drives
  • NOT SUPPORTED: RAID controller cards or SAN (Fibre Channel, iSCSI, FCoE) storage. Host-bus adapter (HBA) cards must implement simple pass-through mode for any storage devices used for Storage Spaces Direct.

Drives can be internal to the server, or in an external enclosure that is connected to just one server. SCSI Enclosure Services (SES) is required for slot mapping and identification. Each external enclosure must present a unique identifier (Unique ID).

  • Drives internal to the server
  • Drives in an external enclosure ("JBOD") connected to one server
  • NOT SUPPORTED: Shared SAS enclosures connected to multiple servers or any form of multi-path IO (MPIO) where drives are accessible by multiple paths.
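To confirm that attached enclosures expose SES and a unique ID, a hedged check (cmdlet and property names as exposed by the Storage module):

Get-StorageEnclosure | Select-Object FriendlyName, UniqueId, HealthStatus, NumberOfSlots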

Minimum number of drives (excludes boot drive)

The minimum number of capacity drives you require varies with your deployment scenario. If you're planning to use the storage pool cache, there must be at least 2 cache devices per server.

You can deploy Storage Spaces Direct on a cluster of physical servers or on virtual machine (VM) guest clusters. You can configure your Storage Spaces Direct design for performance, capacity, or balanced scenarios based on the selection of physical or virtual storage devices. Virtualized deployments take advantage of the private or public cloud's underlying storage performance and resilience. Storage Spaces Direct deployed on VM guest clusters allows you to use high-availability solutions within a virtual environment.

The following sections describe the minimum drive requirements for physical and virtual deployments.

Physical deployments

This table shows the minimum number of capacity drives by type for hardware deployments such as Azure Stack HCI version 21H2 or later, and Windows Server.

Drive type present (capacity only)    Minimum drives required (Windows Server)    Minimum drives required (Azure Stack HCI)
All persistent memory (same model)    4 persistent memory                         2 persistent memory
All NVMe (same model)                 4 NVMe                                      2 NVMe
All SSD (same model)                  4 SSD                                       2 SSD

If you're using the storage pool cache, there must be at least 2 more drives configured for the cache. The table shows the minimum numbers of drives required for both Windows Server and Azure Stack HCI deployments using 2 or more nodes.

Drive type present                   Minimum drives required
Persistent memory + NVMe or SSD      2 persistent memory + 4 NVMe or SSD
NVMe + SSD                           2 NVMe + 4 SSD
NVMe + HDD                           2 NVMe + 4 HDD
SSD + HDD                            2 SSD + 4 HDD

 Important

The storage pool cache cannot be used with Azure Stack HCI in a single node deployment.

Virtual deployment

This table shows the minimum number of drives by type for virtual deployments such as Windows Server guest VMs or Windows Server Azure Edition.

Drive type present (capacity only)    Minimum drives required
Virtual Hard Disk                     2

 Tip

To boost the performance for guest VMs when running on Azure Stack HCI or Windows Server, consider using the CSV in-memory read cache to cache unbuffered read operations.

If you're using Storage Spaces Direct in a virtual environment, you must consider:

  • Virtual disks aren't susceptible to failures like physical drives are; however, you're dependent on the performance and reliability of the public or private cloud
  • It's recommended to use a single tier of low latency / high performance storage
  • Virtual disks must be used for capacity only

Learn more about deploying Storage Spaces Direct using virtual machines and virtualized storage.

Maximum capacity

Maximums                   Windows Server 2019 or later    Windows Server 2016
Raw capacity per server    400 TB                          100 TB
Pool capacity              4 PB (4,000 TB)                 1 PB

When you take an S2D server offline for patching or other reasons, you remove not only that server's compute and memory but also a portion of the storage pool. Take care to keep your data safe and to ensure your cluster quickly returns to production-level readiness.

 

Visit Microsoft for the full description and latest information: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/maintain-servers

 

Key Steps to reboot servers:

1. Open PowerShell as Admin.

 

2. Check to make sure the virtual disks are healthy by running Get-VirtualDisk.

 

3. Run Suspend-ClusterNode -Drain to move the VMs to another node.

 

4. Run the following to cleanly put the storage into maintenance mode. At this point, writes to this node's storage are still active until step 5 has been completed.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Enable-StorageMaintenanceMode

 

5. Run the following to verify that the disks for the node are in maintenance mode. You should see "In Maintenance Mode, OK" under OperationalStatus.

Foreach ($Node in (Get-ClusterNode).Name) { $Node; Get-StorageNode -Name "$Node*" | Get-PhysicalDisk -PhysicallyConnected }

 

6. Reboot server.

 

7. Once you’re ready to put the server back into production, open PowerShell as Admin.

 

8. Run the following to take the storage out of maintenance mode and put it back into production.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Disable-StorageMaintenanceMode

 

9. A storage job will initiate in the background to repair and resync the data. To check its status, run Get-StorageJob (as Administrator). If it returns straight to the command prompt, no jobs are running. Do not reboot the next node until all of the jobs have completed.
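A minimal sketch for waiting on those jobs before moving to the next node (the polling interval is arbitrary):

# Poll until no storage repair/resync jobs remain.
while (Get-StorageJob | Where-Object JobState -ne 'Completed') {
    Get-StorageJob | Format-Table Name, JobState, PercentComplete
    Start-Sleep -Seconds 60
}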

 

10. Run Get-VirtualDisk to verify the virtual disks are healthy after the storage jobs complete. Wait until steps 9 and 10 have been completed before live migrating VMs back to this node, as the storage jobs consume system resources and can affect the response time of your applications.

 

11. Run Resume-ClusterNode -Failback Immediate to put the cluster node back into production to handle VM workloads.

 

Alternative:

The steps to reboot each server can take some time, especially the post-reboot storage resync and repair. If you are able to shut down the entire cluster, this link walks through the steps to make the entire process faster.

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/maintain-servers#how-to-update-storage-spaces-direct-nodes-offline


How to Run Cluster Validation

1. Open Failover Cluster Manager

2. Right click on the cluster and select “Validate Cluster”

3. The 'Validate a Configuration' wizard will open; select Next to continue

 

4. Select "Run only tests I select" and select Next to continue

 

5. Make sure "Storage" is unchecked. If you run cluster validation with the Storage tests selected, you will corrupt the data in your production environment

 

6. Select Next to continue and the tests will run. Once they finish, open the report to view the results.
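A hedged PowerShell equivalent, run as Administrator on a cluster node; the -Ignore parameter skips the Storage tests, matching step 5:

Test-Cluster -Ignore "Storage"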

 

 


How to Disable Cluster Quorum:

  1. Open Failover Cluster Manager
  2. Select the cluster
  3. Right click on the cluster or select 'More Actions' on the Actions panel on the right
  4. Select 'Configure Cluster Quorum Settings'
  5. This will open the Quorum Wizard; select 'Next' to continue
  6. Select the second option, 'Select the quorum witness'
  7. Next, select the last option, 'Do not configure a quorum witness', to disable the quorum witness
  8. Once the witness is removed, check Failover Cluster Manager to make sure it is gone from the 'Cluster Core Resources' section

 

How to Enable Cluster Quorum:

  1. Open Failover Cluster Manager
  2. Select the cluster
  3. Right click on the cluster or select 'More Actions' on the Actions panel on the right
  4. Select 'Configure Cluster Quorum Settings'
  5. This will open the Quorum Wizard; select 'Next' to continue
  6. Select the second option, 'Select the quorum witness'
  7. Here you can select from 3 different quorum witness types:
    1. Disk Witness: creates a witness on a disk. This is not an option for Storage Spaces Direct (S2D) because it does not work there; the option exists for Storage Spaces. It is not recommended by DataON.
    2. File Share Witness: creates a small file share that acts as the witness. We recommend hosting the witness on another cluster, server, or workstation that is not part of the cluster itself.
    3. Cloud Witness: creates a witness in the cloud. This requires an Azure account and subscription, as well as a constant internet connection for the witness to remain active.
  8. Once you select the witness type, specify the path or location where the witness will live.
  9. Confirm the witness and select 'Next'.
  10. Once it is configured, you will reach the summary page; select 'Finish' to exit.
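A hedged PowerShell equivalent for the same quorum changes (share paths and Azure account details are placeholders):

# Remove the witness (node majority only).
Set-ClusterQuorum -NoWitness

# File share witness hosted outside the cluster.
Set-ClusterQuorum -FileShareWitness "\\AnotherServer\WitnessShare"

# Cloud witness backed by an Azure storage account.
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"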

 

 


There are three high-level steps for migration of VMs from an existing cluster to a new cluster.

  1. Set up new servers and storage and configure iSCSI access to the new storage.
  2. Set up Hyper-V and failover clustering.
  3. Set up the CSV storage for the new cluster.

There are two methods by which you can migrate VMs from an existing cluster to a new cluster:

 

Option 1
You could transfer roles to the new cluster, but this does not move any of the actual data for the VMs. If you are using iSCSI with different LUNs and mount points, the process of migrating the roles and VMs is more involved. The easier process is to simply remove each of the Hyper-V VM roles from Failover Cluster Manager and use the native Hyper-V Manager to "move" the actual VMs to the new cluster. Both processes can be done while the VMs are running.

 

Option 2
Use the built-in Microsoft "Shared Nothing Live Migration" to migrate VMs to the new cluster. For the live migration to work between servers, you must initiate the move from the source server. Otherwise, you need to employ Kerberos authentication in the Hyper-V settings > Live Migration > Advanced settings and in the Delegation > Trust properties of the computer object. It is highly recommended to start this process with non-production VMs to ensure it works smoothly.
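A minimal PowerShell sketch of that shared nothing live migration, run from the source host (VM name, destination host, and path are placeholders):

# Move the VM and all of its storage to a node of the new cluster.
Move-VM -Name "TestVM01" -DestinationHost "NewClusterNode1" -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\TestVM01"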

 

Here are the simple step-by-step instructions how to perform this migration.

 

Step 1: Remove Role

Open Failover Cluster Manager and remove the virtual machine role for the VM you want to move. This does not remove the VM; it simply removes it from the cluster manager, so it is no longer highly available.

 

Step 2: Hyper-V Manager Move

Open Hyper-V Manager on the server where the VM resides. Right-click the VM and select Move.

 

Step 3: Select Type of Move

Select the type of move you want to perform. The top option allows you to move the entire VM (config, snapshots, VHDs, etc.).

 

Step 4: Destination Server Name

Specify the name of your destination server. This is not the cluster name but one of the nodes in the cluster.

 

Step 5: What to Move

Now select what you want to move. Again, the top option moves all the VM files necessary to run the VM on a different server.

 

Step 6: Choose folder and move

Browse and select the CSV volume (already created as part of the cluster setup process). We recommend creating a folder with the name of the server inside the CSV for easier identification. Click Next to perform the move.

 

Step 7: Network Check

When the move initially begins, it pre-checks a variety of things to make sure the move will be successful. One of those items is the virtual switch the VM is connected to. If your virtual switches have the same name between your clusters, then you will not receive this prompt. In this example, there was a prompt for the virtual switch for the VM to connect with due to the name change as shown here. If you have snapshots of the VM, these will also move but you will be prompted for the switch to connect with, if the names are different.

 

Step 8: Finishing up

The tool will move and preserve the snapshots which you cannot do with the export option. This is one of the main reasons the Move feature is recommended.

 

 



Below is a step-by-step guide on how to add a third node to an existing 2-node Storage Spaces Direct (S2D) cluster with the production workload still running.

 

1.) Ensure that the cluster is configured with a witness.

2.) Pause/Drain a node in the cluster.
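For example, a hedged one-liner for this step (the node name is a placeholder):

Suspend-ClusterNode -Name "<Node Name>" -Drain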

3.) Place the paused node's drives into a storage maintenance mode.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Enable-StorageMaintenanceMode

4.) Physically add the 3rd node into the 2-node cluster configuration by cabling the three servers like the image below:

DataON 2U Platforms:

DataON 1U Platform:

5.) Make sure the node that is being added to the cluster has the same firmware and drivers installed as the two servers in the existing 2-node cluster. Also install the necessary Windows features and configure the network correctly. Before adding the third node to the cluster, run a cluster validation.

 

$S1 = "Node1"

$S2 = "Node2"

$S3 = "Node3"

 

$nodes = ($S1,$S2,$S3)

 

Test-Cluster -node $nodes -Include "Storage Spaces Direct",Inventory,Network, "System Configuration" -ReportName C:\Windows\cluster\Reports\report

 

6.) With a clean validation report, proceed to add the third node into the S2D cluster.

Add-ClusterNode -Name <NewNodeName>

7.) Disable the storage maintenance mode on the drives of the paused node; Resume the node to the cluster.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Disable-StorageMaintenanceMode
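And to resume the paused node afterward (a hedged one-liner; the node name is a placeholder):

Resume-ClusterNode -Name "<Node Name>" -Failback Immediate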

8.) By adding a third node to the S2D cluster, 3-way-mirror resiliency is unlocked; Run the following command to configure this setting on the existing storage pool.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Mirror | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

9.) Allow the S2D optimization job, as well as any other storage jobs, to finish before creating a new 3-way mirror virtual disk/CSV.

10.) With the increased capacity from joining the third node to the cluster, you may now create a new volume (3-way-mirror).
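For example, a minimal sketch of that new three-way mirror volume (the name and size are placeholders):

New-Volume -FriendlyName "Volume03" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 2TB -PhysicalDiskRedundancy 2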

 

NOTE: The virtual disks that were previously created will retain 2-way mirror resiliency.


Introduction

The value of business data has grown to the point where companies simply cannot ignore the importance of safe data storage. Modern technologies offer a variety of methods to ensure highly available and fault-tolerant storage. One of these techniques is synchronous mirroring – an approach in which a source storage has an exact replica (one or more). The data is considered written only when the primary storage receives a signal that all secondary copies have been created. This document takes a look at two-way and three-way synchronous mirroring to better understand the benefits each of them offers.

Cost-efficiency

2-way synchronous mirroring

This configuration requires storage redundancy on the nodes. The use of RAID10* is recommended. 2-node HA ensures the synchronous mirroring of data between two storage nodes. Since each storage node has only 50% usable capacity with RAID10, synchronous mirroring halves that again, leaving only 25% of the raw storage capacity usable.

* RAID10 use is recommended for a HDD setup. In case of RAID5 use in a HDD setup, there arises the risk of disk failure while rebuilding RAID5. RAID6 in a HDD setup gives low write performance. At the same time, RAID5 and RAID6 configurations can be used for SSD setups due to a high tolerance to physical failures and faster performance of the latter.


With 2-way synchronous mirroring, usable capacity is 25% – ¼ of storage space

3-way synchronous mirroring

A non-redundant RAID0 configuration provides the highest level of performance, so RAID0 can be used here for performance. Synchronous mirroring between 3 storage nodes, each configured with RAID0, results in 33% usable capacity and thus provides a higher level of storage utilization compared to 2-way synchronous mirroring.
While this configuration may appear not to require storage redundancy on the nodes (since 3-way synchronous mirroring already ensures the required level of data protection), you should evaluate the risk of disk failures and the probability of data loss (if disks fail on all 3 nodes). Given this, make sure that the data is also protected by backup applications according to the 3-2-1 backup rule.

With 3-way synchronous mirroring, usable capacity is 33% – 1/3 of storage space

As a result, 3-way synchronous mirroring increases the storage utilization rate. The cost-efficiency of this configuration differs depending on the medium type – spindle or flash.

Increased reliability

2-way synchronous mirroring

2-way synchronous mirroring provides 99.99% uptime. The outage of one storage node creates a single point of failure and immediately puts the system into a degraded performance mode. The cache is flushed and switched from write-back to write-through mode on the running node. The number of MPIO paths is halved because one node is down. Consequently, storage performance falls.

With 2-way synchronous mirroring, there is a risk of downtime

3-way synchronous mirroring

3-way synchronous mirroring provides 99.9999% uptime. No single point of failure occurs when one node of a 3-node storage cluster goes down. In such a situation, storage performance falls by up to 33% because the system loses 1/3 of the MPIO paths. Performance-critical applications usually can continue running in an ordinary way. The 3-node HA configuration tolerates a double fault and retains the availability of service.

With 3-way synchronous mirroring, constant system uptime is ensured

Higher performance

Storage performance is influenced by a number of factors, including I/O policy, RAID level, and cache policy. The effect of these factors is described below:

MPIO paths

2-way synchronous mirroring

With the Round Robin/Least Queue Depth policy, I/Os are processed up to two times faster compared to a single-node configuration.

3-way synchronous mirroring

Owing to the Round Robin/Least Queue Depth policy, I/O throughput rises by a factor of 3 compared to single-node storage.

As a result, performance is increased by up to 50% compared to a 2-node configuration.

RAID10 vs RAID0

2-way synchronous mirroring

This configuration requires extra redundancy for data protection on the storage nodes themselves. This redundancy can be provided through the use of RAID. RAID10* is recommended for a HDD setup as it ensures mirroring between the disk stripes and provides fast reads and writes. However, storage utilization is considerably low because the same data is mirrored and stored on two stripes of the RAID.

* Use of RAID5, RAID6 for a HDD setup is possible but not recommended because of the high probability of a disk failure while rebuilding RAID5, and the low write performance of RAID6. At the same time, RAID5 and RAID6 configurations can be used for SSD setups due to a high tolerance to physical failures and faster performance of the latter.

3-way synchronous mirroring

RAID0 can be used for performance. Both reads and writes are faster here as the system reads the data from all disks simultaneously. With RAID0 you should evaluate the potential risks of disks failing and potential data loss probability (if disks failed on all 3 nodes).

Cache policy

2-way synchronous mirroring

If one node fails, the cache is flushed and turned from write-back to write-through mode and the system immediately switches to a degraded performance mode on reads.

With 2-way synchronous mirroring, cache is flushed and turned to write-through mode causing critical performance degradation

3-way synchronous mirroring

If one node goes down, the system downgrades from a 3-node to a 2-node cluster and continues operation with minimal performance degradation (about 33%), due to the absence of one node and the reduced number of MPIO paths.

With 3-way synchronous mirroring, cache policy remains write-back that results in minor performance degradation

Important note: Even the highest possible level of redundancy and reliability does not ensure 100% protection against data loss, e.g. due to malicious actions or disaster. So, nothing compares to a good old backup that substantially increases the chances your data is in place.

Conclusion

With synchronous mirroring, admins can choose either two-way mirroring – to ensure basic data protection and high availability – or three-way mirroring – to increase overall system reliability while also balancing cost-efficiency and performance.


In this article, I’ll show you how to use the SCCM feature update option to perform a Windows 10 22H2 upgrade. We will use Windows Servicing in SCCM to upgrade Windows 10 devices to version 22H2 via the enablement package.

Windows 10, version 22H2, also known as the Windows 10 2022 Update, is available for eligible devices running Windows 10, versions 20H2 and newer. Microsoft released Windows 10, version 22H2 (KB5015684) on 18th October 2022, and it is an enablement package.

For home users running Windows 10 version 20H2 and later, the recommended way to get the Windows 10 22H2 is via Windows Update. Go to Start > Settings > Windows Update and run Check for Updates. Select the Feature update to Windows 10, version 22H2 and install it.

When you want to upgrade multiple Windows 10 devices to version 22H2, Configuration Manager is the best tool. It simplifies the way you deploy and manage updates and saves a lot of time. With Configuration Manager, you can select the “Feature Update to Windows 10 Version 22H2 via Enablement Package” and deploy it to a set of devices.

Ways to upgrade to Windows 10 22H2

There are multiple ways that you can use to upgrade to Windows 10 version 22H2.

  • Get the Windows 10 22H2 update via Windows Update.
  • Use Servicing Plans to upgrade eligible Windows 10 devices to version 22H2.
  • Upgrade Windows 10 21H2 to Windows 10 22H2 using ConfigMgr Windows Servicing feature.
  • Deploy the Windows 10 22H2 update using the SCCM task sequence.
  • Use Intune to upgrade to Windows 10 22H2 for WUfB-managed devices.

Of all the above methods, we will use the Configuration Manager feature update option to deploy the Windows 10 22H2 enablement package. This is the easiest and quickest way I can think of to update Windows 10 to version 22H2.

Recommended: Upgrade to Windows 10 21H2 using SCCM | ConfigMgr

Can I use SCCM Servicing Plans to Upgrade to Windows 10 version 22H2?

Yes, you can use the Servicing Plans in SCCM to upgrade computers running Windows 10 to version 22H2. With Configuration Manager servicing plans, you can ensure that all Windows 10 systems are kept up to date when new builds are released. Servicing plans are automatic deployment rules (ADRs) that can help you upgrade Windows 10 to version 22H2. Take a look at the detailed guide on Windows 10 Servicing Plans in SCCM.

If you are unsure whether to upgrade to Windows 10 22H2 using a Servicing Plan or a feature update deployment, I would advise choosing the latter. There are only a few steps involved in deploying a Windows 10 22H2 feature update, making it much simpler than a servicing plan. You get to make the final decision.

Related Article: Windows 11 22H2 upgrade using SCCM | ConfigMgr

About Windows 10 version 22H2

Windows 10 22H2 is an enablement package and this is good news for admins. The enablement package is a great option for installing a scoped feature update like Windows 10, version 22H2. You can upgrade from version 2004, 20H2, 21H1, or 21H2 to version 22H2 with a single restart, reducing update downtime. Since Windows 10 22H2 is an enablement package, the devices currently on Windows 10, version 20H2 or newer will have a fast installation experience because the update will install like a monthly update.

Prerequisites

With Configuration Manager, you need a few things configured before you deploy feature updates to computers. If you have deployed feature updates before, most of the prerequisites are already taken care of. If your SCCM setup is new, you may have to configure them once before deploying the upgrades.

Listed below are all the prerequisites required for Windows 10 22H2 upgrade using SCCM.

Download Windows 10 22H2 Enablement Package

Let's look at the steps for downloading the Windows 10 22H2 enablement package using SCCM. First, ensure you have synchronized the updates in Configuration Manager. You can either download the feature update first and then deploy it, or deploy it directly. I prefer to download it first, let the content get distributed to the distribution points, and then deploy it.

Go to Software Library > Overview > Windows 10 Servicing > All Windows 10 Updates. Look for the update “Feature Update to Windows 10 Version 22H2 x64-based systems 2022-10 via Enablement Package“. You can also search the updates with article ID 5015684.

Download Windows 10 22H2 Enablement Package

If you have both x64-based and x86-based systems, you must download both the x64-based and the x86-based Feature Update to Windows 10 Version 22H2. In my lab, all my Windows 10 VMs are running the Windows 10 64-bit OS, so I am going to download only Feature Update to Windows 10 Version 22H2 x64-based systems 2022-10 via Enablement Package.
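If you prefer PowerShell, a hedged way to locate the update (assumes the ConfigurationManager module is loaded and you have switched to the site drive; -Fast skips loading lazy properties):

Get-CMSoftwareUpdate -Fast -Name "Feature Update to Windows 10 Version 22H2*" | Select-Object LocalizedDisplayName, ArticleID, IsDeployed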

Create Deployment Package

On the Deployment package window, specify the deployment package for Windows 10 22H2 upgrade files. This deployment package that you create will contain the software update files that will be deployed to the clients.

Select Create a new deployment package and specify the package name as “Windows 10 22H2 Deployment Package.” Add a brief description and specify the package source – a shared folder where you want to download the updates. Enabling the binary differential replication is optional. Click Next.

Download Windows 10 22H2 Enablement Package

Click on Add button and select the distribution point servers to which you would like to distribute the Windows 10 22H2 upgrade files. Click Next.

Download Windows 10 22H2 Enablement Package

On the Distribution Settings page, you can specify the general distribution settings for the deployment package. Click Next.

Specify the Deployment Package Distribution Settings

The download location page lets you select where to download the updates from. For example, you can download the software updates directly from the internet or from a location on your network. Select the option "Download software updates from the internet". Click Next.

Choose the Download Location for Updates

Specify the update language for the products. The default language is English US. Click Next.

Select Update Languages for Products

Review the Windows 10 22H2 deployment package settings and close the Download software updates wizard.

Close Download Software Updates Wizard

Note: In case you encounter issues while downloading the Windows 10 22H2 enablement package update, review the PatchDownloader.log located in the %temp% folder. This log file will log all the errors that occur during the download of Windows 10 feature updates.

In the above step, the files required to upgrade Windows 10 to version 22H2 are downloaded to the specified location. If you browse to the download location, you will notice that the Windows 10 22H2 enablement package contains only one update, Windows10.0-KB5015684-x64. The size of the Windows 10 22H2 enablement package is 32 KB. This one file is sufficient to upgrade Windows 10 computers to version 22H2.

Windows 10 22H2 Enablement Package

Deploy Windows 10 22H2 Feature Update using SCCM

In this step, using the Windows Servicing, we will perform the Windows 10 22H2 feature update deployment using Configuration Manager. You can deploy the feature update to a single client or to multiple devices using device collection.

If you are deploying the Windows 10 22H2 update for the first time, I suggest testing the upgrade on a Pilot device collection. A few devices that are intended to test the upgrade to 22H2 should be included in this device collection. Use the following guide to create a device collection for Windows 10 computers in SCCM.

Recommended Article: Upgrade Windows 10 21H1 using SCCM | ConfigMgr

Use the following steps to deploy the Windows 10 22H2 feature update using SCCM:

  • In the ConfigMgr console, navigate to Software Library\Overview\Windows Servicing\All Windows Feature Updates.
  • Right-click Feature Update to Windows 10 Version 22H2 x64-based systems 2022-10 via Enablement Package update, and select Deploy.

Deploy Windows 10 22H2 Feature Update using SCCM

On the General page, enter the details for Windows 10 22H2 feature update deployment.

  • Deployment Name: Enter a suitable name such as Deploy Windows 10 22H2 Enablement Package.
  • Description: Although it’s optional, you may add a brief description.
  • Software Update Group: This is created automatically and is visible under Software Update Groups in the Configuration Manager console.
  • Collection: Click Browse and select a device collection consisting of a few pilot devices selected for testing the Windows 10 22H2 upgrade.

Click the Next button to continue.

Deploy Windows 10 22H2 Feature Update using SCCM

On the Deployment Settings page, specify the type of deployment. Choose whether you want to make the Windows 10 22H2 upgrade available for users in the Software Center or deploy it as required. Read more about SCCM Available vs. Required to know the differences. Click Next.

Deploy Windows 10 22H2 Feature Update using SCCM

The Scheduling page lets you configure schedule details for the deployment. Configure the following settings:

  • Schedule Evaluation | Time Based on: Select Client Local Time.
  • Software Available Time: As soon as possible.

If you want to make the Windows 10 22H2 upgrade available at a specific date and time, select Specific time and define them. Define the installation deadline to ensure the upgrade happens within the defined period.

Click Next to continue.

Deploy Windows 10 22H2 Feature Update using SCCM

Specify the following user experience settings for the deployment.

  • User Notifications: Display in Software Center and show all notifications
  • Commit changes at the deadline or during a maintenance window (requires restarts): Yes
  • If any update in this deployment requires a system restart, run updates deployment evaluation cycle after restart: No.

Click Next.

Deploy Windows 10 22H2 Feature Update using SCCM

On the Download settings page, specify the download settings for the current deployment:

  • Client computers can use distribution points from a neighbor boundary group: No
  • Download and install software updates from the fallback content source location: Yes

Click Next to continue.

Deploy Windows 10 22H2 Feature Update using SCCM

Review the deployment settings of Windows 10 22H2 on Summary page and click Next. On the Completion window, click Close.

Deploy Windows 10 22H2 Feature Update using SCCM

Windows 10 22H2 Servicing: End-User Experience for Upgrade

After you have deployed Windows 10 22H2 enablement package update using Windows Servicing, it’s time to test the upgrade installation on end computers. The end-user experience is similar to other feature upgrades.

On the client computer, first launch the Software Center. Once the Software Center opens, click the Updates tab and select the Feature Update to Windows 10 Version 22H2 x64-based systems 2022-10 via Enablement Package update. Click on the Install button.

Windows 10 22H2 Upgrade Servicing: End-User Experience

Confirm that you want to upgrade the operating system on this computer. It will take only a few minutes to upgrade the operating system as this is an enablement package. Note that this is an in-place upgrade to version 22H2, and the setup automatically migrates your apps, data, and settings. Click on the Install button to begin the Windows 10 22H2 upgrade.

Windows 10 22H2 Upgrade Servicing: End-User Experience

The Feature Update to Windows 10 Version 22H2 x64-based systems 2022-10 via Enablement Package is installed in less than 2 minutes. Click on Restart.

Restart the Computer to Complete Windows 10 22H2 Upgrade

On the confirmation box, click Restart.

Restart the Computer to Complete Windows 10 22H2 Upgrade

The Windows 10 22H2 upgrade is complete. Log in to the Windows 10 computers and click Start > About My PC. Under Windows Specifications, you will find the build number and version of Windows 10. In the below screenshot, we see the Windows version is “22H2” and the OS build number is 19045.2006.
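Alternatively, a hedged check from PowerShell that reads the version information from the registry on the upgraded computer:

Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' | Select-Object DisplayVersion, CurrentBuildNumber, UBR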

Verify the Windows 10 22H2 Version and Build Number

After you have upgraded multiple Windows 10 computers to version 22H2, you can create a device collection in SCCM. Refer to this article to learn how to create Windows 10 22H2 device collection using WQL Query. If you want to get to Windows 11, you can upgrade to Windows 11 using different methods.
