
Error (12711)
VMM cannot complete the WMI operation on the server (HPVCluster) because of an error: [MSCluster_Resource.Name=&] The cluster resource could not be found.
The cluster resource could not be found (0x138F)

Recommended Action
Resolve the issue and then try the operation again.

The cluster resource could not be found.

To resolve this issue, log on to the Hyper-V node and run the following:

# Replace VMNAME with the name of the VM that produced the error code
get-clusterresource | where {$_.ownergroup -match "VMNAME" -and $_.resourcetype.name -eq 'virtual machine configuration'} | Update-ClusterVirtualMachineConfiguration

After this, you can click Repair -> Ignore in SCVMM and then refresh.


Symptoms

How can you tell that expiration is approaching? The following warning appears in the Hyper-V-VMMS event log:

Log Name: Microsoft-Windows-Hyper-V-VMMS-Admin
Source: Microsoft-Windows-Hyper-V-VMMS
Event ID: 12510
Task Category: None
Level: Warning
User: SYSTEM
Description: The certificate used for server authentication will expire within 30 days. After the certificate expires, remote access to virtual machines will not be available. Renew or recreate the certificate.

 

Actions

1. Delete the certificate on each VMM and host server.
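A minimal sketch of one way to do this, assuming the agent's self-signed certificate sits in the local machine Personal store and uses the usual SCVMM_CERTIFICATE_KEY_CONTAINER naming; verify what the filter matches before deleting anything:

# Hypothetical cleanup -- confirm this matches only the VMM agent certificate first
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*SCVMM_CERTIFICATE_KEY_CONTAINER*" } |
    Remove-Item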

2. Re-register the certificate with the PowerShell below.

$Credential = Get-Credential
Get-SCVMMManagedComputer -ComputerName "vmm.contoso.com" | Register-SCVMMManagedComputer -Credential $Credential

3. Verify that the certificate has been created.

4. Verify the connection:
Get-SCVMMManagedComputer | ft Name, StateString, RoleString,State,VersionStateString,AgentVersion,UpdatedDate,IsFullyCached,MostRecentTaskIfLocal

5. Verify the connection in the SCVMM Jobs window.


It's been a while with no updates, and it's not that I haven't been working hard; I have been doing a lot of work directly against the APIs of systems like WHMCS and NetBox, which have extremely low appeal to anyone not working in the service provider space, so I haven't been adding it all to the blog. If anyone wants me to add this stuff, please just get in touch and I will put it on.

I was recently asked to create a function that would allow you to set the IP address of a virtual machine from the host in a Hyper-V environment.

Getting this working for Windows was pretty easy:

function Set-VMIp {
    param(
        $VMhost,
        $VMname,
        $Mask,
        $GateW,
        $IPaddress
    )

    Invoke-Command -ComputerName $VMhost -ArgumentList $IPaddress, $Mask, $GateW, $VMname -ScriptBlock {

        [string]$VMname = $args[3]

        # Management service that exposes SetGuestNetworkAdapterConfiguration
        $VMManServ = Get-WmiObject -Namespace root\virtualization\v2 -Class Msvm_VirtualSystemManagementService

        # Locate the VM by name
        $vm = Get-WmiObject -Namespace 'root\virtualization\v2' -Class 'Msvm_ComputerSystem' |
            Where-Object { $_.ElementName -eq $VMname }

        # Realized (running) system settings for the VM
        $vmSettings = $vm.GetRelated('Msvm_VirtualSystemSettingData') |
            Where-Object { $_.VirtualSystemType -eq 'Microsoft:Hyper-V:System:Realized' }

        # Synthetic network adapters and their guest configuration object
        $nwAdapters = $vmSettings.GetRelated('Msvm_SyntheticEthernetPortSettingData')
        $ipstuff = $nwAdapters.GetRelated('Msvm_GuestNetworkAdapterConfiguration')

        # Static addressing pulled from the arguments passed to Invoke-Command
        $ipstuff.DHCPEnabled = $false
        $ipstuff.DNSServers = "8.8.8.8"
        $ipstuff.IPAddresses = $args[0]
        $ipstuff.Subnets = $args[1]
        $ipstuff.DefaultGateways = $args[2]

        # Push the configuration into the guest via the integration services
        $setIP = $VMManServ.SetGuestNetworkAdapterConfiguration($vm, $ipstuff.GetText(1))
    }
}
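A hypothetical call would look like this (host, VM name, and addressing are placeholders):

Set-VMIp -VMhost 'HV01' -VMname 'TestVM' -IPaddress '192.168.1.50' -Mask '255.255.255.0' -GateW '192.168.1.1'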

I can't really claim this as all my own work; it's based on a blog post by a Microsoft employee, head of product development or something or other. Unfortunately, I can't seem to find the link right now to give proper credit. You don't really need to see it, though; his post was pretty hardcore and mine is much easier, I promise.


This topic describes how to add servers or drives to Storage Spaces Direct.

Adding servers

Adding servers, often called scaling out, adds storage capacity and can improve storage performance and unlock better storage efficiency. If your deployment is hyper-converged, adding servers also provides more compute resources for your workload.

Typical deployments are simple to scale out by adding servers. There are just two steps:

  1. Run the cluster validation wizard using the Failover Cluster snap-in or with the Test-Cluster cmdlet in PowerShell (run as Administrator). Include the new server <NewNode> you wish to add.
    Test-Cluster -Node <Node>, <Node>, <Node>, <NewNode> -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
    
    This confirms that the new server is running Windows Server 2016 Datacenter Edition, has joined the same Active Directory Domain Services domain as the existing servers, has all the required roles and features, and has networking properly configured.
    
    Important: If you are re-using drives that contain old data or metadata you no longer need, clear them using Disk Management or the Reset-PhysicalDisk cmdlet. If old data or metadata is detected, the drives aren't pooled.
  2. Run the following cmdlet on the cluster to finish adding the server:
    Add-ClusterNode -Name <NewNode>

 Note

Automatic pooling depends on you having only one pool. If you've circumvented the standard configuration to create multiple pools, you will need to add new drives to your preferred pool yourself using Add-PhysicalDisk.
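For example, a minimal sketch of claiming poolable drives into a specific pool (the pool name is a placeholder):

Add-PhysicalDisk -StoragePoolFriendlyName "PreferredPool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)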

From 2 to 3 servers: unlocking three-way mirroring

With two servers, you can only create two-way mirrored volumes (compare with distributed RAID-1). With three servers, you can create three-way mirrored volumes for better fault tolerance. We recommend using three-way mirroring whenever possible.

Two-way mirrored volumes cannot be upgraded in-place to three-way mirroring. Instead, you can create a new volume and migrate (copy, such as by using Storage Replica) your data to it, and then remove the old volume.

To begin creating three-way mirrored volumes, you have several good options. You can use whichever you prefer.

Option 1

Specify PhysicalDiskRedundancy = 2 on each new volume upon creation.

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -PhysicalDiskRedundancy 2

Option 2

Instead, you can set PhysicalDiskRedundancyDefault = 2 on the pool's ResiliencySetting object named Mirror. Then, any new mirrored volumes will automatically use three-way mirroring even if you don't specify it.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Mirror | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size>

Option 3

Set PhysicalDiskRedundancy = 2 on the StorageTier template called Capacity, and then create volumes by referencing the tier.

Set-StorageTier -FriendlyName Capacity -PhysicalDiskRedundancy 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Capacity -StorageTierSizes <Size>

From 3 to 4 servers: unlocking dual parity

With four servers, you can use dual parity, also commonly called erasure coding (compare to distributed RAID-6). This provides the same fault tolerance as three-way mirroring, but with better storage efficiency. To learn more, see Fault tolerance and storage efficiency.

If you're coming from a smaller deployment, you have several good options to begin creating dual parity volumes. You can use whichever you prefer.

Option 1

Specify PhysicalDiskRedundancy = 2 and ResiliencySettingName = Parity on each new volume upon creation.

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -PhysicalDiskRedundancy 2 -ResiliencySettingName Parity

Option 2

Set PhysicalDiskRedundancyDefault = 2 on the pool's ResiliencySetting object named Parity. Then, any new parity volumes will automatically use dual parity even if you don't specify it.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Parity | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

New-Volume -FriendlyName <Name> -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size <Size> -ResiliencySettingName Parity

With four servers, you can also begin using mirror-accelerated parity, where an individual volume is part mirror and part parity.

For this, you will need to update your StorageTier templates to have both Performance and Capacity tiers, as they would be created if you had first run Enable-ClusterS2D at four servers. Specifically, both tiers should have the MediaType of your capacity devices (such as SSD or HDD) and PhysicalDiskRedundancy = 2. The Performance tier should be ResiliencySettingName = Mirror, and the Capacity tier should be ResiliencySettingName = Parity.

Option 3

You may find it easiest to simply remove the existing tier template and create the two new ones. This will not affect any pre-existing volumes that were created by referencing the tier template: it's just a template.

Remove-StorageTier -FriendlyName Capacity

New-StorageTier -StoragePoolFriendlyName S2D* -MediaType HDD -PhysicalDiskRedundancy 2 -ResiliencySettingName Mirror -FriendlyName Performance
New-StorageTier -StoragePoolFriendlyName S2D* -MediaType HDD -PhysicalDiskRedundancy 2 -ResiliencySettingName Parity -FriendlyName Capacity

That's it! You are now ready to create mirror-accelerated parity volumes by referencing these tier templates.

Example

New-Volume -FriendlyName "Sir-Mix-A-Lot" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes <Size, Size>

Beyond 4 servers: greater parity efficiency

As you scale beyond four servers, new volumes can benefit from ever-greater parity encoding efficiency. For example, between six and seven servers, efficiency improves from 50.0% to 66.7% as it becomes possible to use Reed-Solomon 4+2 (rather than 2+2). There are no steps you need to take to begin enjoying this new efficiency; the best possible encoding is determined automatically each time you create a volume.

However, any pre-existing volumes will not be "converted" to the new, wider encoding. One good reason is that to do so would require a massive calculation affecting literally every single bit in the entire deployment. If you would like pre-existing data to become encoded at the higher efficiency, you can migrate it to new volume(s).

For more details, see Fault tolerance and storage efficiency.

Adding servers when using chassis or rack fault tolerance

If your deployment uses chassis or rack fault tolerance, you must specify the chassis or rack of new servers before adding them to the cluster. This tells Storage Spaces Direct how best to distribute data to maximize fault tolerance.

  1. Create a temporary fault domain for the node by opening an elevated PowerShell session and then using the following command, where <NewNode> is the name of the new cluster node:
    New-ClusterFaultDomain -Type Node -Name <NewNode>
    
  2. Move this temporary fault domain into the chassis or rack where the new server is located in the real world, as specified by <ParentName>:
    Set-ClusterFaultDomain -Name <NewNode> -Parent <ParentName>
    
    For more information, see Fault domain awareness in Windows Server 2016.
  3. Add the server to the cluster as described in Adding servers. When the new server joins the cluster, it's automatically associated (using its name) with the placeholder fault domain.

Adding drives

Adding drives, also known as scaling up, adds storage capacity and can improve performance. If you have available slots, you can add drives to each server to expand your storage capacity without adding servers. You can add cache drives or capacity drives independently at any time.

 Important

We strongly recommend that all servers have identical storage configurations.

To scale up, connect the drives and verify that Windows discovers them. They should appear in the output of the Get-PhysicalDisk cmdlet in PowerShell with their CanPool property set to True. If they show as CanPool = False, you can see why by checking their CannotPoolReason property.

Get-PhysicalDisk | Select SerialNumber, CanPool, CannotPoolReason

Within a short time, eligible drives will automatically be claimed by Storage Spaces Direct, added to the storage pool, and volumes will automatically be redistributed evenly across all the drives. At this point, you're finished and ready to extend your volumes or create new ones.

If the drives don't appear, manually scan for hardware changes. This can be done using Device Manager, under the Action menu. If they contain old data or metadata, consider reformatting them. This can be done using Disk Management or with the Reset-PhysicalDisk cmdlet.

 Note

Automatic pooling depends on you having only one pool. If you've circumvented the standard configuration to create multiple pools, you will need to add new drives to your preferred pool yourself using Add-PhysicalDisk.

Optimizing drive usage after adding drives or servers

Over time, as drives are added or removed, the distribution of data among the drives in the pool can become uneven. In some cases, this can result in certain drives becoming full while other drives in the pool have much lower consumption.

To help keep drive allocation even across the pool, Storage Spaces Direct automatically optimizes drive usage after you add drives or servers to the pool (this is a manual process for Storage Spaces systems that use Shared SAS enclosures). Optimization starts 15 minutes after you add a new drive to the pool. Pool optimization runs as a low-priority background operation, so it can take hours or days to complete, especially if you're using large hard drives.

Optimization uses two jobs - one called Optimize and one called Rebalance - and you can monitor their progress with the following command:

Get-StorageJob

You can manually optimize a storage pool with the Optimize-StoragePool cmdlet. Here's an example:

Get-StoragePool <PoolName> | Optimize-StoragePool

Taking an S2D server offline for patching or other reasons removes not only that server's compute and memory but also a portion of the storage pool. Care must be taken to keep your data safe and to ensure the cluster quickly returns to production-level readiness.

 

Visit Microsoft for the full description and latest information: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/maintain-servers

 

Key Steps to reboot servers:

1. Open PowerShell as Admin.

 

2. Check to make sure the virtual disks are healthy by running Get-VirtualDisk.
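A quick way to surface anything unhealthy (an empty result is what you want here):

Get-VirtualDisk | Where-Object { $_.HealthStatus -ne 'Healthy' }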

 

3. Run Suspend-ClusterNode -Drain to move the VMs to another node.
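For example, with a hypothetical node name:

Suspend-ClusterNode -Name "Node01" -Drain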

 

4. Run the following to cleanly put the storage into maintenance mode. At this point, writes to this node's storage are still active until step 5 has been completed.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Enable-StorageMaintenanceMode

 

5. Run the following to verify that the disks for the node are in maintenance mode. You should see "In Maintenance Mode, OK" under Operational Status.

Foreach ($Node in (Get-ClusterNode).Name) { $Node; Get-StorageNode -Name "$Node*" | Get-PhysicalDisk -PhysicallyConnected }

 

6. Reboot the server.

 

7. Once you’re ready to put the server back into production, open PowerShell as Admin.

 

8. Run the following to put the storage back into production.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Disable-StorageMaintenanceMode

 

9. A storage job will initiate in the background to repair and resync the data. To check on its status, run Get-StorageJob (as Admin). If it returns straight to the command prompt, no jobs are running. Do not reboot the next node until all of the jobs have completed.

 

10. Run Get-VirtualDisk to verify the virtual disks are healthy after the storage jobs complete. Wait until steps 9 and 10 have been completed before live migrating VMs back to this node, as storage jobs consume system resources and can affect the response time of your applications.

 

11. Run Resume-ClusterNode -Failback Immediate to put the cluster node back into production to handle VM workloads.

 

Alternative:

The reboot steps can take some time for each server, especially the post-reboot storage resync and repair. If you have the ability to shut down the entire cluster, the link below walks through the steps to make the entire process faster.

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/maintain-servers#how-to-update-storage-spaces-direct-nodes-offline


How to Run Cluster Validation

1. Open Failover Cluster Manager

2. Right click on the cluster and select “Validate Cluster”

3. The ‘Validate a Configuration’ wizard will open; select Next to continue

 

4. Select “Run only tests I select” and select Next to continue

 

5. Make sure “Storage” is unchecked. If you run cluster validation with the storage tests enabled, you will corrupt the data in your production environment

 

6. Select Next to continue and the tests will run. Once they finish, open the report to view the results.
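The equivalent storage-free validation from PowerShell might look like this (a sketch with placeholder node names; the -Ignore parameter skips the listed test category):

Test-Cluster -Node Node1, Node2 -Ignore Storage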

 

 


How to Disable Cluster Quorum:

  1. Open Failover Cluster Manager
  2. Select the cluster
  3. Right click on the cluster or select ‘More Actions’ in the Actions panel on the right
  4. Select ‘Configure Cluster Quorum Settings’
  5. This will open the Quorum Wizard; select ‘Next’ to continue
  6. Select the second option, ‘Select the quorum witness’
  7. Next, select the last option, ‘Do not configure a quorum witness’, to disable the quorum witness
  8. Once the witness is removed, check Failover Cluster Manager to make sure it no longer appears under ‘Cluster Core Resources’
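A one-line PowerShell equivalent, as a sketch run from a cluster node:

Set-ClusterQuorum -NoWitness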

 

How to Enable Cluster Quorum:

  1. Open Failover Cluster Manager
  2. Select the cluster
  3. Right click on the cluster or select ‘More Actions’ in the Actions panel on the right
  4. Select ‘Configure Cluster Quorum Settings’
  5. This will open the Quorum Wizard; select ‘Next’ to continue
  6. Select the second option, ‘Select the quorum witness’
  7. Here you can choose from three different quorum witness types:
    1. Disk Witness: creates a witness on a disk. This is not an option for Storage Spaces Direct (S2D) because it does not work there; the option exists for Storage Spaces. It is not recommended by DataON.
    2. File Share Witness: creates a small file share that acts as the witness. It is recommended to place the witness on another cluster, server, or workstation that is not part of the cluster itself.
    3. Cloud Witness: creates a witness in the cloud. This requires an Azure account and subscription, and a constant internet connection for the witness to remain active.

        

  8. Once you select the witness, select the path where the witness will lie.
  9. Confirm the witness and select ‘Next’.
  10. Once it is configured, you will reach the summary page; select ‘Finish’ to exit.
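For a file share witness, the PowerShell equivalent is a one-liner (the share path is a placeholder):

Set-ClusterQuorum -FileShareWitness \\witness-server\witness-share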

 

 


There are three high-level steps for migration of VMs from an existing cluster to a new cluster.

  1. Setup new servers and storage and configure iSCSI access to new storage.
  2. Setup Hyper-V and failover clustering.
  3. Setup the CSV storage for the new cluster.

There are two methods by which you can migrate VMs from an existing cluster to a new cluster:

 

Option 1
You could transfer roles to the new cluster, but this does not move any of the actual data for the VMs. If you are using iSCSI with different LUNs and mount points, the process of migrating the roles and VMs is more involved. The easier process is to simply remove each of the Hyper-V VM roles from Failover Cluster Manager and use the native Hyper-V Manager to “move” the actual VMs to the new cluster. Both processes can be done while the VMs are running.

 

Option 2
Use the built-in Microsoft “Shared Nothing Live Migration” to migrate VMs to the new cluster. For the live migration to work between servers, you must initiate the move from the source server. Otherwise, you need to employ Kerberos authentication in Hyper-V settings > Live Migration > Advanced settings and in the Delegation > Trust properties of the computer object. It is highly recommended to start this process on non-production VMs first to ensure the process works smoothly!
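A hypothetical shared-nothing move run from the source host (VM name, destination host, and path are placeholders):

Move-VM -Name "TestVM" -DestinationHost "NewNode1" -IncludeStorage -DestinationStoragePath "C:\ClusterStorage\Volume1\TestVM"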

 

Here are simple step-by-step instructions for performing this migration.

 

Step 1: Remove Role

Open Failover Cluster Manager and remove the virtual machine role for the VM you want to move. This does not remove the VM; it simply removes it from the cluster manager, so it is no longer highly available.

 

Step 2: Hyper-V Manager Move

Open Hyper-V Manager on the server where the VM resides. Right-click the VM and select Move.

 

Step 3: Select Type of Move

Select the type of move you want to perform. The top option allows you to move the entire VM (config, snapshots, VHDs, etc.).

 

Step 4: Destination Server Name

Specify the name of your destination server. This is not the cluster name but one of the nodes in the cluster.

 

Step 5: What to Move

Now select what you want to move. Again, the top option moves all the VM files necessary to run the VM on a different server.

 

Step 6: Choose folder and move

Browse and select the CSV volume (already created as part of the cluster setup process). We recommend creating a folder with the name of the server inside the CSV for easier identification. Click Next to perform the move.

 

Step 7: Network Check

When the move begins, it pre-checks a variety of things to make sure the move will be successful. One of those items is the virtual switch the VM is connected to. If your virtual switches have the same name between your clusters, you will not receive this prompt. In this example, there was a prompt to choose the virtual switch for the VM to connect to because the name changed, as shown here. If you have snapshots of the VM, these will also move, but you will be prompted for the switch to connect to if the names are different.

 

Step 8: Finishing up

The tool will move and preserve the snapshots, which you cannot do with the export option. This is one of the main reasons the Move feature is recommended.

 

 

www.dataonstorage.com | 1-888-725-8588 | sales@dataonstorage.com 

Copyright © 2020 DataON. All Rights Reserved. Specifications may change without notice. DataON is not responsible for photographic or typographical errors. DataON, the DataON logo, MUST, and the MUST logo are trademarks of DataON in the United States and certain other countries. Other company, product, or services names may be trademarks or service marks of others.


Below is a step-by-step guide on how to add a third node to an existing 2-node Storage Spaces Direct (S2D) cluster with the production workload still running.

 

1.) Ensure that the cluster is configured with a witness.

2.) Pause/Drain a node in the cluster.

3.) Place the paused node's drives into a storage maintenance mode.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Enable-StorageMaintenanceMode

4.) Physically add the third node to the 2-node cluster configuration by cabling the three servers as shown in the images below:

DataON 2U Platforms:

DataON 1U Platform:

5.) Make sure the node that is being added to the cluster has the same firmware and drivers installed as the two servers in the existing 2-node cluster. Also install the necessary Windows features and configure the network correctly. Before adding the third node to the cluster, run a cluster validation.

 

$S1 = "Node1"

$S2 = "Node2"

$S3 = "Node3"

 

$nodes = ($S1,$S2,$S3)

 

Test-Cluster -node $nodes -Include "Storage Spaces Direct",Inventory,Network, "System Configuration" -ReportName C:\Windows\cluster\Reports\report

 

6.) With a clean validation report, proceed to add the third node into the S2D cluster.

Add-ClusterNode -Name 'NewNodeName'

7.) Disable storage maintenance mode on the drives of the paused node, then resume the node in the cluster.

Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq "<Node Name>"} | Disable-StorageMaintenanceMode

8.) Adding a third node to the S2D cluster unlocks 3-way-mirror resiliency; run the following command to make it the default on the existing storage pool.

Get-StoragePool S2D* | Get-ResiliencySetting -Name Mirror | Set-ResiliencySetting -PhysicalDiskRedundancyDefault 2

9.) Allow the S2D optimization job, as well as any other storage jobs, to finish before creating a new 3-way-mirror virtual disk/CSV.

10.) With the increased capacity from joining the third node to the cluster, you may now create a new 3-way-mirror volume, as sketched below.
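With the pool default set in step 8, a new mirror volume picks up three-way mirroring automatically; a minimal sketch (friendly name and size are placeholders):

New-Volume -FriendlyName "ThreeWayVolume" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -Size 1TB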

 

NOTE: Virtual disks that were previously created will remain at 2-way-mirror resiliency.
