Channel: Virtual Machine Manager – Clustering forum
Viewing all 545 articles

Dynamic optimization not working


Hi,

I'm trying to enable Dynamic Optimization (DO) on a cluster, and I have no idea why it isn't optimizing. It worked previously.

The settings are: move VMs when RAM is below 12% and CPU usage is over 15%.

And is there any log to view the migrations?
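For reference, VMM records optimization moves as jobs in the Jobs workspace, which can also be queried from the VMM shell. A hedged sketch (the cluster name and the job-name filter are assumptions, not exact VMM strings):

```powershell
# Sketch only: assumes the VMM PowerShell module is loaded and connected.
# Show the Dynamic Optimization thresholds configured per host group:
Get-SCDynamicOptimizationConfiguration

# List recent jobs and filter for migration jobs (name filter is a guess):
Get-SCJob | Where-Object { $_.Name -like '*igrat*' } |
    Sort-Object StartTime -Descending |
    Select-Object Name, Status, StartTime, EndTime -First 20

# DO can also be triggered on demand to test the configuration:
# Start-SCDynamicOptimization -VMHostCluster (Get-SCVMHostCluster -Name 'MyCluster')
```

Running the optimization manually and then inspecting the resulting job is one way to see why nothing is being moved.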


SCVMM disappearing VM


Dear all,

I have a very strange problem with SCVMM 2012. The hosts are two Hyper-V 2012 servers in cluster mode. The problem is that when one domain administrator tries to create a VM, I can see in the logs that the machine is created, followed immediately by "remove a resource" and "remove a VM deployment configuration", and the machine is never created. Another domain admin can create machines without any problem. I have checked that domain admins are administrators in Virtual Machine Manager. Keep in mind that he is able to create the machine directly from the Failover Cluster console. Do you have any idea why this may happen?

regards

pantos

Live Migration (not Storage Migration) of a VM with shared VHDX fails in VMM but succeeds in FCM


I have a 2-node Hyper-V cluster that has been up and running for 6 months; it is the management cluster for my SCVMM instance. On it I have a WS2012R2 file server cluster consisting of two nodes (guest VMs), each with one OS disk, plus 3 shared VHDX files. I am using CSVs on Fibre Channel to store my VMs.

From Failover Cluster Manager I can live migrate the nodes between the two hosts. But when I try to use SCVMM to live migrate a node, I get to the "Select host" page of the Migrate VM Wizard, the ratings are grayed out, and the Rating Explanation states: "The virtual machine (MYVMNAME) contains a VHDX marked as shared storage. It cannot migrate to a new storage location." With the target VM selected in VMM, "Migrate Storage" is grayed out and cannot be chosen, so I chose "Migrate Virtual Machine". Again, the same operation succeeds in FCM.

Has anyone experienced this or heard of it? I did some searching online but didn't find anything.


Need help on Hyper-V 2012 for configuring pass-through disks


Hello guys,

I have 7 Windows Server 2012 hypervisors running on bare metal, all in a cluster, and I have also configured an SCVMM 2012 server. I created 10 VMs. Now I want to make guest clusters of 2 nodes each and assign LUNs to my VMs.

The storage team has assigned all the LUNs to my physical servers; when I go to Disk Management on the physical servers, I see all the LUNs in the offline state (by default).

My question is: how do I assign a LUN to my VM? What are the steps for this? I will also be making 2-node clusters of VMs.
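One common way to attach a host-visible LUN to a VM as a pass-through disk is sketched below; the VM name and disk number are placeholders, and the disk must remain offline on the host:

```powershell
# Sketch: attach host disk 5 to a VM as a pass-through disk.
# Run on the Hyper-V host that owns the VM; names/numbers are hypothetical.
Set-Disk -Number 5 -IsOffline $true            # pass-through disks must be offline on the host
Add-VMHardDiskDrive -VMName 'GUEST-NODE1' `
    -ControllerType SCSI -ControllerNumber 0 `
    -DiskNumber 5                              # attaches the physical disk, not a VHD
```

Note that for a 2-node guest cluster on Windows Server 2012 (which predates shared VHDX), the shared storage is usually presented to the guests via in-guest iSCSI or virtual Fibre Channel rather than pass-through disks, since a pass-through disk can only be attached to one VM.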

Regards,

Amol Sutar

Changing Hyper-V host and cluster virtual IP addresses to new subnet/VLAN


I have a 2 node Hyper-V 2012 R2 failover cluster, managed by System Center Virtual Machine Manager 2012 R2, and I would like to change the IP addresses of the hosts and the cluster, in order to move them to a new subnet and VLAN. The existing and new subnets are able to route to each other so all hosts will still be able to communicate throughout the parts of the process where they may be on separate subnets. There is also a dedicated cluster heartbeat network on its own subnet and VLAN that I am not altering in any way.

The 2 hosts are configured with 4 nics in a team, with dedicated virtual interfaces for each of the following:
-Live Migration
-Cluster Heartbeating
-Host management/general traffic (the cluster virtual IP address is also on the same subnet as these interfaces).

It is the host management/general traffic addresses that I want to change. The interfaces were created and configured with the Add-VMNetworkAdapter, New-NetIPAddress and Set-VMNetworkAdapterVlan commands.

Please advise if the following process is correct:
1) Evacuate all the VMs from the first host to be changed and put it in maintenance mode.
2) Use Rename-VMNetworkAdapter to change the name of the interface (the current name refers to the VLAN it's on)
3) Use Set-NetIPAddress to change the IP address and gateway of the interface as appropriate
4) Use Set-VMNetworkAdapterVlan to set the VLAN ID
5) Take the host out of maintenance mode and move all VMs off the other host
6) Repeat above steps on the other host
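Steps 2 to 4 for one host can be sketched as follows; all names, addresses and VLAN IDs below are hypothetical placeholders, and note that changing an existing address is done here by removing and re-adding it rather than with Set-NetIPAddress:

```powershell
# Sketch for one host; every name/address/VLAN ID is a placeholder.
$oldNic = 'Mgmt-VLAN10'            # current vNIC name (refers to the old VLAN)
$newNic = 'Mgmt-VLAN20'
Rename-VMNetworkAdapter -ManagementOS -Name $oldNic -NewName $newNic

# Replace the old address and default gateway with the new ones:
$alias = "vEthernet ($newNic)"
Remove-NetIPAddress -InterfaceAlias $alias -IPAddress 10.0.10.21 -Confirm:$false
Remove-NetRoute -InterfaceAlias $alias -DestinationPrefix 0.0.0.0/0 -Confirm:$false
New-NetIPAddress -InterfaceAlias $alias -IPAddress 10.0.20.21 `
    -PrefixLength 24 -DefaultGateway 10.0.20.1

# Retag the management vNIC onto the new VLAN:
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $newNic -Access -VlanId 20
```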

I know that I will then need to change the cluster virtual IP address, but I have no idea how to do this or where to look for that setting. Please advise!
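For reference, the cluster virtual IP is held by the cluster's core "IP Address" resource (visible under Cluster Core Resources in Failover Cluster Manager). A hedged sketch of changing it from PowerShell; the resource name and values are placeholders, and taking the resource offline is briefly disruptive:

```powershell
# Sketch: update the cluster's core IP Address resource.
# Resource name and address values are hypothetical; check with Get-ClusterResource.
$res = Get-ClusterResource -Name 'Cluster IP Address'
$res | Set-ClusterParameter -Multiple @{
    'Address'    = '10.0.20.10'
    'SubnetMask' = '255.255.255.0'
}
# The change takes effect when the resource is cycled:
# Stop-ClusterResource $res; Start-ClusterResource $res
```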

Cheers.

Convert from pass-through to dynamic VHDX using cmdlets - PowerShell failure


Hi

I am automating the procedure of converting from pass-through disks to dynamically expanding vhdx files, but have some problems when using the PowerShell command "Convert-SCVirtualDiskDrive" to do so.
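For context, the shell-side conversion looks roughly like the sketch below; the VM name, the property used to find the pass-through drive, and the exact parameter set of Convert-SCVirtualDiskDrive are assumptions here and should be verified with Get-Help:

```powershell
# Sketch only: selection logic and parameters are assumptions,
# to be checked against Get-Help Convert-SCVirtualDiskDrive -Full.
$vm  = Get-SCVirtualMachine -Name 'MYVM'
$vdd = Get-SCVirtualDiskDrive -VM $vm |
       Where-Object { $_.PassThroughDisk -ne $null }   # the pass-through drive (property name assumed)
Convert-SCVirtualDiskDrive -VirtualDiskDrive $vdd -Dynamic
```

One practical way to close the gap between the two jobs is to use the View Script button on the wizard's Summary page in the console and diff the script the GUI generates against your own.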

Scenarios:

Doing it from the SCVMM console: everything works perfectly. The VHDX is created, and when I check Failover Cluster Manager everything looks fine.

Doing it from the shell: at first everything looks good from VMM, but when I check Failover Cluster Manager I can see that no changes were made (the pass-through disk is still the one attached to the SCSI controller). When I select Refresh in the VMM console, SCVMM then also shows that it is not okay (of course, since it should reflect what is visible from Failover Cluster Manager).

The disk is created, though, so the only problem is that the disk isn't attached to the VM.

I also see that doing it from the console (GUI) and doing it from the shell generates two slightly different jobs in VMM.

(Screenshots of the two jobs, one from the GUI and one from the shell, were attached here.)

I can't find Failover Cluster Manager after creating a Hyper-V cluster in SCVMM 2012 R2

I've created a Hyper-V cluster in SCVMM 2012 R2, but I can't find Failover Cluster Manager to move storage resources. All hosts show the Hyper-V role and the Failover Clustering feature installed. The disk witness in quorum is good, the same as for the other CSV LUN. Please help me, Microsoft. Thank you.
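If the console simply isn't installed on the machine being used for management, Failover Cluster Manager ships as an RSAT feature and can be added with one command; a sketch:

```powershell
# Sketch: install the Failover Cluster Manager console and the
# FailoverClusters PowerShell module on the management server.
Install-WindowsFeature -Name RSAT-Clustering-Mgmt, RSAT-Clustering-PowerShell
# Then launch the console:
# cluadmin.msc
```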

Shutting down all Hyper-V hosts in a cluster in SCVMM 2012 R2 - what is the correct way to do this?


Dear Everyone,

I need to shut down all the Hyper-V hosts in a cluster managed by my SCVMM 2012 R2 in order to move them physically. What is the correct way to do this? Do I have to stop the cluster from cluadmin.msc?
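One conservative sequence is sketched below; the cluster name is a placeholder, and whether you stop or save the VMs is a judgment call for your workloads:

```powershell
# Sketch: orderly shutdown of a whole Hyper-V cluster before a physical move.
# 1. Shut down (or Save-VM) all running VMs, per node:
Get-VM | Where-Object State -eq 'Running' | Stop-VM -Force
# 2. Stop the cluster service on all nodes at once (run from any node):
Stop-Cluster -Cluster 'MyCluster' -Force
# 3. Power off each host:
Stop-Computer -Force
# After the move, power the hosts on; if the cluster does not start on its
# own, Start-Cluster brings it back.
```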

Thank you in advance.

Kind regards,

iOS_Android


Why is my Hyper-V Cluster showing as overcommitted?

I'm trying to determine why my Hyper-V cluster says I will be overcommitted by the creation of my next VM. I'm unclear whether it's the CPU or the RAM, so I'll give a breakdown of both.

The cluster has two host nodes, each with 24 GB of RAM and 2 quad-core processors. Host reserves are all at the defaults, so 512 MB of RAM and 20 percent CPU are reserved for each host. My understanding, at least RAM-wise, is that this gives me roughly 23.5 GB of usable RAM in the cluster to work with before its status goes to overcommitted (that's after subtracting the 512 MB host reserve).

Here are the current VM configurations in the cluster:

VM1: CPU = 4 and RAM = 4 GB
VM2: CPU = 4 and RAM = 4 GB
VM3: CPU = 4 and RAM = 4 GB
VM4: CPU = 4 and RAM = 4 GB
VM5: CPU = 1 and RAM = 1 GB
VM6: CPU = 1 and RAM = 1 GB
VM7: CPU = 1 and RAM = 1 GB

That puts my total RAM usage at 19 GB, leaving me with 4.5 GB to play with.

The current status of the cluster is "OK", but if I try to create a new VM now, regardless of how much RAM I specify (I've tried 512 MB and even less), I get no stars in my host ratings and the rating explanation is:

"This configuration causes the host cluster to become overcommitted"

I've verified that the "Cluster Reserve (nodes)" is set to 1.
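As a sanity check, the reserve arithmetic from the post can be worked through explicitly, assuming that with a cluster reserve of 1 node placement must fit on the surviving node alone (and ignoring per-VM memory overhead, which real VMM placement does not ignore):

```python
NODES = 2
RAM_PER_NODE_GB = 24.0
HOST_RESERVE_GB = 0.5          # the 512 MB default host reserve
CLUSTER_RESERVE_NODES = 1

usable_per_node = RAM_PER_NODE_GB - HOST_RESERVE_GB
# With a cluster reserve of 1 node, everything must still fit if one node fails:
failover_capacity_gb = (NODES - CLUSTER_RESERVE_NODES) * usable_per_node

vm_ram_gb = [4, 4, 4, 4, 1, 1, 1]      # the seven VMs listed above
committed_gb = sum(vm_ram_gb)
headroom_gb = failover_capacity_gb - committed_gb

print(failover_capacity_gb, committed_gb, headroom_gb)  # 23.5 19 4.5
```

Since 19 GB plus even a 512 MB VM still fits within 23.5 GB, the raw figures alone don't explain the rating; per-VM memory overhead or dynamic memory maximums could be pushing the effective commitment higher than the sum of the configured RAM.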

Can anyone shed some light on what might be the possible cause of this?

Hyper-V Live Migration Compatibility with Hyper-V Replica/Hyper-V Recovery Manager


Hi,

Is Hyper-V Live Migration compatible with Hyper-V Replica/Hyper-V Recovery Manager?

I have 2 Hyper-V clusters in my datacenter, both using CSVs on Fibre Channel arrays. These clusters were created and are managed using the same System Center 2012 R2 VMM installation. My goal is to eventually move one of these clusters to a remote DR site. Both sites are connected/will be connected to each other through dark fibre.

I manually configured Hyper-V Replica in Failover Cluster Manager on both clusters and started replicating some VMs using Hyper-V Replica.

Now, every time I attempt to use SCVMM to live migrate a VM that is protected by Hyper-V Replica to another host within the same cluster, the Migrate VM Wizard gives me the following "Rating Explanation" error:

"The virtual machine <virtual machine name> which requires Hyper-V Recovery Manager protection is going to be moved using the type "Live". This could break the recovery protection status of the virtual machine."



When I ignore the error and do the live migration anyway, it completes successfully with the message above. There doesn't seem to be any impact on the VM or its replication.

When a host shuts down or is put into maintenance mode, the VM migrates successfully, again with no noticeable impact on users or replication.

When I stop replication of the VM, the error goes away.

Initially, I thought this error was because I attempted to manually configure the replication between both clusters using Hyper-V Replica in Failover Cluster Manager (instead of using Hyper-V Recovery Manager).

However, even after configuring and using Hyper-V Recovery Manager, I still get the same error. This error does not seem to have any impact on the high-availability of my VM or on Replication of this VM. Live migrations still occur successfully and replication seems to carry on without any issues.

However, it now has me concerned that a live migration may one day occur and break replication of my VMs between both clusters.

I have searched and searched, and I cannot find any mention, in official or unofficial Microsoft channels, of the compatibility of these two features.

I know VMware vSphere Replication and vMotion are compatible with each other: http://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.replication_admin.doc%2FGUID-8006BF58-6FA8-4F02-AFB9-A6AC5CD73021.html

Please confirm: are Hyper-V Live Migration and Hyper-V Replica compatible with each other?

If they are, any link to further documentation on configuring these services so that they work in a fully supported manner will be highly appreciated.
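One way to verify replication health around a migration is with the in-box Hyper-V Replica cmdlets; a sketch (the VM name is a placeholder):

```powershell
# Sketch: check Hyper-V Replica state and statistics for a VM,
# before and after a live migration. VM name is hypothetical.
Get-VMReplication -VMName 'MYVM'       # replication state, mode, primary/replica server
Measure-VMReplication -VMName 'MYVM'   # replication health and statistics
```

Comparing the two outputs before and after a test migration would show concretely whether the move disturbed replication.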

D

Cannot place HA VM on SMB file share after installing Update Rollup 5


Hello!

We've just installed Update Rollup 5 for VMM 2012 R2, and now we can't place clustered VMs on shares on the Scale-Out File Server. The shares have a green sign in the "File shares" tab of the cluster properties, but they just aren't shown in the list of available locations in the "Select destination folder" dialog. Yet when I place a non-highly-available VM on the same host, I can choose an SMB share as the location.

I've noticed that everything works in a VMM console without Update Rollup 5 installed, so it looks like a bug in the updated console. Is this a known issue?

I also have another question regarding this rollup update and the Scale-Out File Server: there is a "Cluster nodes" tab in the properties of the file server. The list of nodes is empty after the update, although the text above the list says: "On one or more nodes, agent installation is pending. Click Add to see the node list."

When I click Add, I see my cluster nodes in the list, and when I add them, they are added with "Pending cluster addition" status. But they are already in this cluster. So I don't know whether it is safe to press "OK" in the properties window: will this break my cluster, or should everything be fine?

Thanks in advance, 
   Dmitry.

Cluster failed: Error 25302 Required privilege not held by the client


Hello everyone,

I have a problem when creating a cluster in SCVMM 2012 R2. I am trying to create the cluster with 2 nodes, but the creation fails at step 1.5, "Creating cluster", with this error:

Error (25302)
Failed to create the process to execute the task. Error A required privilege is not held by the client.
Recommended Action
Check if the user has permission on the VMM server and retry the operation.

- The cluster consists of 2 hosts

- 4 logical networks: management, storage, live migration and CSV

- Both have all the virtual switches already set up, and every NIC can ping storage and the other node

- I am using an EqualLogic PS4100 SAN. The plugin is loaded in SCVMM and SCVMM can see the LUNs

- All iSCSI connections on the nodes are connected, but the LUNs have no drive letter assigned in Disk Management

- The quorum disk is seen by all nodes and is set to offline when creating the cluster. The cluster wizard sets it online when creation starts.

- The SCVMM service account and SCVMM Run As account are both configured and are local admins on all Hyper-V nodes.

I use a third node, added to SCVMM, to run the application server with SCVMM on it. This node also has access to all logical networks. The server with SCVMM has access to the storage network and the management network.

- Policies have been blocked from inheritance (I have moved all servers to an empty test OU)

- Firewalls are all off

- Used domain admin account to create the cluster

-SCVMM admin and SCVMM service account have sysadmin access on DB (named sql 2012 standard instance on same application server as scvmm)

- Cluster validation fails with an error on "Validate SCSI-3 Persistent Reservation", but I think that is a bogus error because it refers to disk 0, which is the C: drive. I think this can be ignored:

Node NODE1 successfully issued call to Persistent Reservation RESERVE for Test Disk 0 which is currently reserved by node NODE2. This call is expected to fail.
Test Disk 0 does not provide Persistent Reservations support for the mechanisms used by failover clusters. Some storage devices require specific firmware versions or settings to function properly with failover clusters. Please contact your storage administrator or storage vendor to check the configuration of the storage to allow it to function properly with failover clusters.
Stop: 13-12-2013 15:48:41.
Test failed. Please look at the test log for more information.

- I get exactly the same error (privilege not held by the client) when I turn on the "Do not validate cluster" option in the cluster wizard.

Anyone any idea where to look?

Edit: I wanted to change permissions on the iSCSI quorum disk (located on the EQL SAN) on NODE1, but failed to do so. I am not sure whether I should be able to do this, because I am not really a SAN/EqualLogic expert. I would expect that I could, because it is an NTFS disk and NODE1 has it online and is therefore the owner. I suspect that it is something with this disk and/or something with the EqualLogic permissions.

Secondly, could it be something with local policy privileges? I have removed all policies and the SCVMM account is a member of the Administrators group on NODE1, but maybe I am missing something?



Source VHD(X) is kept after storage migration on CSV volumes


Hi,

Every time I do a storage migration (with VMM and/or Failover Cluster Manager) the source VHD/VHDX files are kept. The VM configuration is deleted at the source.

Is this by design? If not, is this a known issue with a solution?

Thanks.

SMB File Share Storage Failover Cluster "path is not valid folder path"


I am having an issue that I am scratching my head over. I have set up a 3-node Hyper-V host cluster and am attempting to use SMB file share storage as the shared storage medium. I have been trying to migrate a virtual machine from one node's local storage into the file share using System Center VMM 2012 R2 with UR5, but I keep getting the message "The specified path is not a valid folder path on node2.domain.com" for the storage that VMM automatically selects for placement of the files.

What is odd: against the same file share I can do a new deployment of a virtual machine to the same cluster, and I can also delete a virtual machine from the share without problems. The library share I deploy machines from is on the same server as the file shares for the cluster, so maybe that is why deployment succeeds. If I try to move a recently deployed VM from one file share to another, the same error comes up.

All three nodes reference the same file server (a single file server for the storage), and the shares were created in VMM, so the file share permissions were set up by VMM and should be sufficient. I have also attempted this both with and without delegation of CIFS through AD (trust to specified computers with CIFS, Hyper-V Replica and Microsoft Virtual Console Service via Kerberos only).

I am stumped as to what to check next or how to get this working and would appreciate any guidance anyone can give towards a resolution for this problem.

VMM error while setting NTFS permissions on a SOFS share


Hello,

I get errors 2912 and 26272 while registering a file share created by VMM on a 2-node SOFS cluster.

The share permissions are set correctly by VMM, but VMM is apparently unable to set the NTFS permissions.

I run SCVMM 2012 R2 UR3 on a 2012 R2 server. The SOFS cluster is running on two 2012 R2 servers. The Hyper-V hosts are one Hyper-V Server 2012 (the free edition), and I get the same errors on a 2-node Hyper-V Server 2012 R2 (the free edition) cluster.

Both clusters (SOFS and Hyper-V) were created manually with Failover Cluster Manager.

I hope someone can give me a hint where the problem is coming from.

Thanks in advance

Sebastian

Error (2912)
An internal error has occurred trying to contact the hafs01.lab.internal server: :w:InternalError: :Windows System Error 1332: No mapping between account names and security IDs was done.

WinRM: URL: [http://hafs01.lab.internal:5985], Verb: [INVOKE], Method: [GrantAccess], Resource: [http://schemas.microsoft.com/wbem/wsman/1/wmi/root/microsoft/windows/smb/MSFT_SmbShare?Name=CSV02+ScopeName=SOFS01]

No mapping between account names and security IDs was done (0x80070534)

Recommended Action
Check that WS-Management service is installed and running on server hafs01.lab.internal. For more information use the command "winrm helpmsg hresult". If hafs01.lab.internal is a host/library/update server or a PXE server role then ensure that VMM agent is installed and running. Refer to http://support.microsoft.com/kb/2742275 for more details.


Error (26272)
Failed to grant permissions for share \\sofs01.lab.internal\CSV02 on file server sofs01.lab.internal due to errors during the operation.

Recommended Action
Manually grant permissions on \\sofs01.lab.internal\CSV02.


Virtual Machines With Large RAM Fails Live Migration


Hi everyone....

I have a 2-node Hyper-V cluster managed by SCVMM 2012 R2. I am currently unable to migrate a VM that is using 48 GB of RAM. Each node has 256 GB of RAM and runs Windows Server 2012 R2.

When the VM is running on node 1, there is 154 GB in use and 102 GB available. When I try to migrate this VM to node 2 (which has 5.6 GB in use and 250 GB available), I get this error message in VMM:

Error (10698)
The virtual machine (abc-defghi-vm) could not be live migrated to the virtual machine host (xyz-wnbc-nd03) using this cluster configuration.

Recommended Action
Check the cluster configuration and then try the operation again.

(In case you were wondering, I ran the cluster validation and it passed without a problem.)

The Failover Cluster event log shows two key entries:

First:

Cluster resource 'SCVMM xyz-wnbc-vm' in clustered role 'SCVMM xyz-wnbc-vm Resources' has transitioned from state OfflineCallIssued to state OfflinePending. 

Exactly 60 seconds later, this message takes place:

Cluster resource 'SCVMM abc-defghi-vm' in clustered role 'SCVMM abc-defghi-vm Resources' rejected a move request to node 'xyz-wnbc-nd03'. The error code was '0x340031'. Cluster resource 'SCVMM abc-defghi-vm' may be busy or in a state where it cannot be moved. The cluster service may automatically retry the move.

Googling "0x340031" turned up nothing. Does anyone know what that error is?

Other notes:

  • If the Virtual machine is shut down I can migrate it.
  • If I lower the VM RAM settings and start it up again I can do a Live Migration.
  • All other VMs can live migrate; the largest RAM size among them is 16 GB.

Any suggestions?

VMM cluster refresh takes 30 minutes


Hi.

We have two Hyper-V 2012 R2 nodes in a VMM 2012 R2 Update Rollup 5 cluster, using FC shared storage (4 LUN).

Everything was working well in the beginning, but as we added new virtual machines everything suddenly started moving at a snail's pace. A cluster refresh takes 30 minutes, and other operations take minutes. Starting maintenance mode for one node took 24 minutes and left the machines in a Saved state; stopping maintenance mode took 22 minutes, again with the virtual machines left in a Saved state.

Any suggestions on how to diagnose this?

Regards,
Markus


Markus Sveinn Markusson

NVGRE Gateway Cluster Problem


Hello

We have following setup:

Management Hyper-V hosts running WAP, SPF and SCVMM 2012 R2 components

Gateway Hyper-V host: a single-node gateway Hyper-V host, configured as a single-node cluster so that extra hardware can be joined in the future.
This Hyper-V host runs 2 Windows Server Gateway VMs, configured as a failover cluster.
The following script is used to deploy these Windows Server Gateway VMs as a highly available NVGRE gateway service:

http://www.hyper-v.nu/archives/mscholman/2015/01/hyper-v-nvgre-gateway-toolkit/

Two tenant Hyper-V hosts running VMs that use network virtualization.

The setup completed successfully, and when creating a tenant in WAP with a VM network using NAT, the tenant's VMs are accessible and can access the Internet through the HA gateway cluster.

The Gateway Hyper-V host and NVGRE Gateway VMs are running in a DMZ zone, in a DMZ Active Directory Domain.

Management and tenant Hyper-V hosts, including all management VMs, run in a dedicated internal Active Directory domain.

Problems start when we fail the Windows Server Gateway service over to the other VM node of the NVGRE gateway cluster. In the lookup records on the gateway Hyper-V host we can see that the MAC address of the tenants' gateway record is updated with the new MAC address of the VM node now running the gateway service.

But in SCVMM this record is apparently not updated; the tenant hosts still use the old MAC address of the other gateway VM node. Looking in the SCVMM database, we can also see that the record in the VMNetworkGateway table representing the tenant's gateway still points to the MAC address of the PA network adapter of the old node of the NVGRE gateway cluster, not to the node the gateway service is running on after the failover. On the tenant Hyper-V hosts, the lookup record for the gateway likewise still points to the old node.

When we manually change the record in the VMNetworkGateway table to the new MAC address and refresh the tenant hosts in SCVMM, everything starts working again and the tenant VMs can reach the gateway.
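The PA/MAC mappings that get pushed to each host can be inspected directly with the in-box network virtualization cmdlets, which makes it easy to compare the gateway host, the tenant hosts, and what VMM believes after a failover. A sketch (the name filter is a placeholder):

```powershell
# Sketch: run on a tenant Hyper-V host (and on the gateway host) after a
# failover to see which MAC/PA the gateway lookup record points to.
Get-NetVirtualizationLookupRecord |
    Where-Object { $_.VMName -like '*GW*' } |   # placeholder filter for the gateway record
    Select-Object VirtualSubnetID, CustomerAddress, MACAddress, ProviderAddress
```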

Is anybody else facing this issue? Or is running an NVGRE gateway cluster on a single Hyper-V node not supported?

To be complete: the deployed VMs running the gateway service are not configured as HA VMs.

Regards
Stijn


Cannot live migrate VM on multi-site/SAN cluster (3PAR) since UR5!


Hi,

Situation :

  • 2 sites with hyper-v nodes, each with an HP 3par SAN.
  • Our cluster is spanned between the 2 sites.
  • LUNs from SAN site 1 are replicated to SAN Site 2
  • We were able to live migrate VM from site 1 to site 2, e.g. during maintenance

We upgraded SCVMM from UR4 to UR5 this week. Since then, we have not been able to live migrate VMs whose VHDs are on a replicated LUN. I get this error:

Error (23858)

Virtual machine <myvmname> isn't associated with a replication group and couldn't be moved to a location that's protected by replication group (<myreplicationgroupname>).


Live migration works fine from the Failover Cluster Manager console.

It looks like SCVMM now thinks that I'm using Azure Site Recovery in conjunction with the HP 3PAR SAN, but I'm NOT!

Any idea whom I should contact or what I could do?

Thanks



SCVMM 2012 R2 – two iSCSI network interfaces connected to the same subnet


I would like to configure two networks in SCVMM 2012 R2 which will be used by VMs to connect to an iSCSI SAN. Both of these networks should be connected to the same subnet (192.168.100.0/24) because they will connect VMs to a Dell EqualLogic array using iSCSI MPIO. These networks should be available on all Windows Server 2012 R2 Hyper-V cluster nodes.

When I try to create two logical networks in SCVMM with the same subnet, I receive an error: "Unable to assign the subnet 192.168.100.0/24 because it overlaps with an existing subnet".

How should I configure networking in SCVMM to allow one virtual machine to connect to the same subnet using two network interfaces?
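For what it's worth, at the Hyper-V level guest iSCSI MPIO is usually achieved by giving the VM two network adapters attached to the same virtual switch (or, in VMM, to a single logical/VM network) rather than by defining two overlapping logical networks; multipathing is then handled inside the guest. A hypothetical sketch with in-box Hyper-V cmdlets (all names are placeholders):

```powershell
# Sketch: give a VM two adapters on the same switch for in-guest iSCSI MPIO.
# Switch, VM and adapter names are hypothetical.
Add-VMNetworkAdapter -VMName 'SQLVM1' -SwitchName 'iSCSI-vSwitch' -Name 'iSCSI-1'
Add-VMNetworkAdapter -VMName 'SQLVM1' -SwitchName 'iSCSI-vSwitch' -Name 'iSCSI-2'
# Inside the guest, give each adapter its own 192.168.100.x address and
# enable MPIO (or the EqualLogic HIT kit) to use both paths.
```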


