Monday, July 22, 2013

How do I change the VMware network adapter type?

Sure, there are fancy PowerCLI ways to do it,
but being a person who has stayed away from the CLI as much as I can, I would rather do it the old-fashioned way through the GUI.
Issue: change the network adapter type from E1000 to VMXNET3.
You may do this with the VM powered on or off, but the VM will lose network connectivity throughout this process.
1. Make a note of the IP settings (IP address, default gateway, subnet mask, etc.) of the VM's network adapters.
2. VM > VM Settings, select the network adapter and make a note of the VLAN or port group information.
3. Select the network adapter > click Remove > click OK.
4. VM Settings > Add network adapter, choose VMXNET3 > click OK.
5. Open the VM's console and reassign the IP addresses in the TCP/IP properties of the new NICs.
You should now have the same working VM with a different network adapter type.
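Under the hood, the adapter type is just the virtualDev entry for that NIC in the VM's .vmx file; after the swap you would expect to see something like the lines below (the ethernet0 prefix and the port group name are assumptions here, yours may differ):

```
ethernet0.virtualDev = "vmxnet3"
ethernet0.networkName = "VM Network"
```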

Saturday, July 6, 2013

Time lag in cloned VMs in VMware vSphere 5.x

Issue: The cloned VMs lag 5 minutes behind the source VM.
Cloning is currently done via a PowerCLI script.
ESXi 5.0.0 build 768111
vCenter Server 5.1.0 build 799731
Only clones of one particular VM have this issue, not all of them.

What didn't work: time synchronization with the host is turned off.
The VMs currently sync their time with the domain controller.
VMware Tools are up to date.
The sync driver isn't present under Non-Plug and Play Drivers in Device Manager.
Reinstalled VMware Tools without VSS (Volume Shadow Copy) and tried cloning, but no go.

The .vmx file had the following entries:
tools.syncTime = "0"
If set to TRUE, the clock syncs periodically; every 60 seconds by default.

time.synchronize.continue = "0"
If set to TRUE, the clock syncs after taking a snapshot.

time.synchronize.restore = "0"
If set to TRUE, the clock syncs after reverting to a snapshot.

time.synchronize.resume.disk = "0"
If set to TRUE, the clock syncs after resuming from suspend and after migrating to a new host using the VMware vMotion feature.

time.synchronize.shrink = "0"
If set to TRUE, the clock syncs after defragmenting a virtual disk.

time.synchronize.tools.startup = "0"
If set to TRUE, the clock syncs when the tools daemon starts up, normally while the guest operating system is booting.

time.synchronize.resume.host = "0"
If set to TRUE, the clock syncs after the host resumes from sleep.

These parameters were defined in the problematic cloned VMs but not in the other, working VMs.
These parameters are the ones used to disable time synchronization completely; see the article.

This means time sync with the host had already been disabled.

The polling time was set to 3600 seconds (60 minutes) in the Domain Controller's registry settings. Changed it back to 900 seconds as per VMware and restarted the w32time service.
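For reference, that polling interval lives under the NtpClient time provider in the registry (assuming the standard W32Time location; the value is in seconds), and the change can be sketched like this from an elevated prompt on the DC:

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient" /v SpecialPollInterval /t REG_DWORD /d 900 /f
net stop w32time
net start w32time
```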

What worked: turning 'time synchronization with the host' on.
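For what it's worth, the same toggle can also be flipped from inside a Windows guest with the VMware Tools command line; the path below assumes a default Tools install:

```
"C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync enable
"C:\Program Files\VMware\VMware Tools\VMwareToolboxCmd.exe" timesync status
```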

VMware boot, SD card, HDD or BFS (Boot from SAN)

There is no doubt that when it comes to reliability, the good old HDD with RAID 1 or 5 is the most reliable option, since it eliminates the network, switch and storage points of failure. Then again, the server might still boot up fine if the network, the switch, the storage or one of the RAID drives fails, but you will still face downtime of the VMs sitting on a shared datastore (assuming that you are using shared storage for the VMs). An SD card offers a great alternative for what it's worth, but then again the SD card might fail (unless you have RAID 1 on your SD cards too), and the card reader might fail as well; in either case your host and its VMs are down.

So irrespective of SD, HDD or BFS, your VMs will definitely face downtime if the shared storage and/or the network fails. So, considering that you have already taken care of the switch part by having a redundant switch, and that your shared storage aka SAN is solid or near fail-proof, what is the best option to boot from: SD, HDD or BFS?!

If you go with BFS then you don't need internal storage for the servers. Consider an HP BL460: if you have to order ten of those, you will save a lot of money by not ordering any Smart Array controllers with them. You can allocate 10 GB per server for BFS on your SAN. You can also eliminate the need for physical NICs by using Virtual Connect or a similar network virtualization. If you take the internal storage and the physical NICs out of the server, you are almost completely abstracting VMware from the server hardware. You can even keep a backup of a server's BFS LUN just in case you need to boot a different server with it. If you are doing BFS then make sure you have a syslog server configured for all the hosts to store their logs, otherwise the BFS partition will fill up sooner than you think with the host's logs.
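Pointing a host's logs at a syslog server can be sketched from the ESXi shell with esxcli as below; the hostname, protocol and port are assumptions, substitute your own:

```
esxcli system syslog config set --loghost='udp://syslog.example.com:514'
esxcli system syslog reload
```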