Wednesday, August 26, 2015

Rename those ugly ESXi local datastores via PowerCLI

Who doesn't like it clean and neat? I know that I do. Whenever you install a bunch of ESXi hosts and go to the datastore section, you will see some ugly local datastores there, such as
datastore1
datastore1(1)
datastore1(2)
and so on. It would be very nice to rename them to something like host+local, right? Except you don't want to do that by hand for 10 hosts, do you?
Then here you go: a PowerCLI scriptlet as your saviour for the day.
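The idea can be sketched roughly like this (a minimal sketch, assuming an existing Connect-VIServer session; the `datastore1*` name match and the `<hostname>-local` naming convention are my assumptions, not necessarily the scriptlet's exact logic):

```powershell
# Sketch only: rename each host's local "datastore1*" datastore to "<hostname>-local".
# Assumes you are already connected to vCenter with Connect-VIServer.
Get-VMHost | ForEach-Object {
    $vmhost = $_
    # Match the default local datastore names: datastore1, datastore1(1), ...
    $ds = $vmhost | Get-Datastore | Where-Object { $_.Name -like 'datastore1*' }
    if ($ds) {
        # Use the short hostname, e.g. esxi01.lab.local -> esxi01-local
        $newName = ($vmhost.Name -split '\.')[0] + '-local'
        $ds | Set-Datastore -Name $newName -Confirm:$false
    }
}
```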

Sunday, August 9, 2015

Configuring the VM's hardware settings via PowerCLI

I am pretty sure you have come across situations where you had to set the hardware settings of VMs to your standard across multiple datacenters, or you might have wondered how to do so...
Here is something that might be useful, and as always I will keep it updated on my GitHub page.

<#
.SYNOPSIS
    Configures the CPU, memory and hard disks of the vCenter, database and
    Update Manager VMs.
.DESCRIPTION
    Connects to an ESXi host and resizes the CPU count, memory and hard disks
    of the management VMs, adding thin-provisioned disks where needed.
.NOTES
    File Name      : VMsettings.ps1
    Author         : gajendra d ambi
    Time           :
    Prerequisite   : PowerShell V2 on Windows Vista or later.
    Copyright      - None
.LINK
    Script posted over:
   
#>

#Fill these details
$vc = ""
$db = ""
$um = ""

#If running in plain PowerShell (not the PowerCLI console), add the snap-in below
Add-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue

#Disconnect from already connected viservers if any
Disconnect-VIServer * -ErrorAction SilentlyContinue

#Get host's details
#Note: $host is a read-only automatic variable in PowerShell, so use $esxi instead
$esxi = Read-Host "address of the esxi host on which the VMs are deployed?"
$user = "root"
$pass = Read-Host "Password for the esxi host?"
Connect-VIServer $esxi -User $user -Password $pass

#vCenter Server
#change memory and cpu
Get-VM $vc | Set-VM -NumCpu 6 -MemoryGB 20 -Confirm:$false
#grow the first 2 hard disks
(Get-HardDisk -VM $vc)[0] | Set-HardDisk -CapacityGB 60 -Confirm:$false
(Get-HardDisk -VM $vc)[1] | Set-HardDisk -CapacityGB 40 -Confirm:$false
#add 2 additional hard disks, thin provisioned
New-HardDisk -VM $vc -CapacityGB 40 -StorageFormat Thin -Confirm:$false
New-HardDisk -VM $vc -CapacityGB 40 -StorageFormat Thin -Confirm:$false
Write-Host ""

#database server
#change memory and cpu
Get-VM $db | Set-VM -NumCpu 4 -MemoryGB 16 -Confirm:$false
#grow hard disks
(Get-HardDisk -VM $db)[0] | Set-HardDisk -CapacityGB 60 -Confirm:$false
(Get-HardDisk -VM $db)[1] | Set-HardDisk -CapacityGB 60 -Confirm:$false
(Get-HardDisk -VM $db)[2] | Set-HardDisk -CapacityGB 60 -Confirm:$false
(Get-HardDisk -VM $db)[3] | Set-HardDisk -CapacityGB 60 -Confirm:$false
Write-Host ""

#update manager
#change memory and cpu
Get-VM $um | Set-VM -NumCpu 4 -MemoryGB 8 -Confirm:$false
#grow hard disks
(Get-HardDisk -VM $um)[0] | Set-HardDisk -CapacityGB 50 -Confirm:$false
(Get-HardDisk -VM $um)[1] | Set-HardDisk -CapacityGB 50 -Confirm:$false
Write-Host ""
 

Thursday, August 6, 2015

The saga of me, fedora gnu/linux 22 and automount of NFS

I have been trying to figure this out for quite some time now. It gave me a lot of headache and failed many times.
I wanted to document it here (even if it is a failure) so that others don't have to go through it.
I added the NFS mount commands to fstab, but I would never do that again. If these drives for some reason fail to mount during boot, then your system will not boot. What a bummer. The boot screen was also cheating me with a wrong and unrelated error (in regards to the boot) about the nouveau driver (for my Nvidia GTX 980). After asking all-knowing grandpa Google, I got some clues and more jitters.
I decided not to mount them during boot (so that at least I would have a bootable OS) and to mount them somehow after logging in, and for this I surrendered myself to the autostart application of the GNU/Linux OS. I created a shell script containing the working commands to mount the NFS shares from my Netgear NAS.
Something like this:


mount -t cifs -o username=meow,password=roar //nas/Music /mnt/nfs/nas_music
and rebooted the system. This of course didn't help, because only root can run the mount command. What a bummer.
Then I added sudo in front of the line, saved it and rebooted, but no go.
Then, after bothering grandpa Google for a while, I edited the sudoers file with
visudo
and added the following 2 lines:

##nfs mount script 
meow  ALL=(ALL:ALL) NOPASSWD: /pathtoscript/myscript.sh

but then again I was out of luck here too.
I ran the command from the script in the shell, and sure enough it asked for the password. I thought that was the problem, so I did:


echo roar | sudo -S mount -t cifs -o username=meow,password=roar //nas/Music /mnt/nfs/nas_music
7/8/2015
So I added the path to myscript.sh to the .bash_profile. Hope it all goes well. Let's see. I am adamant to find a permanent alternative to how this is done by everyone else, and so far it isn't going great.
So the good news is here. It is finally working. I am going to summarize my way of doing NFS automount on your GNU/Linux boxes, which is safe, will never obstruct the booting of the OS, and works every time.

So here we go
  1. Create a myscript.sh and place it somewhere in a directory where all users can access it. I suggest /opt/
  2. Manually, via the shell, mount a share or two and check which commands work for you without a problem to mount the NFS shares; make a note of that syntax or command
  3. Add the following in front of those working NFS mount commands:
    echo $mypassword | sudo -S
    Here $mypassword is a variable holding the user's login password; in my case the user was me, and I am part of the administrator group. The above will make sure that when you mount the NFS share it provides the credentials to proceed.
  4. Now enter this command into myscript.sh and save it; make it editable by root only and readable and executable by the user, in my case me. Make the file owned by user root and group root:
    sudo chown root:root myscript.sh
    Now set the setuid bit, making it executable for all and writable only by root:
    sudo chmod 4755 myscript.sh
  5. Find the .bash_profile, which is the user profile on Fedora 22 (it can be different on your particular GNU/Linux distro). Open it with your editor (nano or vi), add the path to the script at the end of it, and save it, like this:
    /pathtoscript/myscript.sh

    That is it....
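Put together, myscript.sh would look something like this (a sketch built from the commands above; the share name, mount point and credentials are the examples used earlier, so adjust them to your own NAS):

```shell
#!/bin/bash
# myscript.sh - mount the NAS share after login (sketch; adjust names/paths).
mypassword=roar   # the login password of the user running the script

# Feed the password to sudo via stdin (-S) so the mount runs non-interactively.
echo "$mypassword" | sudo -S mount -t cifs -o username=meow,password=roar //nas/Music /mnt/nfs/nas_music
```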

VMware VMstartup policy

If you have ever thought about automating the VM startup policy and given up, saying "well, it is just a one-time task and it hardly takes 15 minutes or less", then you were wrong.
What if you have to rebuild your clusters?
What if you have a standard for the VM startup policy and you want all your datacenters to reflect it?
What about human errors?
What if you are someone who has to do it once every week on different datacenters?
Well, you are not out of luck, because I was until now.

Here is a scriptlet called VMstartup to do that for you.
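As a rough illustration of what such a scriptlet does (a sketch, assuming a connected PowerCLI session; the host name and delay value are placeholders, not necessarily what VMstartup uses):

```powershell
# Sketch: make every VM on a given host power on with the host, with a 120 s delay.
# "esxi01.lab.local" is a placeholder host name.
$vms = Get-VM -Location (Get-VMHost "esxi01.lab.local")
Get-VMStartPolicy -VM $vms |
    Set-VMStartPolicy -StartAction PowerOn -StartDelay 120 -Confirm:$false
```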

Remember : sharing is caring.