Category Archives: VMware

Manually Remove I/O Filters From vSphere VM

I was attempting to move a VM from one host to another and received the following error: “Host does not support the virtual hardware configuration of virtual machine. The IO Filter(s) XXXX configured on the VM’s disk are not installed on the destination host.”

At one point I was using a VM accelerator solution that was not cleanly removed. It took me a while to figure out how to remove the I/O filter from the VM, so hopefully this guide will save you some time.

Part 1 – Remove setting from the VM

After searching through the VM’s config files, I came across the VM’s VMDK descriptor file. This is not the large data VMDK itself, but the small (roughly 1 KB) descriptor file, and that is the file I had to edit.

There are two lines that contain configuration for the I/O filter, and both need to be removed: the ddb.iofilters and ddb.sidecars entries. Both lines can simply be deleted and the file saved. For reference, see the example below.
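In my case the two entries looked roughly like the lines below. These values are illustrative placeholders rather than output from a real environment; the filter name will reflect whatever accelerator product was installed on your VM.

ddb.iofilters = "vendor-iofilter"
ddb.sidecars = "vendor-iofilter-sidecar"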

Upon trying to migrate the VM after removing these lines, I received the same error as before. I needed to make the host aware of these changes somehow. This was achieved by right-clicking the VM –> VM Policies –> Edit VM Storage Policies…

I didn’t have to change anything, but just needed to click OK.

After doing those tasks, I was able to successfully migrate the VM!

Part 2 – Remove setting from the Host

Although I probably could have done this first, I was in a hurry and didn’t want to impact production VMs. The process to remove the I/O filter from the host is fairly quick and easy, but it requires the host to be in maintenance mode, and a reboot afterwards is recommended.

Put the host into maintenance mode.
SSH into the host.
Run “esxcli software vib list” to view a list of all installed VIBs and identify the I/O filter package.
Run “esxcli software vib remove -n filtername” (replacing filtername with the name of the I/O filter VIB) to remove the filter, as shown in the example below.
While a reboot isn’t required, it is suggested.
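A minimal example session, assuming a hypothetical I/O filter VIB named vendor-iofilter (take the real name from the vib list output):

esxcli software vib list                        # identify the I/O filter VIB (often named after the vendor)
esxcli software vib remove -n vendor-iofilter   # remove it by name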

Reset HPE iLO Password from vSphere ESXi Host

Changing the login password of the iLO out-of-band management interface from within an ESXi host can be done as follows:

  1. Enable SSH on the host whose iLO password you need to reset
  2. SSH into the host using PuTTY or another SSH client
  3. Type: cd /opt/hp/tools

From here, we will create a new file that contains the new credentials you want to use on the iLO going forward. You can create this file and copy it to the above location using WinSCP, or use vi to do it all within the SSH session.

4. Type: vi pwreset.xml
5. Type: i
(this will put you into insert mode and allow you to paste the text below so you don’t have to type it. Replace Enter-Your-Password-Here with the password you want to set)

<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="Administrator" PASSWORD="unknown">
<USER_INFO MODE="write">
<MOD_USER USER_LOGIN="Administrator">
<PASSWORD value="Enter-Your-Password-Here"/>
</MOD_USER>
</USER_INFO>
</LOGIN>
</RIBCL>

6. Press the Esc key
7. Type: :wq to save and exit the file
8. Type: ./hponcfg -f pwreset.xml to reset the iLO

You should now be able to log in with your new credentials. The last step is to remove the file you just created.

9. Type: cd /opt/hp/tools
10. Type: rm -rf pwreset.xml

Finding Raw Device Mappings (RDMs) used in your VMware vSphere Environment

Cleaning up legacy storage and vSphere environments is always fun, especially when you think you have everything moved off an old array, only to find that your production database goes offline when that array is unplugged (totally made-up scenario, did not happen to me 🙂).

The slow way to approach this would be to go through every VM, one by one, check the disks associated with it, and then cross-reference LUN numbers on the SAN, etc. Or you could use PowerCLI and find that info in a snap.

For instructions on how to install PowerCLI, see my previous post here

  1. Connect to your vCenter Server through PowerCLI by using the following command and entering appropriate vSphere credentials

Connect-VIServer YOUR-VCENTER-IP-OR-FQDN

If you receive an invalid or self-signed certificate error, you will need to set PowerCLI to disregard self-signed certs

Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

  2. Run the following command to produce a list of VMs with RDMs

Get-VM | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName | fl

The output will look similar to the mocked-up example below (sorry, I didn’t have any additional RDMs in place when making this tutorial, so I couldn’t grab real output).
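Every value below is a placeholder, purely to show the shape of the Format-List output:

Parent            : SQL-VM01
Name              : Hard disk 2
DiskType          : RawPhysical
ScsiCanonicalName : naa.600508b1001c5e9b2d8f3a1b4c7d9e0f
DeviceName        : vml.0200010000600508b1001c5e9b2d8f3a1b4c7d9e0f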

  3. Finally, if you would like to save the output to a file, use the following command

Get-VM | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName | fl | Out-File -FilePath RDM-list.txt
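If you would rather have a structured file you can sort and filter later, the same query can be exported to CSV instead; this is just an optional variation on the command above:

Get-VM | Get-HardDisk -DiskType "RawPhysical","RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName,DeviceName | Export-Csv -Path RDM-list.csv -NoTypeInformation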

 

Install VMware’s PowerCLI in Windows

VMware PowerCLI is a very powerful tool to assist in automating tasks, advanced configurations and troubleshooting, etc. The following procedure can be used to install PowerCLI.
Downloading and installing PowerCLI is all done within Windows PowerShell itself.

  1. Open Windows PowerShell (Run as Admin)
  2. Run the following PowerShell command to download the PowerCLI modules (Path = wherever you save your PS modules). This process may take a few minutes.
    Save-Module -Name VMware.PowerCLI -Path <path>

  3. Run the following PowerShell Command to Install the PowerCLI Modules

    Install-Module -Name VMware.PowerCLI

  4. Finally, you can test to make sure the modules installed properly by running the following:
    Get-Module -ListAvailable -Name VMware*
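As an optional sanity check (the vCenter name below is just a placeholder), you can import the module and connect to a vCenter Server:

    Import-Module VMware.PowerCLI
    Connect-VIServer vcenter.example.local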

Datrium Design – Architecture Matters

Lame Joke: What do you get when you stick NVMe-based SSD onto an All-Flash Array or Hyper-Converged Node?

Genuine Answer: A Bottleneck of course!

As flash technologies advance and increase in performance, existing (and upcoming) network infrastructure cannot meet the demands of next-gen flash and storage-class memory technologies, such as 3D XPoint.
This chart compares saturation rates of 10GbE, 40GbE, and 100GbE with various flash offerings.

 

Datrium was founded by ex-founders and principal architects of companies like Data Domain and VMware, so it’s safe to say they know a thing or two about architecture. Their approach to overcoming some of the shortcomings in traditional converged and hyperconverged (HCI) platforms boils down to the following shift in architecture design:

Move the I/O processing to stateless compute nodes

Architectural Overview
There are basically two components to Datrium’s Open Convergence architecture.

Compute Nodes
Compute Nodes are servers of any brand the customer would like to use. The more RAM and flash these servers have, the more powerful the overall architecture. Each compute node gets Datrium’s DVX software installed into userspace on the hypervisor.
Every compute node is responsible for data services (deduplication, compression, erasure coding, and encryption). These nodes pull copies of data from the Data Nodes (the next component, addressed shortly) and keep that data in a stateless fashion, with the data ultimately being persisted on the Data Nodes.

Data Nodes
The DVX Data Nodes are hybrid or all-flash disk enclosures purchased from Datrium (you can’t use your own Data Nodes). Since all data is processed on the compute nodes, there is no data processing happening at the data node layer. The Data Nodes hold the durable copy of the data, which is only read from if the copies in flash/cache on the compute nodes are not available. The data that resides on the Data Nodes is heavily protected for resiliency.

Open Convergence is Datrium’s marketing term for this improved architecture, but taking the marketing out of the discussion, here is how Datrium solves for business outcomes:

  1. Simpler than HyperConverged
    – Zero HCI Cluster configuration or cluster sprawl
    – Independently and Simply provision compute or storage
    – Flexibly support any mix of hosts or hypervisors
    – No vendor lock-in on compute resources. Use existing compute hardware
  2. Faster than All-Flash Arrays
    – Flash is on the server, where it performs much faster
    – No Controller Bottlenecks
    – Performance scales with each server
  3. No Backup Silos
    – One console for VM consolidation and data protection
    – Reduce Management time for Backup, DR, Copy Data Management
    – Eliminate dedicated backup devices


If you need a lightning fast, resilient, scalable, cloud-enabled architecture, Datrium might be exactly what you need. Because in the end,  Architecture Matters.

 

pRDM and vRDM to VMDK Migrations

I was assisting an amazing client in moving some VMs off an older storage array and onto a newer storage platform. They had some VMs that had Physical RDMs (pRDM) attached to the VMs, and we wanted them living as VMDKs on the new SAN.
Traditionally, I have always shut down the VM, removed the pRDM, re-added it as a vRDM, and then done the migration, but I found an awesome write-up covering a few separate ways of doing this (summarized below, with a PowerCLI sketch at the end).
(Credit of the following content goes to Cormac Hogan of VMware)

VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):

  • If I try to change the format to thin or thick, then no Storage vMotion is allowed.
  • If I chose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.

 

VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):

  • On a migrate, if I chose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM)

 

VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I chose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN

 

VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I chose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I chose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).
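For the powered-on Storage vMotion scenarios above, the same conversion can also be kicked off from PowerCLI. This is only a rough sketch with placeholder VM and datastore names; the -DiskStorageFormat parameter forces the format conversion that triggers the RDM-to-VMDK conversion described above, so verify the behaviour on a non-production VM first.

Get-VM "MyVM" | Move-VM -Datastore (Get-Datastore "New-SAN-Datastore") -DiskStorageFormat Thin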

vSphere Web Client Integration Plugin Not Working

When trying to manage your vSphere environment using the web client (or forced to in 6.5+), the Web Client Integration plugin is required to make use of many features the web client has to offer, like remote console, enhanced authentication, and deploying OVF appliances.

If you have downloaded and installed the plugin, but IE, Chrome, or Firefox do not activate the plugin, it can most likely be resolved by doing one of the following:

  1. Add the vCenter FQDN to the trusted site list:
    For vSphere 6.0-6.5: https://vCenter_FQDN
    For vSphere 5.5: https://vCenter_FQDN:9443
  2. Add the vCenter FQDN to the Local Intranet list (IE & Chrome)
  3. Uninstall Plugin, Clear Cache/Cookies, Reinstall Plugin, and Repeat option 1

 

HPE Proliant G7 Servers and vSphere 6.5 Purple Screen of Death

Upgrading to ESXi 6.5 on HPE G7 servers will crash, cause you to scream, and require you to waste your time building a custom ISO that HPE could easily have provided.
Best practice is to use the vendor’s custom ISOs that have the hardware drivers integrated, so I used HPE’s latest custom ISO.

HPE G7 server support is being dropped by both HPE and VMware. In fact, vSphere 6.5 is supposedly the last version that will support the G7s. Knowing this, I assumed upgrading from ESXi 6.0 to 6.5 on a G7 would work, but I quickly found out that after the upgrade the hosts would “Purple Screen of Death” (PSOD) right after boot.

The Error: “PF Exception 14 in world 67667:sfcb-smx IP 0x0 addr 0x0”

The Issue: There are incompatible drivers in the customized ISO from HPE. Yes, more than one driver has issues.

The Workarounds: There are various workarounds. Some I have personally found to work, while others are resolutions I read about after I dealt with this, so I was not able to verify that they do indeed work, but I will list them nevertheless. Upgrading the firmware, BIOS, etc. did not resolve the issue.
Note: All of these workarounds require a fresh install of ESXi. Running an upgrade does not remove the incompatible drivers, and the host doesn’t stay alive long enough before crashing to manually remove them via SSH.

Solution 1: Use VMware’s Standard ISO Media
While this goes against many best practices, VMware doesn’t include many vendor drivers in their ISO builds, so the offending drivers do not get installed and crash the system. While you can certainly use this method, you will want to follow up and manually install the appropriate driver VIBs from HPE.

Solution 2: Build your own Custom ISO
This takes a bit more work, but it is probably the most comprehensive path to resolution. You will basically need to remove the offending driver from the HPE customized 6.5 ISO and inject the version from the 6.0 ISO. The following are instructions for doing this.

Create Custom VMware ESXi Media

Prerequisites: VMware PowerCLI installed, plus the HPE customized ESXi 6.5 and 6.0 offline bundles (ZIP files) downloaded locally (the steps below assume C:\ESXi\HPE-6_5.zip and C:\ESXi\HPE-6_0.zip).

Instructions:

  • Launch vSphere PowerCLI

  • Add the HP ESXi 6.5 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_5.zip

  • Check the Profile
    Get-EsxImageProfile

  • Copy the Profile
    New-EsxImageProfile -CloneProfile HPE-ESXi-6.5.0-OS-Release-6* -Name "G7-ESXi"


    When prompted for a Vendor, use "HPE Custom"

  • Check the Profile
    Get-EsxImageProfile

  • Remove the driver from the image
    Remove-EsxSoftwarePackage G7-ESXi hpe-smx-provider

  • Add the HP ESXi 6.0 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_0.zip
  • Check the Profile
    Get-EsxImageProfile

  • View both drivers in the two bundles
    Get-EsxSoftwarePackage | findstr smx

  • Add the necessary driver into the custom build
    Add-EsxSoftwarePackage -ImageProfile G7-ESXi -SoftwarePackage "hpe-smx-provider 600.03.11.00.9-2768847"

  • Convert your custom bundle to ISO
    Export-EsxImageProfile -ImageProfile G7-ESXi -ExportToIso -FilePath "C:\ESXi\G7-ESXi.iso"

  • Now take the ISO file that was created and use it to do a FRESH INSTALL. (Remember, an upgrade will not work.)

ESXi 6.0 Update via CLI – VIB DependencyError

While attempting to upgrade an ESXi host from 6.0.0-3073146 to the latest 6.x build (6.0.0 Update 2, build 4192238) via the CLI (see my post about patching via CLI), I got the following error:

[DependencyError]
VIB VMware_bootbank_esx-base_6.0.0-2.43.4192238 requires vsan >= 6.0.0-2.43, but the requirement cannot be satisfied within the ImageProfile.
VIB VMware_bootbank_esx-base_6.0.0-2.43.4192238 requires vsan << 6.0.0-2.44, but the requirement cannot be satisfied within the ImageProfile.
Please refer to the log file for more details.

The exact build on the error may be different on yours, but the issue is the same. I found this KB from VMware and decided to make a post that gets right to the point: VMware KB

This error occurs because the newest version of VSAN (which is built into ESXi) is looking for a specific base hypervisor build (esx-base). In order to run the update successfully, you’ll need to specify the image profile contained in the bundle you are using. It’s actually a lot easier than it may sound.

First, let’s find the image profile contained in the offline bundle you will be using. Run the following command, pointing the destination to the .zip bundle you uploaded to a datastore on the host.

esxcli software sources profile list -d <location_of_the_esxi_zip_bundle_on_the_datastore>

It will output something similar to the example below.
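Here is a mocked-up example of the layout (the profile name is the one from my Dell bundle; the other columns are approximate placeholders):

Name                         Vendor  Acceptance Level
---------------------------  ------  ----------------
Dell-ESXi-6.0U2-4192238-A04  Dell    PartnerSupported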

That Name is the profile you will need to add to your update command.
So in my case, the update command would look like this:

esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-6.0.0.update02-4192238.x86_64-Dell_Customized-offline-bundle-A04.zip -p Dell-ESXi-6.0U2-4192238-A04

It should update and finish with no errors.

The final step is to issue a reboot command, and you are done.
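For reference, the reboot can be issued right from the same SSH session. The reason string below is just an example, and esxcli expects the host to already be in maintenance mode (a plain reboot command works as well):

esxcli system shutdown reboot -r "Applying ESXi 6.0 U2 patch"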