
Infinio Accelerator: Server-Side Caching for Insane Acceleration

Server-side caching isn’t a totally new concept, but it is a hot market right now as storage providers try to push the speed limits of their respective platforms. 3D XPoint is all the rage around the water cooler, even if the product isn’t yet available to its full potential.

Infinio is a server-side caching solution I have been benchmarking as a potential offering to customers, and I have been very impressed with the quick results. Being able to cut read latency by a factor of four (in my case) in as little as 15 minutes is what sold me.

Infinio Accelerator is built on three fundamental principles:

  1. The highest performance storage architecture is one where the
    hottest data is co-located with applications in the server
    As storage media has become increasingly faster, culminating in the
    ubiquity of flash devices, the network has become the new bottleneck. An
    architecture that serves I/O server-side provides performance that is
    significantly better than relying on lengthy round-trips to and from even
    the highest performing network-based storage. By serving most I/O with
    server-side speed, as well as reducing demands on centralized arrays,
    Infinio can deliver 10X the IOPS of a typical storage environment at 20X
    lower latency.
  2. A “memory-first” architecture is required to realize the best
    storage performance
    RAM is orders of magnitude faster than flash and SSDs, but is cost-prohibitive
    for most datasets. Infinio’s solution to this problem is a
    content-based architecture, whose inline deduplication enables RAM to
    cache 5X-10X more data than its physical capacity. The option of evicting
    from RAM to a server-side flash tier (which may comprise PCIe flash, SSDs,
    or NVMe devices) offers additional caching capacity. By creating a tiered
    cache such as this, Infinio makes it practical to reduce the storage
    requirements on the server side to just 10% of the dataset. Long-term
    industry trends such as storage-class memory are another indication that a
    memory-first architecture is appropriate for this application.
  3. Delivering storage performance should be 100% headache-free
    Infinio’s software enables the use of server-side RAM and flash to be
    transparent to storage environments, supporting the use of native storage features like snapshots and clones, as well as VMware integrations like
    VAAI and DRS. The introduction of Infinio begins to provide value
    immediately after a non-disruptive, no-reboot, 15-minute installation. This
    is in sharp contrast to server-side flash devices used alone, which can
    provide impressive performance results, but require significant
    maintenance and cumbersome data protection.

What does Infinio do exactly?

Infinio Accelerator is a software-based server-side cache that provides high
performance to any storage system in a VMware environment. It increases
IOPS and decreases latency by caching a copy of the hottest data on server-side
resources such as RAM and flash devices. Native inline deduplication
ensures that all local storage resources are used as efficiently as possible,
reducing the cost of performance. Results can be seen instantly following the
non-disruptive, 15-minute installation that doesn’t require any downtime, data
migration, or reboots. Since roughly 70% of I/O requests are reads (on average), most of your reads will come straight from super-fast RAM.
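
To make the content-based caching idea a bit more concrete, here is a tiny, purely illustrative PowerShell sketch (not Infinio’s actual implementation) of how a deduplicated read cache can hold more logical data than its physical footprint: cached blocks are keyed by a hash of their content, so identical blocks across VMs consume a single slot.

# Illustrative only: a minimal content-addressed read cache.
# Identical blocks hash to the same digest, so they are stored once.
$blockStore = @{}   # digest  -> block bytes (stored once)
$lbaIndex   = @{}   # "vm:lba" -> digest (many entries can share one digest)

function Get-BlockDigest([byte[]]$Data) {
    $sha = [System.Security.Cryptography.SHA256]::Create()
    try     { [BitConverter]::ToString($sha.ComputeHash($Data)) -replace '-' }
    finally { $sha.Dispose() }
}

function Add-CachedBlock([string]$VmLba, [byte[]]$Data) {
    $digest = Get-BlockDigest $Data
    if (-not $blockStore.ContainsKey($digest)) { $blockStore[$digest] = $Data }   # dedupe
    $lbaIndex[$VmLba] = $digest
}

function Read-CachedBlock([string]$VmLba) {
    if ($lbaIndex.ContainsKey($VmLba)) { return $blockStore[$lbaIndex[$VmLba]] }  # hit: served from RAM
    return $null                                                                  # miss: fall through to the array
}

Ten VMs cloned from the same template reference mostly identical blocks, which is why the effective cache capacity can be several times the RAM actually committed.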

How does it actually work?

Infinio is built on VMware’s VAIO (vSphere APIs for I/O Filters) framework,
which is the fastest and most secure way to intercept I/O coming from a virtual
machine. Its benefits can be realized on any storage that VMware supports; in
addition, integration with VMware features like DRS, SDRS, VAAI and vMotion
all continue to function the same way once Infinio is installed. Finally, future
storage innovation that VMware releases will be available immediately through
I/O Filter integration.

In short, Infinio is the most cost-effective and easiest way to add storage
performance to a VMware environment. By bringing performance closer to
applications, Infinio delivers:
  • 20X decrease in latency
  • 10X increase in throughput
  • Reduced storage performance costs ($/IOPS) and capacity costs ($/GB)

Final Thoughts

Honestly, there could not be an easier solution that delivers results as dramatic as server-side caching. Deploying Infinio when you are in a performance jam provides immediate relief, and it should be part of your performance-enhancing arsenal. There is a free trial as well, and remember, there is no downtime to install or uninstall Infinio in your environment.

Please reach out to me, or to your solution provider, to learn more and test drive Infinio Accelerator: NetWize IT Solutions.

Datrium Design – Architecture Matters

Lame Joke: What do you get when you stick NVMe-based SSDs into an All-Flash Array or Hyper-Converged node?

Genuine Answer: A Bottleneck of course!

As flash technologies advance and increase in performance, existing (and upcoming) network infrastructure cannot meet the demands of next-gen NAND technologies such as 3D XPoint.
This chart compares saturation rates of 10GbE, 40GbE, and 100GbE against various flash offerings.

 

Datrium was founded by ex-founders and principal architects from companies like Data Domain and VMware, so it’s safe to say they know a thing or two about architecture. Their approach to overcoming some of the shortcomings in traditional converged and hyper-converged (HCI) platforms boils down to the following shift in architecture design:

Move the I/O processing to stateless compute nodes

Architectural Overview
There are basically two components to Datrium’s Open Convergence architecture.

Compute Nodes
Compute Nodes are servers of any brand the customer would like to use. The more RAM and flash these servers have, the more powerful the overall architecture. Each compute node gets Datrium’s DVX software installed into userspace on the hypervisor.
Every compute node is responsible for data services (deduplication, compression, erasure coding, and encryption). These nodes pull copies of data from the Data Nodes (the next component, addressed shortly) and keep that data in a stateless fashion before it is sent on to the Data Nodes.

Data Nodes
The DVX Data Nodes are hybrid or all-flash disk enclosures purchased from Datrium (you can’t use your own Data Nodes). Since all data is processed on the compute nodes, there is no data processing happening at the Data Node layer. The Data Nodes hold data that is only accessed when copies are not available in the flash/cache on the compute nodes, and the data that resides on them is heavily protected for resiliency.

Open Convergence is Datrium’s marketing term for this improved architecture, but taking the marketing out of the discussion, here is how Datrium solves for business outcomes:

  1. Simpler than HyperConverged
    – Zero HCI Cluster configuration or cluster sprawl
    – Independently and Simply provision compute or storage
    – Flexibly support any mix of hosts or hypervisors
    – No vendor lock-in on compute resources. Use existing compute hardware
  2. Faster than All-Flash Arrays
    – Flash is on the server, where it performs much faster
    – No Controller Bottlenecks
    – Performance scales with each server
  3. No Backup Silos
    – One console for VM consolidation and data protection
    – Reduce Management time for Backup, DR, Copy Data Management
    – Eliminate dedicated backup devices


If you need a lightning fast, resilient, scalable, cloud-enabled architecture, Datrium might be exactly what you need. Because in the end,  Architecture Matters.

 

SmartThings Home Automation – Laundry Alerting

I have tried to create a fully automated “Smart Home” using many technologies with integrated workflows and automation. Alerting when the washer and dryer have finished their cycles has been one of the most convenient automation features for my wife and me. I can’t tell you how many times we have started the laundry, forgotten about it, and had to rewash the sour wet clothes. Here is how we do it.

First, an explanation of how this works.

I have my washing machine and dryer each plugged into their own Z-Wave power-metering switch/plug. This gives me insight into how much energy each is using and when they are powered on versus off. I use these plugs specifically: Zooz Zen15

When we start a load of laundry (washer or dryer), these Zooz power switches sense the energy being used, and the SmartThings Hub assumes (correctly) that the laundry is being run. Since there will always be a tiny bit of power being drawn, even when the laundry isn’t running, it only assumes the laundry is on when power usage exceeds 10 Watts. This power usage fluctuates during the cycle, especially for the washing machine. So the rule I have set in place monitors the usage and alerts my phone when the laundry is done: it knows the laundry is finished when the power usage drops below 8 Watts for 4 minutes. BOOM! Perfect solution, and it works every time.
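
For the curious, here is a rough PowerShell sketch of that rule, using the same thresholds; it is just an illustration of the logic, not the SmartApp’s actual Groovy code.

# Rough sketch: start above 10 W, finish after 4 consecutive quiet minutes below 8 W.
$onThreshold   = 10   # watts
$offThreshold  = 8    # watts
$quietRequired = 4    # minutes below threshold before we call it done

$cycleRunning = $false
$quietMinutes = 0

function Update-LaundryState([double]$Watts) {
    if (-not $script:cycleRunning) {
        if ($Watts -gt $script:onThreshold) {
            $script:cycleRunning = $true
            $script:quietMinutes = 0
            Write-Host "Laundry cycle started ($Watts W)"
        }
        return
    }
    if ($Watts -lt $script:offThreshold) {
        $script:quietMinutes++
        if ($script:quietMinutes -ge $script:quietRequired) {
            $script:cycleRunning = $false
            Write-Host "Laundry is done - send the phone notification"
        }
    }
    else {
        $script:quietMinutes = 0   # power spiked again (agitate/spin), reset the timer
    }
}

The four quiet minutes are what keep the washer’s pauses between fill, agitate, and spin from firing a false “done” alert.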

Here is what you will need to pull it off; I assume that if you are reading this, you are already a SmartThings user and have some idea of how the IDE works.

After you have added the “Better Laundry Monitor” SmartApp in your SmartThings IDE, go into your SmartThings app, Marketplace, Smart Apps, and scroll down to My Apps.
See Video Below

 

FreeNAS Alerting with Amazon AWS SNS

When setting up alerting on FreeNAS 11.x, I chose to use AWS’ free SNS service. I was an SNS virgin before going through this, so I documented the procedure below.

  1. Assuming you already have an AWS account (even if you aren’t paying for any services), you can add the free SNS service to the account here: Amazon AWS SNS
  2. Upon logging into the SNS Dashboard, click “Create Topic”.

  3. Give the topic a Name and Display Name.

  4. Click “Create Subscription“. The topic ARN will already be filled out, so just select Email for Protocol, and put in the email address to receive the alerts. You will receive an email requesting you to click a link to confirm the subscription.

  5. After subscription confirmation, click on the subscription and note the Region and ARN, as this will be used in FreeNAS later.

  6. While still logged in with your AWS account, go to the AWS Identity and Access Management (IAM) console: https://console.aws.amazon.com/iam/home#/home
  7. Click on the Users Menu, and then Add User. Create a Username and Select Programmatic Access as the access type.

  8. For the user’s policies, select “AmazonSNSFullAccess”. (I am not sure if Full Access is required, but I didn’t have time to play around with the lowest permissions needed.)
  9. The final step on the AWS side is to make note of the Access Key ID and Secret Access Key that are automatically created for the IAM user you just created.

  10. Log in to your FreeNAS management console and go to System > Alert Services. Click Add Alert Service, and have all that AWS info ready as follows:

    Service Name: AWS-SNS
    Region: (Region found on the SNS Subscription)
    ARN: (ARN found on the SNS Subscription)
    Key ID: (Found under the AWS IAM account you created)
    Secret Key: (Found under the AWS IAM account you created)

  11. Click OK (before you Send Test Alert), then click Edit on the Alert Service again; from there you can send a test alert. In my case, there was about a one-minute delay before the email came in. If you want to sanity-check the topic and credentials outside of FreeNAS, see the sketch below.
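
The following is an optional sanity check from any workstation, assuming the AWS Tools for PowerShell module (AWS.Tools.SimpleNotificationService) is installed; the region, ARN, and keys shown are placeholders, so substitute the values you noted above.

# One-time, if the module isn't already installed:
# Install-Module AWS.Tools.SimpleNotificationService

Import-Module AWS.Tools.SimpleNotificationService

# Access Key ID / Secret Access Key from the IAM user created above
Set-AWSCredential -AccessKey 'AKIAXXXXXXXXXXXXXXXX' -SecretKey 'your-secret-access-key'

# Region and Topic ARN from the SNS subscription you noted earlier
Publish-SNSMessage -Region 'us-east-1' `
                   -TopicArn 'arn:aws:sns:us-east-1:123456789012:FreeNAS-Alerts' `
                   -Subject 'SNS test' `
                   -Message 'Test message published outside of FreeNAS'

If the subscribed address receives that email, the topic, subscription, and IAM credentials are all working, and any remaining problems are down to the values entered in the FreeNAS Alert Service form.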

pRDM and vRDM to VMDK Migrations

I was assisting an amazing client in moving some VMs off an older storage array and onto a newer storage platform. They had some VMs with physical RDMs (pRDMs) attached, and we wanted those disks living as VMDKs on the new SAN.
Traditionally, I have always shut down the VM, removed the pRDM, re-added it as a vRDM, and then done the migration, but I found an awesome write-up on a few different ways of doing this.
(Credit of the following content goes to Cormac Hogan of VMware)

VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):

  • If I try to change the format to thin or thick, then no Storage vMotion is allowed.
  • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.

 

VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):

  • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).

 

VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I choose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.

 

VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).
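
Before choosing a path, it helps to inventory which VMs actually have RDMs attached and of which type. A quick PowerCLI snippet (assuming you are already connected to vCenter with Connect-VIServer) can list them:

# List every physical and virtual RDM in the environment,
# with the owning VM and the backing LUN.
Get-VM | Get-HardDisk -DiskType RawPhysical, RawVirtual |
    Select-Object Parent, Name, DiskType, CapacityGB, ScsiCanonicalName, Filename

Per the behaviour above, anything returned as RawPhysical only becomes a VMDK via a powered-off cold migration with a format change, while RawVirtual disks can be converted live with a Storage vMotion.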

Windows Defender Error “Unexpected error. Sorry, we ran into a problem. Please try again”

Since the latest Windows 10 Creators Update, I have been seeing some issues with Windows Defender alerting me that it cannot start. When I try to start the service, I get the following error:

Unexpected error. Sorry, we ran into a problem. Please try again

 

The trick was to edit some registry settings (of course). Open Registry Editor and go to:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender

Change the DisableAntiSpyware and DisableAntiVirus values from 1 to 0.

Incidentally, I didn’t have an entry for DisableAntiVirus and had to create it.
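
If you would rather script it than click through Registry Editor, the same edit can be made from an elevated PowerShell prompt (this simply automates the change described above; Set-ItemProperty creates the value if it doesn’t already exist):

# Run from an elevated PowerShell prompt.
$path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows Defender'

# Create the key if it is missing, then force both values to 0.
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
Set-ItemProperty -Path $path -Name DisableAntiSpyware -Value 0 -Type DWord
Set-ItemProperty -Path $path -Name DisableAntiVirus   -Value 0 -Type DWord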

 

Update Plex Plugin on FreeNAS 11

If you are rocking your own FreeNAS storage at home or the office, you’ll know that FreeNAS’ built-in plugins are rarely up to date. Updating the Plex plugin is fairly straightforward.

1. SSH to your FreeNAS
2. type: jls
3. Take note of the Jail # of your Plex plugin
4. type:  jexec # csh (where # is the number of the jail noted in the last step)
5. type:  fetch -o PMS_Updater.sh https://raw.githubusercontent.com/mstinaff/PMS_Updater/master/PMS_Updater.sh
6. type:  chmod 755 PMS_Updater.sh
7. type:  ./PMS_Updater.sh -u PlexPass_User -p PlexPass_password -a

 

vSphere Web Client Integration Plugin Not Working

When trying to manage your vSphere environment using the web client (or being forced to in 6.5+), the Web Client Integration Plugin is required to make use of many features the web client has to offer, like remote console, enhanced authentication, and deploying OVF appliances.

If you have downloaded and installed the plugin, but IE, Chrome, or Firefox does not activate it, the issue can most likely be resolved by doing one of the following:

  1. Add the vCenter FQDN to the trusted sites list (this can also be scripted; see the sketch after this list):
    For vSphere 6.0-6.5: https://vCenter_FQDN
    For vSphere 5.5: https://vCenter_FQDN:9443
  2. Add the vCenter FQDN to the Local Intranet list (IE & Chrome)
  3. Uninstall Plugin, Clear Cache/Cookies, Reinstall Plugin, and Repeat option 1
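
Options 1 and 2 can also be pushed out via the per-user security-zone registry mappings rather than clicking through Internet Options. A sketch, assuming a hypothetical vCenter FQDN of vcenter.example.com (zone 2 is Trusted Sites, zone 1 is Local Intranet):

# Map https://vcenter.example.com into the Trusted Sites zone (2) for the current user.
# Use a value of 1 instead to place it in the Local Intranet zone.
$domains = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains'
$key     = Join-Path $domains 'example.com\vcenter'

New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'https' -Value 2 -Type DWord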

 

How to find HPE Proliant Serial Number from Command Prompt

I was trying to find the serial number for an HP (HPE) ProLiant server, but the System Management Agent wasn’t displaying the info and I didn’t have access to the iLO. I found the following workaround from a user on a forum.

Open a command prompt and type:

wmic /node:%computername% bios get serialnumber

To find the Serial of a remote computer, type the following:

wmic /node:HOSTNAME bios get serialnumber
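
If wmic isn’t available (it is deprecated on newer Windows builds), a PowerShell equivalent that queries the same BIOS class looks like this:

# Local machine
Get-CimInstance -ClassName Win32_BIOS | Select-Object SerialNumber

# Remote machine (requires WinRM access and rights on the target)
Get-CimInstance -ClassName Win32_BIOS -ComputerName HOSTNAME | Select-Object SerialNumber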

 

HPE Proliant G7 Servers and vSphere 6.5 Purple Screen of Death

Upgrading to VMware ESXi 6.5 on HP G7 servers will crash, will cause you to scream, and will require you to waste your time building a custom ISO that HPE could easily have provided.
Best practice is to use the vendor’s custom ISOs that have the hardware drivers integrated, so I used HPE’s latest custom ISO.

HPE G7 Server support is being dropped by both HPE and VMware. In fact, vSphere 6.5 is supposedly the last version that will support the G7s. Knowing this info, I assumed upgrading from ESXi 6.0 to 6.5 on G7 would work, but I found out quickly that after the upgrade the hosts would “Purple Screen of Death” (PSOD) right after boot.

The Error: “PF Exception 14 in world 67667:sfcb-smx IP 0x0 addr 0x0”

The Issue: There are incompatible drivers in the customized ISO from HPE. Yes, there is more than one driver with issues.

The Workarounds: There are various workarounds. Some I have personally found to work, while others are resolutions I read about after I had already dealt with this, so I was not able to verify that they do indeed work, but I will list them nevertheless. Upgrading the firmware, BIOS, etc. did not resolve the issue.
Note: All these workarounds require a fresh install of ESXi. Running an upgrade does not remove the incompatible drivers, and the host doesn’t stay alive long enough before crashing to remove them manually via SSH.

Solution 1: Use VMware’s Standard ISO Media
While this goes against many best practices, VMware doesn’t include many vendor drivers in their ISO builds, so the offending drivers do not get installed and crash the system. While you can certainly use this method, you will want to follow up and manually install the appropriate driver VIBs from HPE.

Solution 2: Build your own Custom ISO
This takes a bit more work, but it is probably the most comprehensive path to resolution. You will basically need to remove the offending driver from the HPE customized 6.5 ISO and inject the one from the 6.0 ISO. The following are instructions for doing this.

Create Custom VMware ESXi Media

Prerequisites:

  • VMware vSphere PowerCLI installed
  • The HPE Custom ESXi 6.5 and 6.0 offline bundles (ZIP files) downloaded locally (C:\ESXi\HPE-6_5.zip and C:\ESXi\HPE-6_0.zip in the steps below)

Instructions:

  • Launch vSphere PowerCLI

  • Add the HP ESXi 6.5 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_5.zip

  • Check the Profile
    Get-EsxImageProfile

  • Copy the Profile (use "HPE Custom" for the Vendor)
    New-EsxImageProfile -CloneProfile HPE-ESXi-6.5.0-OS-Release-6* -Name "G7-ESXi"

  • Check the Profile
    Get-EsxImageProfile

  • Remove the driver from the image
    Remove-EsxSoftwarePackage G7-ESXi hpe-smx-provider

  • Add the HP ESXi 6.0 image bundle
    Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_0.zip
  • Check the Profile
    Get-EsxImageProfile

  • View both drivers in the two bundles
    Get-EsxSoftwarePackage | findstr smx

  • Add the necessary driver into the custom build
    Add-EsxSoftwarePackage -ImageProfile G7-ESXi -SoftwarePackage "hpe-smx-provider 600.03.11.00.9-2768847"

  • Convert your custom bundle to ISO
    Export-EsxImageProfile -ImageProfile G7-ESXi -ExportToIso -FilePath "C:\ESXi\G7-ESXi.iso"

  • Now take the ISO file that was created and use it to do a FRESH INSTALL. (Remember, an upgrade will not work.)
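
For convenience, here is the same sequence as a single copy-and-paste block; the paths, profile name, and driver version match the steps above (the Vendor value is passed inline rather than at the prompt), so adjust them for your own environment:

# Build a G7-safe ESXi 6.5 ISO: clone the HPE 6.5 profile, swap the
# hpe-smx-provider driver for the 6.0 version, then export to ISO.
Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_5.zip
Add-EsxSoftwareDepot -DepotUrl C:\ESXi\HPE-6_0.zip

New-EsxImageProfile -CloneProfile HPE-ESXi-6.5.0-OS-Release-6* -Name "G7-ESXi" -Vendor "HPE Custom"

Remove-EsxSoftwarePackage -ImageProfile G7-ESXi -SoftwarePackage hpe-smx-provider
Add-EsxSoftwarePackage    -ImageProfile G7-ESXi -SoftwarePackage "hpe-smx-provider 600.03.11.00.9-2768847"

Export-EsxImageProfile -ImageProfile G7-ESXi -ExportToIso -FilePath "C:\ESXi\G7-ESXi.iso"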