Install VMware’s PowerCLI in Windows

VMware PowerCLI is a powerful tool for automating tasks, advanced configuration, and troubleshooting. The following procedure can be used to install PowerCLI.
Downloading and installing PowerCLI is all done within Windows PowerShell itself.

  1. Open Windows PowerShell (Run as Admin)
  2. Run the following PowerShell command to download the PowerCLI modules. (Path = wherever you save your PS modules.) This process may take a few minutes.
    Save-Module -Name VMware.PowerCLI -Path <path>

  3. Run the following PowerShell command to install the PowerCLI modules:

    Install-Module -Name VMware.PowerCLI

  4. Finally, you can test to make sure the modules installed properly by running the following:
    Get-Module -ListAvailable -Name VMware*

Sophos XG Virtual Appliance Firewall Deployment – Step by Step Guide

  1. Download the XG Appliance OVF/OVA files from Sophos
  2. In vSphere, Right-Click on the Cluster or Host where you want to deploy the virtual appliance and select Deploy OVF Template

  3. Click Local File > Browse. Select the .ovf file and the .vmdk disk files and click Open

  4. Give the VM a name and select the Datacenter in which to deploy the Virtual Appliance

  5. Select the Host or Cluster in which to deploy the appliance

  6. Review Storage Details

  7. Select the Virtual Network you want assigned to the LAN interface. (You will configure WAN later)

  8. Review Settings and click Finish

  9. Log in to the LAN interface (make sure the computer or device you are logging in from has an IP on the same network), and provide a new admin password

  10. If your ISP does not provide an IP via DHCP, manually enter the IP parameters.

  11. Review finalized settings and Continue

  12. The firewall will apply its configuration and reboot a few times (takes about 5 minutes). Log in to the firewall at the same address with the new password you set.

  13. Finally, you can configure the firewall and change LAN IP, etc once logged in.

Sophos XG Firewall Review

Sophos has been climbing the security leaderboard of the Magic Quadrant for some time now, and we have utilized their amazing endpoint protection within our company and with our customers. I was excited to get my hands on their XG firewalls and take notes on my experience with the initial deployment, configuration, and ongoing feedback.
Note: this review is based on a week or so of usage and does not incorporate feedback over time, which is when most issues with any product usually creep up.

Aesthetics – Initial impressions of the nuts/bolts

The XG135w is a desktop form factor unit that can also be rack-mounted (mounting kit not included). It has a nice clean shell and three large omni-directional antennas, with the ability to add two additional antennas via an add-on module. It has 8 x 1GbE ports plus an additional 1GbE SFP port, which, when in use, takes the place of Port 5. It has an HDMI port which I haven’t had time to try out, two USB ports, and one Micro USB for console access. It feels like they have taken a really nice gaming motherboard and converted it into an awesome firewall. It’s rare that you see SFP, HDMI, and Micro USB ports on a firewall, but that’s what makes the XG so unique. The expansion bay allows you to add any one of the following: SFP DSL module, 3G/4G module, additional Wi-Fi radio, or additional SFP ports.
The DC power plug threw me off a little bit though, as it is a 12v banana plug that goes into the firewall itself, while the other end requires an adapter to convert it to a US or European power socket. Not a bad thing, but not what you would expect. (There are two DC ports for dual power supplies). 

White Gloss Shell

Banana Plug DC Power



Deployment was very easy, with a Setup Wizard that takes you through everything from power-on to management login. It ships with a default LAN IP, so you will need to give your laptop an IP on that subnet and then hit that IP through a web browser. You can probably take most of the defaults throughout the five-page wizard; the only real decisions you will need to make during setup are a new admin password and whether you want to use the firewall in Route Mode or Bridge Mode. A 5-minute deployment couldn't be easier.


GUI and Management

Sophos has always been very good at “simplicity of management”, and the Sophos Firewall OS keeps to that style. There are basically four areas of management with the XG:
Monitor and Analyze: Overview, Alerts, Reports
Protect: Policies, Rules, Security Features
Configure: Network Routing, VPN, etc
System: Device related management

Control Center – Overview


Do not mistake the simplistic design as a lack of features and security granularity. The XG has a LOT of pre-built policy and rule templates, as well as the ability to create your own.

Built-in Web Policies

Application Profiles


Little Gotchas and Things to Improve

There are a few things that were confusing and more complex than they should be, which I will briefly describe.

Using LAN Ports as Switch Access Ports:
I spent at least an hour trying to figure out how to use Ports 3-8 on the same subnet as my LAN traffic. After much trial and error, and even reaching out to Sophos Support, it was finally resolved by a local Sophos SE (thanks, Joe!) who has run into this before. Not only do you have to bridge the interfaces together, you also need to create a LAN to LAN firewall rule allowing the traffic. In hindsight you could say this is just an extra step to maintain security rather than a software issue, but if so, they should at least document it or train their support staff on how to set it up properly.

Bridge the LAN interfaces then apply the following firewall rule

LAN to LAN Firewall Rule

Default Security Policies:
This could also be considered an extra layer of security, but many multimedia websites/services were semi-broken with the default policies of the XG. For example, Netflix and Amazon Video would let you browse content but would error out when you attempted to play it. This also caused some issues with company website hosting services. The solution here was to use the “Allow All” web policy for all outgoing traffic. I am sure there is a more granular policy to use here, but with the limited testing I have done, that was the quick and dirty fix.


Final Thoughts

I have been VERY happy with what I have seen so far and am excited to continue digging into more. I wish I had another XG to test HA failover, and I would love to test out some of their wireless access points. I didn’t do much Wireless testing with the XG itself, since it usually doesn’t make sense to have the wireless enabled in the data center or server room, but I am very interested to see if their Access Points can replace some of the broader brands. In general, the XG is worthy of replacing most legacy vendors in the data center. The hardware is great, the security features are even better!

“Your computer can’t connect to the Remote Desktop Gateway server” error

A customer gave me access to their Remote Desktop Gateway server to do some after-hour consulting. Every time I attempted to connect from my Microsoft Surface Book, I got the following error:

Your computer can’t connect to the Remote Desktop Gateway server. Contact your network administrator for assistance.

I assumed my account was not set up correctly, but the customer was able to connect successfully with the account they assigned me. When I attempted to connect from my Desktop PC (same Windows 10 build as my Surface Book), I was able to connect successfully. The following registry edit fixed the issue for me, although I am still baffled as to why it is needed, since the entry doesn’t exist in my Desktop PC’s registry, which worked from the start.

  • Open Regedit
  • Go to HKCU\Software\Microsoft\Terminal Server Client\
  • Create a new DWORD (32-bit) called: RDGClientTransport
  • Give it a Value of: 1

As soon as I added that entry, I was able to connect. No reboot required.
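If you need to apply this fix to more than one machine, the same edit can be captured in a .reg file. This is just the registry steps above expressed as an importable file:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client]
"RDGClientTransport"=dword:00000001
```

Save it as something like `rdg-fix.reg` and double-click to import (or use `regedit /s` for silent deployment).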

Infinio Accelerator: Server-Side Caching for Insane Acceleration

Server-side caching isn’t a totally new concept, but it is a hot market right now as storage providers try to push the speed limits of their respective platforms. The 3D XPoint water-cooler talk is all the rage, even if the product isn’t yet available to its full potential.

Infinio is a server-side caching solution I have been benchmarking as a potential offering to customers, and I have been very impressed with the quick results. Being able to reduce read latency (by 4x in my case) in as little as 15 minutes is what sold me.

Infinio Accelerator is built on three fundamental principles:

  1. The highest-performance storage architecture is one where the hottest data is co-located with applications in the server
     As storage media has become increasingly fast, culminating in the ubiquity of flash devices, the network has become the new bottleneck. An architecture that serves I/O server-side provides performance that is significantly better than relying on lengthy round-trips to and from even the highest-performing network-based storage. By serving most I/O at server-side speed, as well as reducing demands on centralized arrays, Infinio can deliver 10X the IOPS and 20X lower latency than typical storage.
  2. A “memory-first” architecture is required to realize the best storage performance
     RAM is orders of magnitude faster than flash and SSDs, but is price-prohibitive for most datasets. Infinio’s solution to this problem is a content-based architecture, whose inline deduplication enables RAM to cache 5X-10X more data than its physical capacity. The option of evicting from RAM to a server-side flash tier (which may comprise PCIe flash, SSDs, or NVMe devices) offers additional caching capacity. By creating a tiered cache such as this, Infinio makes it practical to reduce the storage requirements on the server side to just 10% of the dataset. Long-term industry trends such as storage-class memory are another indication that a memory-first architecture is appropriate for this application.
  3. Delivering storage performance should be 100% headache-free
     Infinio’s software makes the use of server-side RAM and flash transparent to storage environments, supporting native storage features like snapshots and clones, as well as VMware integrations like VAAI and DRS. Infinio begins to provide value immediately after a non-disruptive, no-reboot, 15-minute installation. This is in sharp contrast to server-side flash devices used alone, which can provide impressive performance results but require significant maintenance and cumbersome data protection.

What does Infinio do exactly?

Infinio Accelerator is a software-based server-side cache that provides high performance to any storage system in a VMware environment. It increases IOPS and decreases latency by caching a copy of the hottest data on server-side resources such as RAM and flash devices. Native inline deduplication ensures that all local storage resources are used as efficiently as possible, reducing the cost of performance. Results can be seen instantly following the non-disruptive, 15-minute installation that doesn’t require any downtime, data migration, or reboots. And since roughly 70% of I/O requests are reads on average, most of your reads will come directly from super-fast RAM.

How does it actually work?

Infinio is built on VMware’s VAIO (vSphere APIs for I/O Filters) framework, which is the fastest and most secure way to intercept I/O coming from a virtual machine. Its benefits can be realized on any storage that VMware supports; in addition, integration with VMware features like DRS, SDRS, VAAI, and vMotion all continue to function the same way once Infinio is installed. Finally, future storage innovation that VMware releases will be available immediately through I/O Filter integration.

In short, Infinio is the most cost-effective and easiest way to add storage performance to a VMware environment. By bringing performance closer to applications, Infinio delivers:

  • 20X decrease in latency
  • 10X increase in throughput
  • Reduced storage performance costs ($/IOPS) and capacity costs ($/GB)

Final Thoughts

Honestly, there could not be an easier solution that provides results as dramatic as server-side caching. Deploying Infinio when you are in a performance jam provides immediate relief, and it should be part of your performance-enhancing arsenal. There is a free trial as well, and remember, there is no downtime to install or uninstall Infinio in your environment.

Please reach out to me, or your solution provider, NetWize IT Solutions, to learn more and test drive Infinio Accelerator.

Datrium Design – Architecture Matters

Lame Joke: What do you get when you stick NVMe-based SSD onto an All-Flash Array or Hyper-Converged Node?

Genuine Answer: A Bottleneck of course!

As flash technologies advance and increase in performance, existing (and upcoming) network infrastructure cannot meet the demands of next-gen NAND technologies such as 3D XPoint.
This chart compares saturation rates of 10GbE, 40GbE, and 100GbE with various flash offerings.


Datrium was founded by ex-founders and principal architects of companies like Data Domain and VMware, so it’s safe to say they know a thing or two about architecture. Their approach to overcoming some of the shortcomings in traditional converged and hyper-converged (HCI) platforms boils down to the following shift in architecture design:

Move the I/O processing to stateless compute nodes

Architectural Overview
There are basically two components to Datrium’s Open Convergence architecture.

Compute Nodes
Compute Nodes are servers of any brand the customer would like to use. The more RAM and flash these servers have, the more powerful the overall architecture. Each compute node gets Datrium’s DVX software installed into userspace on the hypervisor.
Every compute node is responsible for data services (deduplication, compression, erasure coding, and encryption). These nodes pull copies of data from the Data Nodes (the next component, which we will address shortly) and cache that data in a stateless fashion before it is sent down to the Data Nodes.

Data Nodes
The DVX Data Nodes are hybrid or all-flash disk enclosures purchased from Datrium (you can’t use your own Data Nodes). Since all data is processed on the compute nodes, there is no data processing happening at the data node layer; the Data Nodes only serve data that is not available in flash/cache on the compute nodes. The data that resides on the Data Nodes is heavily protected for resiliency.

Open Convergence is Datrium’s marketing term for this improved architecture, but taking the marketing out of the discussion, here is how Datrium solves for business outcomes:

  1. Simpler than HyperConverged
    – Zero HCI Cluster configuration or cluster sprawl
    – Independently and Simply provision compute or storage
    – Flexibly support any mix of hosts or hypervisors
    – No vendor lock-in on compute resources. Use existing compute hardware
  2. Faster than All-Flash Arrays
    – Flash is on the server, where it performs much faster
    – No Controller Bottlenecks
    – Performance scales with each server
  3. No Backup Silos
    – One console for VM consolidation and data protection
    – Reduce Management time for Backup, DR, Copy Data Management
    – Eliminate dedicated backup devices


If you need a lightning-fast, resilient, scalable, cloud-enabled architecture, Datrium might be exactly what you need. Because in the end, Architecture Matters.


SmartThings Home Automation – Laundry Alerting

I have tried to create a fully automated “Smart Home” using many technologies with integrated workflows and automation. Alerting when the washer and dryer have finished their cycles has been one of the most convenient automation features for my wife and me. I can’t tell you how many times we have started the laundry, forgotten about it, and had to rewash the sour wet clothes. Here is how we do it.

First, an explanation of how this works.

I have my washing machine and dryer each plugged into their own Z-Wave power-metering switch/plug. This gives me insight into how much energy each is using and when they are powered on vs. off. I use these plugs specifically: Zooz Zen15

When we start a load of laundry (washer or dryer), these Zooz power switches sense the energy being used, and the SmartThings hub assumes (correctly) that the laundry is being run. Since there will always be a tiny bit of power being drawn, even when the laundry isn’t running, it only assumes the laundry is on when power usage exceeds 10 watts. This power usage fluctuates during the cycle, especially for the washing machine, so the rule I have set in place monitors the usage and alerts my phone when the laundry is done. It knows the laundry is finished when the power usage drops below 8 watts for 4 minutes. BOOM! Perfect solution, and it works every time.
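The on/off hysteresis described above is simple enough to sketch in a few lines of Python. The thresholds and timing match the rule (above 10 W means running, below 8 W for 4 minutes means done); the sample readings in the usage note are made up for illustration:

```python
def laundry_events(samples, on_watts=10, off_watts=8, off_minutes=4):
    """Walk a list of power readings (watts, one per minute) and report
    when a cycle starts and when it is considered finished."""
    events = []
    running = False
    quiet = 0  # consecutive minutes below the off threshold
    for minute, watts in enumerate(samples):
        if not running and watts > on_watts:
            running = True
            quiet = 0
            events.append(("started", minute))
        elif running:
            # Brief dips (agitation pauses, drum reversals) reset to zero
            # only when power climbs back above the off threshold
            quiet = quiet + 1 if watts < off_watts else 0
            if quiet >= off_minutes:
                running = False
                events.append(("finished", minute))
    return events
```

For example, `laundry_events([2, 2, 150, 300, 5, 400, 5, 5, 5, 5, 2])` reports a start at minute 2 and, because the dip at minute 4 is interrupted by the 400 W spike, a finish only at minute 9, once four quiet minutes have accumulated.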

Here is what you will need to pull it off, and I assume if you are reading this, you are already a SmartThings user and have some idea of how the IDE works.

After you have added the “Better Laundry Monitor” device type in your SmartThings IDE, go into your SmartThings app, Marketplace, Smart Apps, and scroll down to My Apps.
See Video Below


FreeNAS Alerting with Amazon AWS SNS

When setting up alerting on FreeNAS 11.x, I chose to use AWS’ free SNS service. I was an SNS virgin before going through this, so I documented the procedure below.

  1. Assuming you already have an AWS account (even if you aren’t paying for any services), you can add the free SNS service to the account here: Amazon AWS SNS
  2. Upon logging into the SNS Dashboard, click “Create Topic”

  3. Give the topic a Name and Display Name

  4. Click “Create Subscription“. The topic ARN will already be filled out, so just select Email for Protocol, and put in the email address to receive the alerts. You will receive an email requesting you to click a link to confirm the subscription.

  5. After subscription confirmation, click on the subscription and note the Region and ARN, as this will be used in FreeNAS later.

  6. While still logged in with your AWS account, go to the AWS Identity and Access Management (IAM) console.
  7. Click on the Users Menu, and then Add User. Create a Username and Select Programmatic Access as the access type.

  8. For the user’s policies, select “AmazonSNSFullAccess“. (I am not sure if Full Access is required, but I didn’t have time to play around with the lowest permissions needed.)
  9. The final step on the AWS side is to make note of the Access Key ID and Secret Access Key that are automatically created under the IAM user you just created.

  10. Login to your FreeNAS management console and go to System > Alert Services. Click Add Alert Service, and have all that AWS info ready as follows:
    Service Name: AWS-SNS
    Region: (Region found on the SNS Subscription)
    ARN: (ARN found on the SNS Subscription)
    Key ID: (Found under the AWS IAM account you created)
    Secret Key: (Found under the AWS IAM account you created)

  11. Click OK (before you Send Test Alert), then click Edit on the Alert Service again; from there you can send a test alert. In my case, there was about a one-minute delay before the email came in.
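FreeNAS handles the publishing for you once the alert service is configured, but if you ever want to push to the same topic from a script, here is a hedged Python sketch using boto3. The topic ARN and region below are placeholders, and boto3 picks up the Access Key ID/Secret Access Key from the standard AWS credential chain (environment variables or ~/.aws/credentials):

```python
def sns_publish_params(topic_arn, subject, message):
    """Build the parameters for an SNS Publish call. SNS rejects
    subjects longer than 100 characters, so trim defensively."""
    return {"TopicArn": topic_arn, "Subject": subject[:100], "Message": message}

def send_alert(topic_arn, subject, message, region):
    """Publish an alert to the SNS topic (requires boto3 and valid
    AWS credentials; import is deferred so the helper above stays
    usable without boto3 installed)."""
    import boto3
    sns = boto3.client("sns", region_name=region)
    return sns.publish(**sns_publish_params(topic_arn, subject, message))
```

Example: `send_alert("arn:aws:sns:us-east-1:123456789012:freenas-alerts", "FreeNAS alert", "Pool degraded", "us-east-1")` (the account ID and topic name here are made up).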

pRDM and vRDM to VMDK Migrations

I was assisting an amazing client in moving some VMs off an older storage array and onto a newer storage platform. They had some VMs that had Physical RDMs (pRDM) attached to the VMs, and we wanted them living as VMDKs on the new SAN.
Traditionally, I have always shut down the VM, removed the pRDM, re-added it as a vRDM, and then done the migration, but I found an awesome write-up on a few different ways of doing this.
(Credit of the following content goes to Cormac Hogan of VMware)

VM with Physical (Pass-Thru) RDMs (Powered On – Storage vMotion):

  • If I try to change the format to thin or thick, then no Storage vMotion is allowed.
  • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN.


VM with Virtual (non Pass-Thru) RDMs (Powered On – Storage vMotion):

  • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM)


VM with Physical (Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I choose to change the format (via the advanced view), the pRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the pRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN


VM with Virtual (non Pass-Thru) RDMs (Powered Off – Cold Migration):

  • On a migrate, if I choose to convert the format in the advanced view, the vRDM is converted to a VMDK on the destination VMFS datastore.
  • If I choose not to do any conversion, only the vRDM mapping file is moved from the source VMFS datastore to the destination VMFS datastore – the data stays on the original LUN (same behaviour as pRDM).
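The four scenarios collapse into one simple rule. Here is my summary of that matrix, expressed in Python purely for readability:

```python
def rdm_migration_outcome(rdm_type, powered_on, convert_format):
    """Summarize what happens to an RDM's data during a migration.
    rdm_type: 'pRDM' or 'vRDM'; convert_format: True if a thin/thick
    format conversion is selected in the advanced view."""
    if not convert_format:
        # No conversion selected: only the mapping file moves,
        # the data stays on the original LUN
        return "mapping file moves only"
    if rdm_type == "pRDM" and powered_on:
        # Format conversion on a live pRDM is refused
        return "Storage vMotion not allowed"
    # vRDM (hot or cold) and powered-off pRDM both convert
    return "converted to VMDK on destination"
```

In other words: a vRDM can be converted hot or cold, while a pRDM must be powered off (cold migration) before a format conversion will turn it into a VMDK.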

Windows Defender Error “Unexpected error. Sorry, we ran into a problem. Please try again”

Since the latest Windows 10 Creators Update, I have been seeing some issues with Windows Defender alerting me that it cannot start. When I try to start the service, I get the following error:

Unexpected error. Sorry, we ran into a problem. Please try again


The trick was to edit some registry settings (of course). Open Registry Editor and go to:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender

Change the DisableAntiSpyware and DisableAntiVirus values from 1 to 0

Incidentally, I didn’t have an entry for DisableAntiVirus and had to create it
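For reference, both values can be set in one go with a .reg file equivalent to the steps above (importing it also creates DisableAntiVirus if it is missing):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows Defender]
"DisableAntiSpyware"=dword:00000000
"DisableAntiVirus"=dword:00000000
```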