
VMware Storage I/O Control (SIOC) – A Blessing and a Curse

I am taking this content straight from an email I just sent a customer, so it isn’t well polished. But the email took me long enough to write that I decided to post it here for others.

Storage I/O Control (SIOC) is a mechanism to prevent one VM from hogging all the I/O resources while the other VMs wait for their I/O requests to be completed. By default, it gives every VM on a datastore fair and equal I/O shares. It gauges fairness based on latency. So if you have two VMs (VM1 and VM2) and VM1’s latency hits a specified threshold (30 ms is the default), SIOC will actually SLOW VM2’s I/O access and give the scheduler resources back to VM1 until fair sharing is equalized again. This is different from QoS, but I’m sure you see some similarities.
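If you want to try this on your own datastores, here is a minimal PowerCLI sketch for enabling SIOC and setting the latency threshold. The datastore name is a placeholder, and the same settings are available in the vSphere Client under the datastore’s configuration:

  # Enable Storage I/O Control on a (hypothetical) datastore and set the
  # congestion threshold that triggers throttling; 30 ms is the default.
  Get-Datastore -Name "Datastore01" |
      Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30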

So that sounds great, right? (Really, it is great.) But it is not always effective and can be detrimental in certain circumstances. I’ll try to explain.

First, let me preface this by explaining two concepts, which you may already be aware of.

  1. Hypervisors work via a scheduled process. Every VM waits its turn in the scheduler to receive the CPU cycles or memory pages it requested.
  2. Every volume you create and map to a host is given a LUN ID (the volume is the LUN), and each LUN has access to schedulers. All the VMs on that volume/LUN take turns with their I/O requests. This is why best practice dictates a maximum of 10-15 VMs per volume, or far fewer if they are resource-intensive VMs. The more VMs on the LUN, the longer each VM has to wait for its I/O requests. (Note: setting resource shares doesn’t solve this; it just guarantees one VM will have priority over another, as in the sketch below.)
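To illustrate that last note, here is a hedged PowerCLI sketch for setting per-VM disk shares; the VM name is a placeholder. Shares only decide who wins priority in the queue, they don’t make the LUN’s queue any deeper:

  # Give VM1's disks High shares so it gets scheduler priority on the LUN.
  # This reorders the waiting line; it does not shorten it.
  Get-VM -Name "VM1" | Get-VMResourceConfiguration |
      Set-VMResourceConfiguration -Disk (Get-HardDisk -VM "VM1") -DiskSharesLevel High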

There are certain scenarios where SIOC can actually make things worse. The scenario you might be running into is the following:
You have a SAN capable of tiered storage, which is really amazing when you think about how it all works. What’s even more incredible is that different RAID types can be striped across the same physical disks. (Hot data lives on 15K drives in a RAID 10 stripe, and as it becomes warm, it moves into a RAID 5 stripe across those same physical 15K drives.)

Let’s take our VM1 and VM2, which both reside on the same LUN, and we have enabled SIOC on that LUN. VM1 is a high-resource VM that is crucial to your business, and VM2 is just a test/dev server. Most of VM1’s blocks reside on your 15K RAID 10 tier, but a few of its less-hot blocks have moved to RAID 5, still on those same 15K drives. Again, VM1’s data is almost always hot.
VM2, on the other hand, has some of its blocks on the 15K drives, while some reside on the slower 7K drives, since that data is hardly ever accessed.

One day you log into VM2 and fire up an application whose data is on those 7K drives. That data naturally takes longer to retrieve, since it is sitting on the slowest media, and the time it takes to queue up and process that I/O request (latency) is much greater than the time it takes VM1 to process its requests.

What happens is that SIOC kicks in, because the latency of retrieving VM2’s data trips its “fair access” mechanism. So it throttles down the I/O of VM1 (your production server) to try to decrease the latency VM2 is experiencing. You have essentially killed the performance of the VM that needs it the most. Now imagine this happening across all your VMs, VMDKs, bits, blocks, whatever you want to include; it becomes a traffic nightmare. SIOC can throttle a VM down so much, waiting for the latency to decrease on the other VMs, that everything starts timing out, whereas if you weren’t using SIOC, things would be humming along as usual and VM2 would just take its sweet time processing data from the slow drives.

I am sure you were aware of most of these concepts, and what I have described is somewhat over-simplified, but hopefully it makes sense. Sharing workloads across the same physical drives can make SIOC a nightmare. If you are careful about which workloads you place on which LUN, then SIOC can be great, even on tiered storage. If you take an old EMC or NetApp where you used to carve out specific disks for specific volumes, SIOC would also be great.

Dell Compellent’s best practice is to use this feature with caution, just as others have stated as well.


Dell Storage Manager (DSM) Deployment

Dell Compellent’s Enterprise Manager is growing up and has been rebranded as Dell Storage Manager, since it can now manage both SC Series and PS Series storage. DSM is available as a VMware appliance, and that is what we will use to deploy it.

First things first – You’ll need to get the download link from CoPilot, as it is not publicly available in Knowledge Center.

Once you have the DSMVirtualAppliance-16.xxxx.zip file, extract it and deploy the OVF file as you would any other appliance in VMware.
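If you prefer the command line to the vSphere Client’s Deploy OVF Template wizard, a minimal ovftool sketch looks something like this (the vCenter address, credentials, inventory path, datastore, and network name are all placeholder values):

  ovftool --acceptAllEulas --name=DSM --datastore=Datastore01 --network="VM Network" DSMVirtualAppliance.ovf vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/host/Cluster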

Once deployed and running, you have a few options:
1- Download the client, admin guide, etc. (do this by going to https://appliance_IP)
2- Run the setup (https://appliance_IP/setup)

We are going to run the setup.
Start by hitting the URL https://appliance_IP/setup and log in with the default credentials:

Username: config
Password: dell


Add your existing SC and PS Series storage systems and you’re ready to rock and roll.

Dell DPACK 2.0

For those not familiar with the “Dell Performance Analysis Collection Kit” (DPACK), it is a pretty incredible tool that allows you to visualize your current storage and server workloads from the perspective of the host. It is a great tool for planning capacity and I/O requirements, and it can even be used to troubleshoot problems with your storage and storage network. DPACK measures the following:
– Disk I/O
– Throughput (which is more important than IOPS when we’re talking flash)
– Capacity
– Memory consumption on your servers
– CPU utilization on your servers
– Network traffic
– Queue depths

Version 2.0 gives you real-time statistics you can view as they are collected, instead of having to wait the 24 hours you previously had to let it run. Version 2.0 also gives you better views into your data for additional insight, and it presents that data in a form executives can appreciate when you go to them with a PO request.

Some other things to know about DPACK 2.0 are:
– Uses HTML5 for viewing real-time data in a browser
– Generates PDFs of collected data to present to management
– Compresses analyzed data and transmits it to the server every 5 minutes
– Uses secure SSL on port 443
– Continues to run and collect data in the event it loses the ability to upload
– Isn’t performance-impacting, so it can be (and should be) run during business hours

So how do you get started with DPACK 2.0? It’s best to call up your local Dell storage team and discuss DPACK. Netwize helps customers run DPACK all the time and is a great resource for any Dell data center needs. I work for Netwize, so please feel free to reach out to us and we will get you set up.

Running Dell DPACK longer than 24hrs

If you have ever used the Dell DPACK utility to analyze your storage, you’ll find that the application only gives you the ability to scan for 24 hours. Dell does this because they claim that, statistically, DPACK results don’t vary much from one day to the next. Having run hundreds and hundreds of DPACK scans for many customers, I find that more often than not, the results from different days vary enough to warrant longer monitoring times. My advice is to run your DPACK for 3-4 days, and here is how you do it.

First, download the DPACK tool from http://dell.com/dpack
Extract the software.
Open a Command Prompt and change directory to the extracted DPACK folder.
Run the following command: dellpack.exe /extended
[Screenshot: Command Prompt running dellpack.exe /extended]

Now you should be able to change the monitoring duration:
[Screenshot: the monitoring duration selection dialog]

Brocade Fibre Channel Zoning – Dell Compellent

There aren’t many good step-by-step zoning documents out on the internet, so I assume this post will be a success. This post will explain how to do Fibre Channel zoning on any type of Brocade Fibre Channel switch. In this case, I am zoning in a Dell Compellent SAN, but these steps basically apply to any type of SAN.

Fibre Channel Zoning for Dell Compellent

After installing your FC switch, log in to it by going to its IP address in a web browser. The interface requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page, which will look something like this (each model varies slightly):

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear that looks like this:

Here is where all the magic happens. In FC zoning, the goal is to create “VLAN-like” objects called zones that contain the WWNs of your server and storage HBAs.

Since I am configuring this for a Compellent SAN, the first thing I need to do is create an alias for all the physical WWNs. To do this, I click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_Phy_WWNs_Alias.

Expand the WWNs on the left-hand side. Keep this window on the right side of your screen, with the Compellent Storage Center GUI open on the left side and the Fibre Channel I/O cards expanded so you can see their WWNs.

Add all the physical WWNs you see in the switch that match up with the physical WWNs on the Compellent SAN. (Physical WWNs on the Compellent are the green objects.)
If you have a two-port card, you will only see two physical WWNs (per switch).
After you have added the two physical WWNs to the alias you created, you will need to do this exact same thing on your other switch, only this time you will use the OTHER Compellent physical WWNs you see in the list.

When finished, create a new alias and call it something like “SAN_Virt_WWNs_Alias”.
This time you will follow the same steps as above, but you will be adding the virtual WWNs of the Compellent to this alias. The virtual WWNs are the ones in blue. Again, if you have a two-port FC card, there should only be two WWNs PER SWITCH. Repeat this process on your other switch for the other virtual WWNs.

Next we create two zones: one zone that includes the alias of the physical WWNs, and one zone that contains the alias of the virtual WWNs. To do this, click on the Zone tab and select New Zone.

Name the zones something like “SAN_Virt_WWNs” and “SAN_Phys_WWNs”.
In one zone add JUST the “SAN_Virt_WWNs_Alias” alias, and in the other zone JUST the “SAN_Phy_WWNs_Alias” alias.

Now for the servers: when you plug a server into the FC switch, you will see a new WWN.

You need to go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the second-level WWN object to this alias.

Next, go to the Zone tab and create a new zone, named something like “ServerName+SAN_WWNs”.
Add the server alias you created PLUS the “SAN_Virt_WWNs_Alias” alias.
You will need to make sure each server you connect to the SAN has its own zone containing its server alias plus the SAN’s virtual WWNs alias.

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created into this zone config. It is basically a big file with all your settings.

Click on Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the bottom log screen.
Then select Enable Config, and wait another 30 seconds for the settings to be enabled and take effect.


To recap, these are the aliases and zones you will need to create:

SAN_Phy_WWNs_Alias: alias containing the Compellent physical WWNs
SAN_Virt_WWNs_Alias: alias containing the Compellent virtual WWNs

SAN_Phys_WWNs: zone containing the physical alias
SAN_Virt_WWNs: zone containing the virtual alias

ServerName+SAN_WWNs: one zone per server, containing the server alias plus the virtual WWNs alias

Add all of those to your zone config.
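If you would rather script this than click through the Java GUI, the same objects can be created over SSH using the Brocade FOS zoning commands. This is only a sketch: the WWNs are made up, the config name is hypothetical, and the server zone name uses an underscore instead of the “+” above, since FOS object names only allow letters, numbers, and underscores.

  alicreate "SAN_Phy_WWNs_Alias", "50:00:d3:10:00:11:22:01; 50:00:d3:10:00:11:22:02"
  alicreate "SAN_Virt_WWNs_Alias", "50:00:d3:10:00:11:22:05; 50:00:d3:10:00:11:22:06"
  alicreate "Server01_Alias", "21:00:00:24:ff:33:44:01"
  zonecreate "SAN_Phys_WWNs", "SAN_Phy_WWNs_Alias"
  zonecreate "SAN_Virt_WWNs", "SAN_Virt_WWNs_Alias"
  zonecreate "Server01_SAN_WWNs", "Server01_Alias; SAN_Virt_WWNs_Alias"
  cfgcreate "Production_Cfg", "SAN_Phys_WWNs; SAN_Virt_WWNs; Server01_SAN_WWNs"
  cfgsave
  cfgenable "Production_Cfg"

cfgsave and cfgenable are the CLI equivalents of the Save Config and Enable Config buttons described above.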

If you found this article to be helpful, please support us by visiting our sponsors’ websites. 

Compellent Dual Controller to Single Controller Conversion

I went on an install the other day where a client tried to upgrade from a Series 20 dual-controller SAN to a Series 40 single-controller array. It was a test lab of theirs, and they didn’t feel they needed the additional controller.

It turns out you cannot do an “upgrade” like this. The client will need Dell to provide them a single-controller license in place of the old dual-controller license. And because you cannot upgrade in place, you have to set up the new array as you would for a new client, and then do a Thin Import from the older array.

Crazy, I know, but I guess that is how it has to be done.

If you found this article to be helpful, please support us by visiting our sponsors’ websites.