Tag Archives: Compellent

VMware Storage I/O Control (SIOC) – A Blessing and a Curse

I’m taking this content straight from an email I just sent to a customer, so it isn’t well polished. But the email took me long enough to write that I decided to post it here for others.

Storage I/O Control (SIOC) is a mechanism to prevent one VM from hogging all the I/O resources and making the other VMs wait for their I/O requests to complete. By default, it gives every VM on a datastore fair and equal I/O shares, and it gauges fairness based on latency. So if you have two VMs (VM1 and VM2) and VM1’s latency hits a specified threshold (30 ms is the default), it will actually SLOW VM2’s I/O access and give scheduler resources back to VM1 until fair sharing is equalized again. This is different from QoS, but I’m sure you see some similarities.
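To make that mechanism concrete, here is a minimal Python sketch of the idea, not VMware’s actual algorithm: once observed datastore latency crosses the threshold, the host’s device queue slots get redistributed among VMs in proportion to their shares. The threshold matches the 30 ms default; everything else is an illustrative assumption.

```python
# Conceptual sketch of SIOC-style throttling -- NOT VMware's actual code.
# Threshold matches the 30 ms default; all other numbers are illustrative.

CONGESTION_THRESHOLD_MS = 30
TOTAL_QUEUE_SLOTS = 64  # host device queue depth (illustrative)

def rebalance(vm_shares, observed_latency_ms):
    """Redistribute device queue slots in proportion to each VM's shares
    once the datastore latency crosses the congestion threshold."""
    if observed_latency_ms < CONGESTION_THRESHOLD_MS:
        # No congestion: every VM may use the full queue.
        return {vm: TOTAL_QUEUE_SLOTS for vm in vm_shares}
    total = sum(vm_shares.values())
    return {vm: max(1, TOTAL_QUEUE_SLOTS * shares // total)
            for vm, shares in vm_shares.items()}

# Two VMs with equal (default) shares: under congestion each gets half.
print(rebalance({"VM1": 1000, "VM2": 1000}, observed_latency_ms=45))
```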

So that sounds great, right? (Really, it is great.) But in certain circumstances it is not very effective and can even be detrimental. I’ll try to explain.

First, let me preface this by explaining two concepts, which you may already be aware of.

  1. Hypervisors work via scheduled processes. Every VM waits for its turn in the scheduler to receive the CPU cycle or memory page it requested.
  2. Every volume you create and map to a host is given a LUN ID (the volume is the LUN), and each LUN has its own scheduler queue. All the VMs on a volume/LUN take turns having their I/O requests serviced. This is why best practice dictates a maximum of 10-15 VMs per volume, or far fewer if they are resource-intensive VMs. The more VMs on the LUN, the longer each VM waits for its I/O requests; the toy model below illustrates the effect. (Note: setting resource shares doesn’t solve this, it just guarantees one VM will have priority over another.)
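Here is that toy model: my own back-of-the-envelope illustration, not a vendor formula. Assume one queue per LUN, serviced round-robin, with every VM keeping one request outstanding and an arbitrary 2 ms service time per request:

```python
# Toy model: one I/O queue per LUN, serviced round-robin.
# The 2 ms service time is an arbitrary illustrative number.

def avg_wait_ms(vms_on_lun, service_time_ms=2.0):
    """Average time a request waits behind the other VMs' requests,
    assuming each VM keeps exactly one request outstanding."""
    return (vms_on_lun - 1) * service_time_ms

for n in (5, 10, 15, 30):
    print(f"{n:2d} VMs on the LUN -> ~{avg_wait_ms(n):.0f} ms extra wait per I/O")
```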

There are certain scenarios where SIOC can possibly make things worse. The scenario you might be running into is the following:
You have a SAN capable of tiered storage, which is really amazing when you think about how it all works. What’s even more incredible is that you can have different RAID types striped across the same physical disks. (Hot data lives on 15K drives in a RAID 10 stripe, and as it becomes warm, it moves into a RAID 5 stripe across those same physical 15K drives.)

Let’s take our VM1 and VM2, which both reside on the same LUN. We have enabled SIOC on that LUN. VM1 is a high-resource VM that is crucial to your business, and VM2 is just a test/dev server. Most of VM1’s blocks reside on your RAID 10 15K disks, but a few of its less-hot blocks have moved to RAID 5, still on those 15K drives. Again, VM1’s data is almost always hot.
VM2, on the other hand, has some of its blocks on the 15K drives, while some reside on the slower 7K drives, since that data is hardly ever accessed.

One day you log into VM2 and fire up an application whose data is on those 7K drives. That data naturally takes longer to retrieve, since it is sitting on the slowest media, and the time it takes to queue up and process that I/O request (latency) is much greater than the time it takes VM1 to process its requests.

What happens is that SIOC’s mechanism kicks in, because the latency in retrieving data for VM2 looks like a violation of its “fair access” policy. So it throttles down the I/O of VM1 (your production server) to try to decrease the latency VM2 is experiencing. You have essentially killed the performance of the VM that needs it the most. Now imagine this happening across all your VMs, VMDKs, bits, and blocks; it becomes a traffic nightmare. SIOC can throttle a VM down so much, waiting for the latency to decrease on the other VMs, that everything starts timing out, whereas if you weren’t using SIOC, things would be humming along as usual and VM2 would just take its sweet time processing data from the slow drives.

I am sure you were aware of most of these concepts, and what I have described is somewhat over-simplified, but hopefully that makes sense. Sharing workloads across the same physical drives can make SIOC a nightmare. If you are careful about which workloads you place on which LUN, then SIOC can be great, even on tiered storage. On an old EMC or NetApp array, where you used to carve out specific disks for specific volumes, SIOC would also be great.

Dell Compellent’s best practice is to use this feature with caution, just as others have stated as well.

 

Dell Storage Manager (DSM) Deployment

Dell Compellent’s Enterprise Manager is growing up and has been rebranded Dell Storage Manager, since it can now manage both SC and PS storage. DSM is available as a VMware appliance, and that is what we will use to deploy it.

First things first – You’ll need to get the download link from CoPilot, as it is not publicly available in Knowledge Center.

Once you have the DSMVirtualAppliance-16.xxxx.zip file, extract it and deploy the OVF file as you would any other appliance in VMware.
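Alternatively, the deployment can be scripted with VMware’s ovftool instead of the vSphere wizard. Here’s a sketch, assuming ovftool is installed and on your PATH; the VM name, datastore, network, OVF path, and vi:// inventory locator are all placeholders you’d adjust for your environment:

```python
# Sketch: scripted OVF deployment with VMware's ovftool.
# Every name, path, and the vi:// locator below is a placeholder.
import subprocess

cmd = [
    "ovftool",
    "--acceptAllEulas",
    "--name=DSM-Appliance",                 # inventory name for the VM
    "--datastore=Datastore01",              # target datastore
    "--network=VM Network",                 # target port group
    "DSMVirtualAppliance-16.xxxx/dsm.ovf",  # extracted OVF (placeholder path)
    "vi://administrator%40vsphere.local@vcenter.example.com/DC1/host/Cluster1/",
]
subprocess.run(cmd, check=True)  # ovftool prompts for the vCenter password
```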

Once deployed and running, you have a few options:
1- Download the Client, Admin Guide, etc. (do this by going to https://appliance_IP)
2- Run the Setup (https://appliance_IP/setup)

We are going to run the setup.
Start by hitting the URL https://appliance_IP/setup

Username: config
Password: dell
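Before opening that URL in a browser, it can help to confirm the appliance is actually answering. A small sketch, assuming the appliance ships with a self-signed certificate (hence verify=False) and reusing the appliance_IP placeholder from above:

```python
# Sketch: confirm the DSM appliance answers before opening it in a browser.
# "appliance_IP" is the same placeholder used above.
import requests
import urllib3

urllib3.disable_warnings()  # the appliance ships with a self-signed cert

resp = requests.get("https://appliance_IP/setup", verify=False, timeout=10)
print(resp.status_code)  # 200 means the setup page is up
```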

 

Add your existing SC and PS storage systems, and you’re ready to rock and roll.

Active Directory Integration – Enterprise Manager

If I had to list the top 10 questions new Compellent customers ask after a deployment, whether it’s possible to log in with Active Directory credentials would certainly be one. The answer is yes. And luckily, nowadays it’s an easy yes. In the past, it would have been easy to lie and say it wasn’t possible due to the complexity of the setup requirements, but now it is super straightforward. If you are looking for AD authentication, here we go..

Prereqs
– Each Controller should have a FQDN
– Each Controller should have an A Record in DNS
– Each Controller’s A Record should have a corresponding Reverse Lookup (PTR) record

I am assuming most can do the basic DNS prereqs, which is why I am not outlining those, but I may add them to the step-by-step guide in the future.
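If you do want to sanity-check those DNS prereqs quickly, here is a small Python sketch you can run from any machine that uses the same DNS servers; the controller FQDNs are illustrative placeholders:

```python
# Sketch: verify forward (A) and reverse (PTR) lookups for each controller.
# The controller FQDNs below are illustrative placeholders.
import socket

controllers = ["sc-ctrl1.EXLab.local", "sc-ctrl2.EXLab.local"]

for fqdn in controllers:
    ip = socket.gethostbyname(fqdn)            # A record
    ptr_name, _, _ = socket.gethostbyaddr(ip)  # PTR record
    status = "OK" if ptr_name.lower() == fqdn.lower() else "MISMATCH"
    print(f"{fqdn} -> {ip} -> {ptr_name} [{status}]")
```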

Step 1 – Make sure each Controller has DNS entries pointing to your internal AD DNS server

Open Storage Center

Expand Controllers

Right Click on Controller and select Properties

Click IP Tab and go to DNS – Make sure your internal DNS servers are entered there

Repeat this step for the other controller

Step 2 – Configure AD Authentication Services

In Storage Center, go to Storage Management > System > Access > Configure Authentication

Enable External Directory Services and enter the FQDN of each controller, separated by spaces

  • In the Directory Type dropdown, choose Active Directory.
  • In the URI field, make sure the FQDNs of the AD domain controller(s) are entered. Each FQDN should be prefaced with “ldap://” and names should be separated by spaces, e.g. “ldap://JS24.EXLab.local ldap://JS25.EXLab.local”. Note: Storage Center AD integration is not site aware, meaning it cannot automatically detect a domain and its associated domain controllers. To use a specific domain controller, it must be defined in the URI field. Storage Center will try to authenticate to domain controllers in the order they are defined in this field; if a domain controller becomes inaccessible, Storage Center will try the next one in the list.
  • Note: Storage Center AD integration supports authentication against a Read-Only Domain Controller (RODC).
  • In the Server Connection Timeout field, enter 30.
  • In the Base DN field, enter the distinguished name of the domain. For example, if your domain is EXLab.local, the distinguished name is “dc=EXLab,dc=local”.
  • (Optional) In the Relative Base field, enter the location, relative to the Base DN, where the Storage Center Active Directory object should be created. The default is CN=Computers.
  • In the Storage Center Hostname field, enter the Storage Center name followed by the domain name. This will be the FQDN of the Storage Center (e.g. SC22.EXLab.local).
  • In the LDAP Domain field, enter the name of the domain (e.g. EXLab.local).
  • In the Auth Bind Username field, enter the AD service account (created prior to setup) with rights to search the directory. The format of this field is username@domain (e.g. User_SrchOnly@EXLab.local).
  • In the Auth Bind Password field, enter the service account’s password. (A quick way to sanity-check this account is sketched below this list.)
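Before relying on Storage Center’s built-in test, it can be worth verifying that the bind account and LDAP URI work from outside Storage Center. A minimal sketch using the third-party ldap3 Python library; the hostname and credentials are this post’s example values, not real ones:

```python
# Sketch: verify the Auth Bind account against the same domain controller
# Storage Center will use. Requires the third-party "ldap3" package
# (pip install ldap3). Hostname and credentials are the example values.
from ldap3 import Server, Connection

server = Server("ldap://JS24.EXLab.local")
conn = Connection(server,
                  user="User_SrchOnly@EXLab.local",
                  password="service-account-password")

if conn.bind():
    print("Bind OK - the URI and service account look good.")
else:
    print("Bind failed:", conn.result)
conn.unbind()
```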

Test – If the test fails, troubleshoot DNS, then Continue.

Configure Kerberos Authentication
The values displayed will be the default values, and in most cases, can be left as is. If the defaults are modified, all values should be entered in UPPERCASE.

  • In the Domain Realms field, enter the domain name (e.g. EXLAB.LOCAL).
  • In the KDC Hostname field, specify a Kerberos server; this is usually a domain controller. (A quick reachability check is sketched below this list.)
  • In the Password Renew Rate (Days) field, leave the value at 15.
  • Continue
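One quick way to confirm the KDC hostname you entered is actually listening: Kerberos uses port 88, so a simple socket test tells you whether the controllers should be able to reach it. The hostname below is an illustrative placeholder:

```python
# Sketch: check that the KDC answers on the Kerberos port (88/tcp).
# The hostname is an illustrative placeholder.
import socket

with socket.create_connection(("JS24.EXLab.local", 88), timeout=5):
    print("KDC reachable on port 88")
```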

Enter credentials for a domain user that has rights to join objects to the domain. This is a one-time operation and does not require a service account.

Click Join Now and then Finish Now

Brocade Fibre Channel Zoning – Dell Compellent

There aren’t many good step-by-step zoning documents out on the internet, so I assume this post will be a success. This post explains how to do Fibre Channel zoning using any type of Brocade Fibre Channel switch. In this case, I am zoning in a Dell Compellent SAN, but these steps basically apply to any type of SAN.

Fibre Channel Zoning for Dell Compellent

After installing your FC switch, log into it by going to its IP address in a web browser. It requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page (each model varies slightly).

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear.

Here is where all the magic happens. In FC zoning, the goal is to create “VLAN-like” objects called zones that contain the WWNs of your server and storage HBAs.

Since I am configuring this for a Compellent SAN, the first thing I need to do is create an alias for all the physical WWNs. To do this, click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_Phy_WWNs_Alias.

Expand the WWNs on the left-hand side. Keep this window on the right side of your screen, with the Compellent Storage Center GUI open on the left-hand side and the Fibre Channel I/O cards expanded so you can see their WWNs.

Add all the physical WWNs you see in the switch that match up with the physical WWNs on the Compellent SAN. (Physical WWNs on the Compellent are the green objects.)
If you have a two-port card, you will only see two physical WWNs (per switch).
After you have added the two physical WWNs to the alias you created, you will need to do this exact same thing on your other switch, only this time using the OTHER Compellent physical WWNs you see in the list.

When finished, create a new alias and call it something like “SAN_Virt_WWNs_Alias”.
This time, follow the same steps as above, but add the virtual WWNs of the Compellent to this alias. The virtual WWNs are the ones in blue. Again, if you have a two-port FC card, there should only be two WWNs PER SWITCH. Repeat this process on your other switch for the other virtual WWNs.

Next, we create two zones: one zone that includes the alias of the physical WWNs and one zone that contains the alias of the virtual WWNs. To do this, click on the Zone tab and select New Zone.

Name the zones something like “SAN_Virt_WWNs” and “SAN_Phys_WWNs”.
In one zone, add JUST the “SAN_Virt_WWNs_Alias” alias, and in the other zone, JUST the “SAN_Phys_WWNs_Alias” alias.

Now for the servers: when you plug a server into the FC switch, you will see a new WWN.

Go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the second-level WWN object to this alias.

Next, go to the Zone tab and create a new zone, named something like “ServerName+SAN_WWNs”.
Add the server alias you created PLUS the “SAN_Virt_WWNs_Alias” alias.
You will need to make sure each server you connect to the SAN gets its own zone containing its server alias plus the SAN’s virtual WWNs alias.

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created to this zone config. This is basically one big file with all your settings.

Click Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the log pane at the bottom.
Then select Enable Config. Wait another 30 seconds for the settings to be enabled and take effect.

 

To recap, these are the aliases and zones you will need to create:

SAN_Phy_WWNs_Alias: alias (the Compellent physical WWNs)
SAN_Virt_WWNs_Alias: alias (the Compellent virtual WWNs)

SAN_Phys_WWNs: zone (contains the physical alias)
SAN_Virt_WWNs: zone (contains the virtual alias)

ServerName+SAN_WWNs: zone (each server’s alias plus the virtual alias)

Add all of those to your zone config.
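For reference, everything the GUI walkthrough above does can also be done from the switch CLI using the Brocade FOS commands alicreate, zonecreate, cfgcreate, cfgsave, and cfgenable. Below is a hedged sketch that pushes those commands over SSH with the third-party paramiko Python library; the switch address, credentials, and every WWN are placeholders, and I’ve used an underscore instead of “+” in the zone name to stay within FOS naming rules:

```python
# Sketch: CLI equivalent of the GUI zoning above, pushed over SSH with the
# third-party "paramiko" package (pip install paramiko).
# The switch IP, credentials, and all WWNs below are placeholders.
import paramiko

commands = [
    'alicreate "SAN_Phy_WWNs_Alias", "50:00:d3:10:00:00:00:01; 50:00:d3:10:00:00:00:02"',
    'alicreate "SAN_Virt_WWNs_Alias", "50:00:d3:10:00:00:00:03; 50:00:d3:10:00:00:00:04"',
    'alicreate "Server1_Alias", "21:00:00:24:ff:00:00:01"',
    'zonecreate "SAN_Phys_WWNs", "SAN_Phy_WWNs_Alias"',
    'zonecreate "SAN_Virt_WWNs", "SAN_Virt_WWNs_Alias"',
    'zonecreate "Server1_SAN_WWNs", "Server1_Alias; SAN_Virt_WWNs_Alias"',
    'cfgcreate "Prod_Cfg", "SAN_Phys_WWNs; SAN_Virt_WWNs; Server1_SAN_WWNs"',
    "cfgsave",
    'cfgenable "Prod_Cfg"',
]

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("192.0.2.10", username="admin", password="password")

for cmd in commands:
    stdin, stdout, stderr = ssh.exec_command(cmd)
    if cmd.startswith(("cfgsave", "cfgenable")):
        stdin.write("y\n")  # both commands prompt for confirmation
        stdin.flush()
    print(cmd, "->", stdout.read().decode().strip())

ssh.close()
```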


Dell Compellent Thin Import

This is a step-by-step guide I found at http://workinghardinit.wordpress.com/tag/thin-import/.
He did a great job outlining the Compellent Thin Import process.

 

A Hidden Gem in Compellent

As you might well know, I’m in the process of doing a multi-site SAN replacement project to modernize the infrastructure at an undisclosed organization. The purpose is to have a modern, feature-rich, reliable, and affordable storage solution that can provide the Windows Server 2012 rollout with modern features (ODX, SMI-S, …).

One of the nifty things you can do with a Compellent SAN is migrate LUNs from the old SAN to the Compellent SAN with absolutely minimal downtime. For us, this has proven a really good way of migrating away from two HP EVA 8000 SANs to our new Dell Compellent environment. We use it to migrate file servers, Exchange 2010 DAG member servers (zero downtime), Hyper-V clusters, SQL Servers, etc. It’s nothing less than a hidden gem not enough people are aware of, and it comes with the SAN. I was told by some that it was hard and not worth the effort … well, clearly they never used it and as such don’t know it. Or they work for competitors and want to keep this hidden ;-)

The Process

You have to set up the zoning on all SANs involved to all fabrics. This needs to be done right, of course, but I won’t be discussing that here. I want to focus on the process of what you can do. This is not a comprehensive how-to; it depends on your environment, and I can’t write you a migration manual without digging into that. And I can’t do that for free anyway; I need to eat and pay bills as well ;-)

Basically, you add your target Compellent SAN as a host to your legacy SAN (in our case, an HP EVA 8000) with an operating system type of “Unknown”. This provides us with a path to expose EVA LUNs to our Compellent SAN.


Depending on which server LUNs you are migrating, this is when you might have some short downtime for that LUN. If you have shared-nothing storage, as with an Exchange 2010 DAG or a SQL Server 2012 AlwaysOn Availability Group, you can do this without any downtime at all.

Stop any I/O to the LUN if you can (suspend copies, shut down databases and virtual machines) and take CSVs or disks offline. Do whatever is needed to prevent application and data issues; this varies.

We then unpresent the server’s LUN on the legacy SAN.


After a rescan of the disks on the server you’ll see that disk/LUN disappear.

We then present this same LUN to the Compellent host we added above.


We then “Scan for Disks” in the Compellent Controller GUI. This will detect the LUN as an unassigned disk. That unassigned disk can be mapped to an “External Device”, which we name after the LUN to keep things clear (using “Classify Disk as External Device”).


Then we right-click that External Device and choose “Restore Volume from External Device”.


This kicks off replication from the mapped EVA LUN to the Compellent target LUN. We can now map that replica to the host.


After this, rescan the disks on the server and voilà: the server sees the LUN again. Bring the disk/CSV back online and you’re good to go.


All the downtime you’ll have is at a well-defined moment in time that you choose. You can do this one LUN at a time or multiple LUNs at once. Just don’t overdo it with the number of concurrent migrations; keep an eye on the CPU usage of your controllers.

After the replication has completed, the Compellent SAN will transparently map the destination LUN to the server and remove the mapping for the replica.


The next step is that the mirror is reversed. That means that while this replica exists, data written to the Compellent LUN is also mirrored to the old SAN LUN, until you break the mirror.


Once you decide you’re done replicating and don’t want to keep both LUNs in sync anymore, you break the mirror.


You delete the remaining replica disk and release the external disk.


Now you unpresent the LUN from the Compellent host on your old SAN.


After a rescan, your disks will be shown as down under unassigned disks, and you can delete them there. This completes the cleanup after a LUN migration.


Conclusion

When set up properly, it works very well. Sure, it takes some experimenting to deal with some intricacies, but once you figure all that out, you’re good to go and ready to deal with any hiccups that might occur. The main takeaway is that this provides minimal downtime, at a moment that you choose, and you get it out of the box with your Compellent. That’s a pretty good deal, I say!


Compellent Dual Controller to Single Controller Conversion

I went on an install the other day where a client tried to upgrade from a Series 20 dual-controller SAN to a Series 40 single-controller array. It was a test lab of theirs, and they didn’t feel they needed an additional controller.

Turns out, you cannot do an “upgrade” like this. The client will need Dell to provide them a single-controller license instead of the old dual-controller license. And because you cannot upgrade, you have to set up the new array as you would for a new client, and then do a Thin Import from the old array.

Crazy, I know, but I guess that is how it has to be done.
