Category Archives: Storage

Running Dell DPACK longer than 24hrs

If you have ever used the Dell DPACK utility to analyze your storage, you’ll find that the application only gives you the ability to scan for 24 hours. Dell does this because they claim that, statistically, DPACK results don’t vary much from one day to the next. Having run hundreds and hundreds of DPACK scans for many customers, I find that more often than not, the results from different days vary enough to warrant a longer monitoring window. My advice is to run your DPACK for 3-4 days. Here is how you do it:

First download the DPACK tool from http://dell.com/dpack
Extract the Software
Open a Command Prompt and change directory to the extracted DPACK folder
Run the following command: dellpack.exe /extended
[Screenshot: Command Prompt running dellpack.exe /extended]
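Put together, the whole sequence from the Command Prompt is just the following (assuming you extracted the tool to C:\DPACK; substitute your own extraction path):

cd C:\DPACK
dellpack.exe /extended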

Now you should be able to change the monitoring duration:
[Screenshot: Monitoring duration selection]

VMware vSphere iSCSI Port Binding – You’re Probably Doing it Wrong

As a consultant, I have the opportunity to see a lot of different operating environments from a variety of customers. At a high level, most customers have the same data center infrastructure (servers, storage, virtualization, etc.). Although the configurations of these environments vary, there is one configuration mistake I see many of these customers make – iSCSI port binding.

For those unfamiliar with iSCSI port binding: port binding binds/glues the iSCSI initiator interface on the ESXi host to a vmknic to allow for iSCSI multipathing. Binding itself technically doesn’t “allow multipathing” – just having multiple adapters can do that. But if you have multiple adapters/VMkernel ports for iSCSI on the SAME subnet/broadcast domain, binding allows multiple paths to an iSCSI array that presents one single IP address.

Why do I need to bind my initiator to a VMkernel port anyway?
When you have multiple iSCSI adapters on the same subnet, there is really no control over where data flows or which adapter traffic leaves from. You literally flood that network with rogue packets.
* Note: I am trying to make this easy to understand for those that don’t have deep technical experience on this subject. In doing so, I am only telling half-truths here to keep things simple. Don’t call me out on this 🙂

When should you enable iSCSI Port Binding?

iSCSI port binding should ONLY be used when you have multiple VMkernel ports on the SAME subnet.

In this scenario, where multiple VMkernel ports are on the same subnet and broadcast domain, you MUST use port binding! If you do not, you may experience the following:
- Unable to see iSCSI storage on ESXi
- Paths to storage reported as Dead
- Loss of Path Redundancy errors

Keep the following in mind when using port binding:
- iSCSI port binding bypasses some vSwitch functionality (no data path, no acceleration)
- Array target ports must reside in the same broadcast domain and subnet as the VMkernel ports
- All VMkernel ports used for iSCSI must reside in the same broadcast domain and subnet
- All VMkernel ports used for iSCSI must reside in the same vSwitch
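If your environment matches this single-subnet design, port binding can be configured from the vSphere Client or from the command line. Here is a minimal esxcli sketch; the adapter name (vmhba33) and VMkernel names (vmk1, vmk2) are placeholders, so check your actual names first:

# confirm your software iSCSI adapter and VMkernel port names
esxcli iscsi adapter list
esxcli network ip interface list
# bind each iSCSI VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
# verify the bindings, then rescan
esxcli iscsi networkportal list --adapter vmhba33
esxcli storage core adapter rescan --adapter vmhba33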

When should you NOT enable iSCSI Port Binding?

Do not enable port binding if:
- Array target ports are in a different broadcast domain and subnet
- iSCSI VMkernel ports exist in different broadcast domains, subnets and/or vSwitches
- Routing is required to reach the array
- LACP/link aggregation is used on the ESXi host uplinks to the physical switch (pSwitch)

In the scenario above, you should NOT use port binding. If you do, you may experience:
- Rescan times that take longer than usual
- An incorrect number of paths per device
- Inability to see any storage from the array

So why do I say you are probably doing it wrong? Most storage arrays use the second example as a best practice for multipathing to the array. Most customers follow those best practices and use two VMkernel ports on different subnets to connect to their arrays. But most people still enable port binding!
If you are guilty of this, you can easily remove the existing port bindings. Doing so will cause a temporary loss of access to your storage, so make sure all VMs are shut down and you have a maintenance window.
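For example, the bindings can be removed with esxcli. This is a minimal sketch assuming the software iSCSI adapter is vmhba33 and the bound ports are vmk1 and vmk2 (list first to confirm your actual names):

# list the current bindings to confirm adapter and vmk names
esxcli iscsi networkportal list --adapter vmhba33
# remove each bound VMkernel port
esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk2
# rescan so the paths are rediscovered without the bindings
esxcli storage core adapter rescan --adapter vmhba33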

Now you know!


ExaGrid Root Username and Password

Do you know your way around Linux and would like to do some advanced troubleshooting on your ExaGrid? You’re going to need an SSH username and password in order to gain access. Fortunately for you, we have the credentials. So PuTTY into your ExaGrid and log in with the following:

Username: root
Password: inflection
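If you aren’t on Windows with PuTTY, any SSH client will do. For example (the address below is just a placeholder for your ExaGrid’s management IP):

ssh root@<exagrid-management-ip>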


Calculate SAN Disk Performance

Need an easy, free tool to calculate disk performance on your SAN? CrystalDiskMark might be just the thing you need. It’s super lightweight and runs a variety of tests with options you can choose.

How do you interpret the tests?

Sequential: reads/writes whatever file size you choose when you start the test, sequentially. That is to say, it starts writing on a sector and then writes the next part on the adjacent sector, and so on. This is the fastest test because the head on an HDD doesn’t have to move around much, as all the sectors are adjacent.

512K: CDM reads/writes to random sectors on the drive, but it reads/writes 512KB of data at a random point, then moves to the next random point. This is faster than 4K because more data is read/written with less movement of the head.

4K: The same as above, but instead of reading/writing the test data in 512KB chunks, it reads/writes in 4KB chunks.

4KQD32: The same as 4K, but more requests are queued up to the HDD controller (a queue depth of 32). I’m told that some HDDs increase performance when this happens because of the way their controller logic works, but I think this mostly applies to SSDs rather than mechanical drives.

It’s a great way to see the throughput of your hardware. You can download it from here: http://crystalmark.info/download/index-e.html#CrystalDiskMark


Install and Configure Hyper-V 2012 on Dell EqualLogic

When attempting my first install of Hyper-V 2012, I found that the web and forums had very little help on installing and configuring Hyper-V 2012. There seem to be many holes and empty steps left in posts by other users. Hopefully this post will make the install go a little smoother for you.

——————————————

Here is the Setup:
Server with 4 Physical NICs
NIC1: Client/Network
NIC2: vMotion
NIC3: ISCSI
NIC4: ISCSI
Local hard drives in a RAID 1 mirror for the Hyper-V install
EqualLogic 6100 running 6.0.2 firmware

First things first, we start off with installing Windows Server 2012 (pretty straightforward).

After you install Windows, install all the latest updates, assign IP info to the primary NIC, join the domain, and so on. Then it’s time to install a couple of features.

The first thing we install is the Hyper-V role:

On the Server Manager Dashboard, click “Add Roles and Features”.

Click Next through the first few pages of the wizard, taking the defaults.

Select Hyper-V, click Install (taking all the defaults), and reboot the server.
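If you’d rather skip the wizard, the same role can be installed from PowerShell with one line. This is just a sketch; it assumes you’re fine with the defaults and an immediate reboot:

# installs the Hyper-V role plus the management tools, then reboots
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart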

On each of my Hyper-V servers, hit the Windows Key and type: iSCSI
You should see the iSCSI Initiator come up and prompt you to enable it. Please do so.
Once enabled, click the Configuration tab of the iSCSI Initiator and copy the IQN name for the server:

This IQN is unique to this server and covers all of the NICs installed in this server
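As a side note, you can also enable the iSCSI initiator service and read the IQN from PowerShell. A rough sketch (Get-InitiatorPort ships with the in-box Storage module on Server 2012; if the filter below doesn’t match on your build, plain Get-InitiatorPort will still show the node address):

# set the Microsoft iSCSI Initiator service to start automatically and start it now
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
# the NodeAddress of the iSCSI initiator port is the server's IQN
(Get-InitiatorPort | Where-Object {$_.ConnectionType -eq 'iSCSI'}).NodeAddress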


When the server is done rebooting, go back into Roles and Features.


Brocade Fibre Channel Zoning

So you want to learn how to zone your Fibre Channel switches? This post will describe how to do zoning through any Brocade Fibre Channel switch.

After installing your FC switch and assigning it an IP address, log in to it by browsing to that IP address. The web tool requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page, which will look something like this (each model varies slightly):

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear and look like this:

Here is where all the magic happens. In FC Zoning, the goal is to create “VLAN-Like” objects called zones that contain the WWNs of your HBAs and Storage.

First off, let’s zone in your SAN. Make sure the only cables plugged into your Fibre Channel switch are those from your SAN (this will make explaining things easier).
We need to create an alias for the WWNs of your SAN. To do this, click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_WWNs_Alias.

Expand the WWNs on the left-hand side. You’ll want to click the + on the WWNs to view the second-level objects. Add those objects to the new alias you created. (See the image above or below for reference.)

After you have the SAN alias created and have added the WWNs, click on the “Zone” tab.
Now we will create a new zone and add the alias we just created to the zone.
Click the “New Zone” button and give it a name like “SAN_WWNs_Zone”.
Expand the aliases on the left and add the alias you created to this zone.
If you have a two-port FC card, there should only be two WWNs per switch. Repeat this process on your other switch.

Now for the servers. We will want to plug the servers in one at a time, zone each server, and then plug in the next server. This will help us identify which server is which.
When you plug a server into the FC switch, you will see a new WWN.

You need to go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the second-level WWN object to this alias.

Next, go to the “Zone” tab and create a new zone, named something like “ServerName+SAN_WWNs”.
Add both the server alias you created and the “SAN_WWNs_Alias” to this zone.

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created to this zone config. This is basically one big file with all your settings.

Click Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the bottom log screen.
Then select Enable Config. Wait another 30 seconds for the settings to be enabled and take effect.
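If you’d rather skip the Java GUI, the same zoning can be built from the switch’s command line over SSH using the standard Brocade FOS commands. Here is a rough sketch with made-up alias/zone/config names and placeholder WWNs (swap in your own; cfgsave and cfgenable will each ask you to confirm):

alicreate "SAN_WWNs_Alias", "50:00:00:00:aa:bb:cc:01; 50:00:00:00:aa:bb:cc:02"
alicreate "Server01_Alias", "21:00:00:1b:32:aa:bb:cc"
zonecreate "Server01_SAN_WWNs_Zone", "Server01_Alias; SAN_WWNs_Alias"
cfgcreate "Main_Zone_Cfg", "Server01_SAN_WWNs_Zone"
cfgsave
cfgenable "Main_Zone_Cfg"

Later servers can be added the same way and appended to the existing config with cfgadd "Main_Zone_Cfg", "NewZoneName", followed by another cfgsave and cfgenable.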


Brocade Fibre Channel Zoning – Dell Compellent

There aren’t many good step-by-step zoning documents out on the internet, so I hope this post will be a success. This post will explain how to do Fibre Channel zoning using any type of Brocade Fibre Channel switch. In this case, I am zoning in a Dell Compellent SAN, but these steps basically apply to any type of SAN.

Fibre Channel Zoning for Dell Compellent

After installing your FC switch, log in to it by going to the IP address in a web browser. It requires a specific version of Java, and I have found it works better in Firefox than in any other browser.

Once logged into the switch, you should be presented with the main Switch Admin page, which will look something like this (each model varies slightly):

Click Configure at the top of the screen and choose “Zone Admin”. A new window will appear and look like this:

Here is where all the magic happens. In FC Zoning, the goal is to create “VLAN-Like” objects called zones that contain the WWNs of your Server and Storage HBAs.

Since I am configuring this for a Compellent SAN, the first thing I need to do is create an alias for all the physical WWNs. To do this, click on the Alias tab and select the “New Alias” button.

Give your alias a descriptive name, like SAN_Phy_WWNs_Alias.

Expand the WWNs on the left-hand side. Keep this window on the right side of your screen, with the Compellent Storage Center GUI open on the left-hand side and the Fibre Channel IO cards expanded, so you can see their WWNs.

Add all the physical WWNs you see in the switch that match up with the physical WWNs on the Compellent SAN (physical WWNs on the Compellent are the green objects).
If you have a two-port card, you will only see two physical WWNs per switch.
After you have added the two physical WWNs to the alias you created, you will need to do the exact same thing on your other switch, only this time you will use the OTHER Compellent physical WWNs you see in the list.

When finished, create a new alias and call it something like “SAN_Virt_WWNs_Alias”.
This time you will follow the same steps as above, but you will be adding the virtual WWNs of the Compellent into this alias. The virtual WWNs are the ones in blue. Again, if you have a two-port FC card, there should only be two WWNs PER SWITCH. Repeat this process on your other switch for the other virtual WWNs.

Next we create two zones: one zone that includes the alias of the physical WWNs and one zone that contains the alias of the virtual WWNs. To do this, click on the Zone tab and select New Zone.

Name the zones something like “SAN_Virt_WWNs” and “SAN_Phys_WWNs”.
In one zone add JUST the “SAN_Virt_WWNs_Alias”, and in the other zone add JUST the “SAN_Phy_WWNs_Alias”.

Now for the servers. When you plug a server into the FC switch, you will see a new WWN.

You need to go to the Alias tab, create a new alias, and name it something like “ServerName”.
Expand the WWN and add the second-level WWN object to this alias.

Next, go to the Zone tab and create a new zone, named something like “ServerName+SAN_WWNs”.
Add the server alias you created PLUS the “SAN_Virt_WWNs_Alias”.
You will need to make sure each server you connect to the SAN gets its own zone containing its server alias plus the SAN’s virtual WWNs alias.

Finally, click on the Zone Config tab and create a new zone config. Add all the zones you created to this zone config. This is basically one big file with all your settings.

Click Save Config at the top and wait about 30 seconds for the changes to be saved. You’ll see a success message in the bottom log screen.
Then select Enable Config. Wait another 30 seconds for the settings to be enabled and take effect.


To recap, these are the aliases and zones you will need to create:

Compellent_Phy_WWNs: Alias
Compellent_Virt_WWNs: Alias

Compellent_Phy_Alias: Zone
Compellent_Virt_Alias: Zone

ServerWWN+Compellent_Virt_WWN: Zone

Add all those to your zone config.
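For reference, here is roughly what that recap looks like from the FOS command line. This is only a sketch: the WWNs are placeholders, the config name is made up, and I’ve stuck to letters, numbers, and underscores in the names:

alicreate "Compellent_Phy_WWNs", "50:00:d3:10:00:aa:bb:01; 50:00:d3:10:00:aa:bb:02"
alicreate "Compellent_Virt_WWNs", "50:00:d3:10:00:aa:bb:11; 50:00:d3:10:00:aa:bb:12"
alicreate "Server01", "21:00:00:1b:32:cc:dd:01"
zonecreate "Compellent_Phy_Zone", "Compellent_Phy_WWNs"
zonecreate "Compellent_Virt_Zone", "Compellent_Virt_WWNs"
zonecreate "Server01_Compellent_Virt", "Server01; Compellent_Virt_WWNs"
cfgcreate "Prod_Zone_Cfg", "Compellent_Phy_Zone; Compellent_Virt_Zone; Server01_Compellent_Virt"
cfgsave
cfgenable "Prod_Zone_Cfg"

Remember this has to be done on both switches, with each switch using its own set of Compellent physical and virtual WWNs.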


Fibre Channel vs ISCSI

In the beginning there was Fibre Channel (FC), and it was good. If you wanted a true SAN — versus shared direct-attached SCSI storage — FC is what you got. But FC was terribly expensive, requiring dedicated switches and host bus adapters, and it was difficult to support in geographically distributed environments. Then, around six or seven years ago, iSCSI hit the SMB market in a big way and slowly began its climb into the enterprise.

The intervening time has seen a lot of ill-informed wrangling about which one is better. Sometimes, the iSCSI-vs.-FC debate has reached the level of a religious war.

This battle has been a result of two main factors. First, the storage market was split between big incumbent storage vendors, who had made heavy investments in FC, marketing against younger vendors with low-cost, iSCSI-only offerings. Second, admins tend to like what they know and distrust what they don’t. If you’ve run FC SANs for years, you are likely to believe that iSCSI is a slow, unreliable architecture and would sooner die than run a critical service on it. If you’ve run iSCSI SANs, you probably think FC SANs are massively expensive and a bear to set up and manage. Neither is entirely true.

Now that we’re about a year down the pike after the ratification of the FCoE (Fibre Channel over Ethernet) standard, things aren’t much better. Many buyers still don’t understand the differences between the iSCSI and Fibre Channel standards. Though the topic could easily fill a book, here’s a quick rundown.

The fundamentals of FC
FC is a dedicated storage networking architecture that was standardized in 1994. Today, it is generally implemented with dedicated HBAs (host bus adapters) and switches — which is the main reason FC is considered more expensive than other storage networking technologies.

As for performance, it’s hard to beat the low latency and high throughput of FC, because FC was built from the ground up to handle storage traffic. The processing cycles required to generate and interpret FCP (Fibre Channel Protocol) frames are offloaded entirely to dedicated low-latency HBAs. This frees the server’s CPU to handle applications rather than talk to storage.

FC is available in 1Gbps, 2Gbps, 4Gbps, 8Gbps, 10Gbps, and 20Gbps speeds. Switches and devices that support 1Gbps, 2Gbps, 4Gbps, and 8Gbps speeds are generally backward compatible with their slower brethren, while the 10Gbps and 20Gbps devices are not, due to the fact that they use a different frame encoding mechanism (these two are generally used for interswitch links).

In addition, FCP is also optimized to handle storage traffic. Unlike protocols that run on top of TCP/IP, FCP is a significantly thinner, single-purpose protocol that generally results in a lower switching latency. It also includes a built-in flow control mechanism that ensures data isn’t sent to a device (either storage or server) that isn’t ready to accept it. In my experience, you can’t achieve the same low interconnect latency with any other storage protocol in existence today.

Yet FC and FCP have drawbacks — and not just high cost. One is that supporting storage interconnectivity over long distances can be expensive. If you want to configure replication to a secondary array at a remote site, either you’re lucky enough to afford dark fiber (if it’s available) or you’ll need to purchase expensive FCIP distance gateways.

In addition, managing a FC infrastructure requires a specialized skill set, which may make administrator experience an issue. For example, FC zoning makes heavy use of long hexadecimal World Wide Node and Port names (similar to MAC addresses in Ethernet), which can be a pain to manage if frequent changes are made to the fabric.

The nitty-gritty on iSCSI
iSCSI is a storage networking protocol built on top of the TCP/IP networking protocol. Ratified as a standard in 2004, iSCSI’s greatest claim to fame is that it runs over the same network equipment that runs the rest of the enterprise network. It does not specifically require any extra hardware, which makes it comparatively inexpensive to implement.

From a performance perspective, iSCSI lags behind FC/FCP. But when iSCSI is implemented properly, the difference boils down to a few milliseconds of additional latency due to the overhead required to encapsulate SCSI commands within the general-purpose TCP/IP networking protocol. This can make a huge difference for extremely high transactional I/O loads and is the source of most claims that iSCSI is unfit for use in the enterprise. Such workloads are rare outside of the Fortune 500, however, so in most cases the performance delta is much narrower.

iSCSI also places a larger load on the CPU of the server. Though hardware iSCSI HBAs do exist, most iSCSI implementations use a software initiator — essentially loading the server’s processor with the task of creating, sending, and interpreting storage commands. This also has been used as an effective argument against iSCSI. However, given the fact that servers today often ship with significantly more CPU resources than most applications can hope to use, the cases where this makes any kind of substantive difference are few and far between.

iSCSI can hold its own with FC in terms of throughput through the use of multiple 1Gbps Ethernet or 10Gbps Ethernet links. It also benefits from being TCP/IP in that it can be used over great distances through existing WAN links. This usage scenario is usually limited to SAN-to-SAN replication, but is significantly easier and less expensive to implement than FC-only alternatives.

Aside from savings through reduced infrastructural costs, many enterprises find iSCSI much easier to deploy. Much of the skill set required to implement iSCSI overlaps with that of general network operation. This makes iSCSI extremely attractive to smaller enterprises with limited IT staffing and largely explains its popularity in that segment.

This ease of deployment is a double-edged sword. Because iSCSI is easy to implement, it is also easy to implement incorrectly. Failing to use dedicated network interfaces, to ensure support for switching features such as flow control and jumbo frames, and to implement multipath I/O are common mistakes that can result in lackluster performance. Stories abound on Internet forums of unsuccessful iSCSI deployments that could have been avoided had these factors been addressed.

Fibre Channel over IP
FCIP (Fibre Channel over IP) is a niche protocol that was ratified in 2004. It is a standard for encapsulating FCP frames within TCP/IP packets so that they can be shipped over a TCP/IP network. It is almost exclusively used for bridging FC fabrics at multiple sites to enable SAN-to-SAN replication and backup over long distances.

Due to the inefficiency of fragmenting large FC frames into multiple TCP/IP packets (WAN circuits typically don’t support packets over 1,500 bytes), it is not built to be low latency. Instead, it is built to allow geographically separated Fibre Channel fabrics to be linked when dark fiber isn’t available to carry native FCP. FCIP is almost always found in FC distance gateways — essentially FC/FCP-to-FCIP bridges — and is rarely, if ever, used natively by storage devices as a server-to-storage access method.

Fibre Channel over Ethernet
FCoE (Fibre Channel over Ethernet) is the newest storage networking protocol of the bunch. Ratified as a standard in June 2009, FCoE is the Fibre Channel community’s answer to the benefits of iSCSI. Like iSCSI, FCoE uses standard multipurpose Ethernet networks to connect servers with storage. Unlike iSCSI, it does not run over TCP/IP; it is its own Ethernet protocol occupying a space next to IP in the OSI model.

This difference is important to understand, as it has both good and bad results. The good is that, even though FCoE runs over the same general-purpose switches that iSCSI does, it experiences significantly lower end-to-end latency because the TCP/IP header doesn’t need to be created and interpreted. The bad is that it cannot be routed over a TCP/IP WAN. Like FC, FCoE can only run over a local network and requires a bridge to connect to a remote fabric.

On the server side, most FCoE implementations make use of 10Gbps Ethernet FCoE CNAs (Converged Network Adapters), which can act as both network adapters and FCoE HBAs, offloading the work of talking to storage much the way FC HBAs do. This is an important point, as the requirement for a separate FC HBA was often a good reason to avoid FC altogether. As time goes on, servers may commonly ship with FCoE-capable CNAs built in, essentially removing this as a cost factor entirely.

FCoE’s primary benefits can be realized when it is implemented as an extension of a pre-existing Fibre Channel network. Despite having a different physical transport mechanism, which requires a few extra steps to implement, FCoE can use the same management tools as FC, and much of the experience gained in operating an FC fabric can be applied to its configuration and maintenance.

Putting it all together
There’s no doubt that the debate between FC and iSCSI will continue to rage. Both architectures are great for certain tasks. However, saying that FC is good for enterprise while iSCSI is good for SMB is no longer an acceptable answer. The availability of FCoE goes a long way toward eating into iSCSI’s cost and convergence argument while the increasing prevalence of 10Gbps Ethernet and increasing server CPU performance eats into FC’s performance argument.

Whatever technology you decide to implement for your organization, try not to get sucked into the religious war and do your homework before you buy. You may be surprised by what you find.


Understanding Dell DPACK

The Dell DPACK tool is a unique agentless utility that collects performance statistics from servers (physical and virtual) and displays them in an easy-to-read report. Key metrics in this report include throughput, average IO size, IOPS, latency, read/write ratio, peak queue depth, total capacity, CPU and memory usage, and much more. Running this tool against your servers adds NO overhead to your servers and provides a wealth of information.

See this sample report:

[Screenshot: Sample Dell DPACK report]

Data collected through this tool is crucial in sizing SAN storage for your organization.
If you would like a free report on what your environment looks like, along with recommendations, please contact Netwize here and request this free service: http://www.netwize.net/contact-us/


How many VMs per DataStore should I have?

Although there are no hard-and-fast rules for how many virtual machines should be placed on a datastore, thanks to the scalability enhancements of VMFS-5, a good conservative approach is to place anywhere between 15-25 virtual machines on each.

The reasoning behind keeping a limited number of virtual machines and/or VMDK files per datastore is the potential for I/O contention, queue depth contention, or legacy SCSI reservation conflicts, any of which can degrade performance.

This is why I suggest limiting your datastore size to 500GB-700GB each: it naturally limits the total number of virtual machines that can be placed on each datastore.
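As a rough worked example (the per-VM size here is just an assumption for illustration): if your virtual machines average 30-40GB of provisioned storage each, a 600GB datastore fills up after roughly 600/40 = 15 to 600/30 = 20 of them, which lands you right in the conservative 15-25 VM range above.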
