Tag Archives: SAN

Dell DPACK 2.0

For those not familiar with the “Dell Performance Analysis Collection Kit” (DPACK), it is a pretty incredible tool that lets you visualize current storage and server workloads from the perspective of the host. It is a great tool for planning capacity and I/O requirements, and can even be used to troubleshoot problems with your storage and storage network. DPACK measures the following:
– Disk I/O
– Throughput (which is more important than IOPS when we’re talking flash)
– Capacity
– Memory Consumption on your Servers
– CPU Utilization on your Servers
– Network Traffic
– Queue Depths

Version 2.0 lets you view analysis statistics in real time, instead of waiting the 24 hours the previous version required the collector to run. It also gives you better views into your data for additional insight, and presents that data in a form executives can appreciate when you go to them with a PO request.

Some other things to know about DPACK 2.0 are:
– Uses HTML5 for viewing real-time data in a browser
– Generates PDFs of the collected data to present to management
– DPACK compresses analyzed data and transmits it to the server every 5 minutes
– Uses secure SSL on port 443 (see the quick connectivity check sketch after this list)
– DPACK will continue to run and collect data in the event it loses the ability to upload
– DPACK does not impact performance and can be (and should be) run during business hours

So how do you get started with DPACK 2.0? It’s best to call up your local Dell storage team and discuss DPACK. Netwize is a great resource for any Dell data center needs and helps customers run DPACK all the time. I work for Netwize, so please feel free to reach out to us and we will get you set up.

Understanding Dell DPACK

The Dell DPACK tool is a unique agentless tool that collects performance statistics from servers (physical and virtual) and displays them in an easy-to-read report. Key metrics in this report include throughput, average I/O size, IOPS, latency, read/write ratio, peak queue depth, total capacity, CPU and memory usage, and much more. Running this tool against your servers adds no overhead and provides a wealth of information.
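A handy relationship between three of those metrics: throughput is roughly IOPS multiplied by average I/O size, which is why two workloads with the same IOPS can place very different bandwidth demands on an array. A quick illustration in Python (the numbers are made up, not taken from a real DPACK report):

```python
# Relationship between three DPACK metrics: throughput ~= IOPS x average I/O size.
# The sample numbers below are illustrative only, not from a real report.
def throughput_mbps(iops: float, avg_io_size_kb: float) -> float:
    """Approximate throughput in MB/s from IOPS and average I/O size in KB."""
    return iops * avg_io_size_kb / 1024.0

# Same IOPS, very different bandwidth requirements:
for iops, io_kb in [(5000, 4), (5000, 64), (5000, 256)]:
    print(f"{iops} IOPS @ {io_kb} KB avg I/O -> {throughput_mbps(iops, io_kb):.0f} MB/s")
```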

See this sample report:

Dell DPACK Report

Data collected through this tool is crucial in sizing SAN storage for your organization.
If you would like a free report on what your environment looks like, along with recommendations, please contact Netwize here and request this free service: http://www.netwize.net/contact-us/


Reclaim “white space” – HP Lefthand SAN

Post from TeleData:

This is always one of the challenges (and limitations) to thin provisioning.

The technology used to provide thin provisioning in SAN/iQ is really more of a “high water mark”. Once a block has been marked as “written,” you cannot recover unused space by deleting data from the volume. There is no communication facility through which the OS can “tell” the SAN, “Hey, that block of data we were using yesterday is now empty and you can have it back.” Since there is no way to tell the SAN the space is empty, those blocks, once written to, cannot be reclaimed.
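A toy model of that “high water mark” behavior may help: once the array marks a block as written, deleting the data in the guest OS changes nothing on the SAN side, because nothing ever tells the array the block is free again. This is only an illustration of the concept, not SAN/iQ internals.

```python
# Toy model of "high water mark" thin provisioning (not SAN/iQ internals,
# just an illustration of why deleting data never shrinks the volume).
class ThinVolume:
    def __init__(self, size_blocks: int):
        self.size_blocks = size_blocks
        self.written = set()          # blocks the array has marked as used

    def write(self, block: int):
        self.written.add(block)       # first write allocates the block

    def delete(self, block: int):
        # The OS frees the block in its own filesystem, but there is no
        # communication path to tell the array -- allocation is unchanged.
        pass

    @property
    def allocated(self) -> int:
        return len(self.written)

vol = ThinVolume(size_blocks=1000)
for b in range(200):
    vol.write(b)
print(vol.allocated)   # 200
for b in range(200):
    vol.delete(b)
print(vol.allocated)   # still 200 -- the high water mark never recedes
```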

Your only option is to create a new volume, migrate the data to it, and then delete the old volume.

This can be challenging with direct native iSCSI mounted volumes, but if you are using a virtual machine (with virtual disks) you can reclaim storage by creating a new VMFS datastore, using sdelete to zero out unused space (within the Windows OS), then performing a storage migration and choosing “thin” provisioning on the virtual disk.
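The reason sdelete matters in that workflow: once free space in the guest has been zeroed, a thin-provisioned copy can skip any block that is entirely zeros, so only blocks holding real data get allocated on the destination datastore. A rough sketch of that idea (a generic illustration, not VMware’s actual migration logic):

```python
# Why zeroing free space matters: a thin copy can skip blocks that are all
# zeros, so only blocks holding real data are allocated on the destination.
# Generic illustration only -- not VMware's actual storage-migration code.
ZERO_BLOCK = bytes(4096)

def thin_copy(blocks):
    """Return only the (index, data) pairs that need to be written thin."""
    return [(i, blk) for i, blk in enumerate(blocks) if blk != ZERO_BLOCK]

source = [b"\x01" * 4096, ZERO_BLOCK, b"\x02" * 4096, ZERO_BLOCK]
copied = thin_copy(source)
print(f"{len(copied)} of {len(source)} blocks actually copied")  # 2 of 4
```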

While still requiring a new (VMFS) volume, the virtualized disk can be left intact avoiding any reconfiguration within the Windows server itself.

The result would not be different if you had chosen thick instead of thin. The blocks are still marked as used and a “high water mark” is still maintained. The only difference is that when you mark a volume “thick,” SAN/iQ reserves the entire space up front, and that space cannot be used to provision other volumes/snapshots.

This is why you can dynamically switch between thin and thick provisioning within the CMC.


Dell Compellent Thin Import

This is a step-by-step guide I found at http://workinghardinit.wordpress.com/tag/thin-import/.
He did a great job of outlining the Compellent Thin Import process.

 

A Hidden Gem in Compellent

As you might well know, I’m in the process of doing a multi-site SAN replacement project to modernize the infrastructure at an undisclosed organization. The purpose is to have a modern, feature-rich, reliable and affordable storage solution that can provide the Windows Server 2012 rollout with modern features (ODX, SMI-S, …).

One of the nifty things you can do with a Compellent SAN is migrate LUNs from the old SAN to the Compellent SAN with absolutely minimal downtime. For us this has proven a really good way of migrating away from two HP EVA 8000 SANs to our new DELL Compellent environment. We use it to migrate file servers, Exchange 2010 DAG member servers (zero downtime), Hyper-V clusters, SQL Servers, etc. It’s nothing less than a hidden gem not enough people are aware of, and it comes with the SAN. I was told by some that it was hard and not worth the effort … well, clearly they never used it and as such don’t know it. Or they work for competitors and want to keep this hidden.

The Process

You have to set up the zoning on all SANs involved to all fabrics. This needs to be done right, of course, but I won’t be discussing it here. I want to focus on the process of what you can do. This is not a comprehensive how-to; it depends on your environment, and I can’t write you a migration manual without digging into that. And I can’t do that for free anyway. I need to eat and pay bills as well.

Basically you add your target Compellent SAN as a host to your legacy SAN (in our case HP EVA 8000) with an operating system type of “Unknown”. This will provide us with a path to expose EVA LUNs to our Compellent SAN.

Depending on which server LUNs you are migrating, this is when you might have some short downtime for that LUN. If you have shared-nothing storage, as in an Exchange 2010 DAG or a SQL Server 2012 availability group, you can do this without any downtime at all.

Stop any I/O to the LUN if you can (suspend copies, shut down databases and virtual machines) and take CSVs or disks offline. Do whatever is needed to prevent any application or data issues; this varies.

We then unpresent that LUN from the server on the legacy SAN.

After a rescan of the disks on the server you’ll see that disk/LUN disappear.

We then present this same LUN to the Compellent host we added above.

We then “Scan for Disks” in the Compellent Controller GUI. This will detect the LUN as an unassigned disk. That unassigned disk can be mapped to an “External Device,” which we name after the LUN to keep things clear (“Classify Disk as External Device”).

Then we right-click that External Device and choose “Restore Volume from External Device”.

This kicks off replication from the mapped EVA LUN to the Compellent target LUN. We can now map that replica to the host.

After this, rescan the disks on the server and voilà, the server sees the LUN again. Bring the disk/CSV back online and you’re good to go.

All the downtime you’ll have is at a well-defined moment in time that you choose. You can do this one LUN at a time or multiple LUNs at once. Just don’t overdo it with the number of concurrent migrations, and keep an eye on the CPU usage of your controllers.
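If you ever script or batch these migrations, the same advice applies: cap how many run at once so the controllers are never saturated. Here is a generic Python sketch of throttling concurrent jobs, where migrate_lun() is just a stand-in for one migration cycle and not a real Compellent or Dell Storage Manager API call:

```python
# Generic throttle for concurrent LUN migrations: cap how many run at once so
# controller CPU stays sane. migrate_lun() is a placeholder, not a real
# Compellent / Dell Storage Manager API call.
import concurrent.futures
import time

MAX_CONCURRENT = 2   # tune to what your controllers can comfortably absorb

def migrate_lun(lun_name: str) -> str:
    """Placeholder for one LUN migration (present, scan, restore, wait)."""
    time.sleep(1)     # stands in for the actual replication time
    return f"{lun_name} migrated"

luns = ["FileServer01", "Exchange-DAG-DB3", "SQL-Data02", "HyperV-CSV1"]

with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    for result in pool.map(migrate_lun, luns):
        print(result)
```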

After the replication has completed, the Compellent SAN will transparently map the destination LUN to the server and remove the mapping for the replica.

The next step is that the mirror is reversed. That means that while this replica exists, data written to the Compellent LUN is also mirrored to the old SAN LUN until you break the mirror.

Once you decide you’re done replicating and don’t want to keep both LUNs in sync anymore, you break the mirror.

You then delete the remaining replica disk and release the external disk.

Now you unpresent the LUN from the Compellent host on your old SAN.

After a rescan, the disks will show as down under unassigned disks and you can delete them there. This completes the cleanup after a LUN migration.

Conclusion

When set up properly, it works very well. Sure, it takes some experimenting to deal with the intricacies, but once you figure that out you’re good to go and ready to deal with any hiccups that might occur. The main takeaway is that this provides for minimal downtime, at a moment that you choose. You get this out of the box with your Compellent. That’s a pretty good deal, I say!


Compellent Dual Controller to Single Controller Conversion

I went on an install the other day where a client tried to upgrade from a Series 20 dual-controller SAN to a Series 40 single-controller array. It was a test lab of theirs, and they didn’t feel they needed the additional controller.

Turns out, you cannot do an “upgrade” like this. The client will need Dell to provide them a single-controller license in place of the old dual-controller license. And because you cannot upgrade, you have to set up the new array as you would for a new deployment, and then do a Thin Import from the older array.

Crazy, I know, but I guess that is how it has to be done.
