
Exporting VMware Logs for Analysis

Sometimes there are issues that arise with your VMware environment that require advanced troubleshooting from VMware Technical Support. Sending them your VMware logs preemptively or upon request is a great way to get to the bottom of an issue.
To get those logs, just do the following.

– Open vSphere (vCenter)
– Click File – Export – Export System Logs

– Select all System Logs

– Choose a location to Download Them
– And Watch the Progress of the Download

It may take a while to gather and export all the logs, but once finished, you can FTP the logs to VMware Support for further analysis!
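If the vSphere Client isn't handy, the same kind of bundle can be generated from an ESXi host's command line; a minimal sketch, assuming SSH access to the host:

```shell
# On the ESXi host itself (SSH enabled), generate a support bundle.
# The path to the resulting .tgz archive is printed when the command completes.
vm-support
```

You can then copy the bundle off the host and send it to VMware Support the same way.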

If you found this article to be helpful, please support us by visiting our sponsors’ websites. 

Step by Step Configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2

* Material taken from my own testing as well as http://alexappleton.net.

Although the features presented in Hyper-V replica give you a great setup, there are many reasons to still want a failover cluster.  This won’t be a comparison between the benefits of Hyper-V replica vs failover clustering.  This will be a guide on configuring a Hyper-V cluster in Windows Server 2012.  Part one will cover the initial configuration and setup of the servers and storage appliance.

The scope:
2-node Hyper-V failover cluster with iSCSI shared storage for a small, scalable, highly available network.

Equipment:
2x HP ProLiant DL360p Gen8 servers, each with:
- 64GB RAM
- 8x 1Gb Ethernet NICs (4-port 331FLR adapter, 4-port 331T adapter)
- 2x 146GB 15K SAS drives

1x HP StorageWorks P2000 MSA
- 1.7TB raw storage

Background:

When sizing your environment you need to take into consideration how many VMs you are going to need.  This specific environment only required 4 virtual machines to start with, so it didn't make sense to go with Datacenter.  Windows Server 2012 differs from previous versions in that there is no feature difference between editions.  Prior to 2012, if you needed failover clustering you had to go with Enterprise-level licensing or above; Standard didn't give you the option to add the failover clustering feature (even though you could go with the free Hyper-V Server edition, which did support failover clustering).  This has changed in 2012: you no longer have to buy specific editions to get roles or features, as all editions include the same feature set.

However, when purchasing your server license you need to cost out your VM requirements.  Server 2012 Standard includes two virtual use licenses, while Datacenter includes unlimited; the free Hyper-V Server doesn't include any.  Virtual use licenses are only allowed as long as the host server is not running any role other than Hyper-V.  Because there is no difference in feature set, you can start off with Standard and look to move to Datacenter if you happen to scale out in the future.  Although I see no purpose in changing editions, you can convert a Standard edition installation to Datacenter by entering the following command at the command prompt:

dism /online /set-edition:ServerDatacenter /productkey:48HP8-DN98B-MYWDG-T2DCC-8W83P /AcceptEULA

I have found issues when trying to use a volume license key during the above dism command.  The key above is a well-documented key, which always works for me.  After the upgrade is completed I enter my MAK or KMS key to activate the server since the key above will only give you a trial.

The next thing you need to determine is whether you want to go with the GUI or non-GUI (Core) installation.  Again, thankfully Microsoft has given us the option to switch between both versions with a PowerShell command, so you don't need to stress over which one:

To go “core”: Get-WindowsFeature *gui* | Uninstall-WindowsFeature -Restart
To go “GUI”:  Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Install-WindowsFeature -Restart

Get Started:

Install your Windows Operating system on each of the nodes, but don’t add any features or roles just yet.  We will do that at a later stage.

Each server has a total of 8 NIC’s and they will be used for the following:

1 – Dedicated for management of the nodes, and heartbeat
1 – Dedicated for Hyper-V live migration
2 – To connect to the shared storage appliance directly
4 – For virtual machine network connections

We are going to use multipath I/O (MPIO) to connect to the shared storage appliance.  From the NICs dedicated to the VMs we will create a team for redundancy.  Always keep redundancy in mind: we have two 4-port adapters, so we will use one NIC from each for SAN connectivity, and when creating a team we will likewise use one NIC from each of the adapters.

The P2000 MSA has two controller cards, with 4 1Gb Ethernet ports on each controller.  We will connect the Controller as follows:

Two iSCSI host ports will connect to the dedicated NICs on each of the Hyper-V hosts.  Use CAT6 cables for this since they are rated for 1Gbps network traffic.  Try to keep redundancy in mind here as well, so connect one port from one controller card to a single NIC port on the 331FLR, and the second controller card to a single NIC port on the 331T:

On our Hyper-V nodes we are going to have to configure the connecting Ethernet adapters with the subnets that correspond to the SAN.  I tend to use 172.16.1.1, 172.16.2.1, 172.16.3.1 and 172.16.4.1 to connect.  When configuring your server adapters, be sure to uncheck the option to register the adapter in DNS so you don't end up populating your DNS database with errant entries for your host servers.  See for example:
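The same per-adapter configuration can be scripted with the in-box NetTCPIP and DnsClient cmdlets; a sketch, assuming an adapter alias of "SAN1" (the alias here is an example, the address is the first subnet from above):

```powershell
# Assign the SAN-facing address on the first iSCSI adapter
New-NetIPAddress -InterfaceAlias "SAN1" -IPAddress 172.16.1.1 -PrefixLength 24

# Equivalent of unchecking "Register this connection's addresses in DNS"
Set-DnsClient -InterfaceAlias "SAN1" -RegisterThisConnection $false
```

Repeat with the matching alias and address for each SAN-facing adapter.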

From each server ping the host interfaces to ensure connectivity.

HP used to ship a network configuration utility with their Windows servers.  This is not supported yet in Windows Server 2012; however, the NICs I am using are all Broadcom.  A quick look on Broadcom's website led me to their Windows management application, BACS.  This utility allows you to fine-tune the network adapter settings; what we need it for is to hard-set the MTU to 9000 on the adapters connecting to the SAN.  There is a netsh command that will do this as well, but I found it to be unreliable in testing and the setting rarely stuck.

Download and install the Broadcom Management Applications Installer on each of your Hyper-V nodes.  Once installed, there should be a management application called Broadcom Advanced Control Suite.  This is where we want to set the jumbo frame MTU to 9000.  This management application does run on the non-GUI version of Windows Server, and you can also connect to remote hosts using the utility.  You need to make sure you have the right adapter here; if you are dealing with 8 NICs like I am this can get confusing, so take your time.  Luckily, you can see the configuration of the NIC in the application's window:
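If you would rather script the MTU change than use BACS, the in-box NetAdapter cmdlets can usually set the same driver property; a hedged sketch, since the registry keyword and the accepted values vary by driver (the "SAN1" alias is an example):

```powershell
# Check what the driver exposes and which values it accepts first
Get-NetAdapterAdvancedProperty -Name "SAN1" -RegistryKeyword "*JumboPacket"

# Then hard-set the jumbo frame size (some drivers want 9000, others 9014)
Set-NetAdapterAdvancedProperty -Name "SAN1" -RegistryKeyword "*JumboPacket" -RegistryValue 9000
```

Whichever tool you use, verify the result with the large-packet ping test below.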

Verify connectivity to the SAN after you set the MTU.  Send a large packet size when pinging the associated IP addresses of the SAN ports using a ping command such as:

ping 172.16.1.10 -f -l 6000

If you don’t get a successful reply here then revisit your settings until you get it right.

Network Teaming

You could create a network team in the Broadcom utility as well; however, in testing I encountered issues using it.  The team created fine, but didn't initialize on one server, and removing the errant team proved to be a major hassle.  Windows Server 2012 includes a NIC teaming function, so I prefer to configure the team on the server directly using the Windows configuration.  Again, since I am dealing with two different network cards, I typically create a team using one NIC port from each card on the server.

The new NIC teaming management interface can be invoked through Server Manager, or by running lbfoadmin.exe from a command prompt or the Run box.  To create a new team, highlight the NICs involved by holding Ctrl while clicking on each.  Once highlighted, right-click the group and choose the option "Add to New Team".

This will bring up the new team dialog.  Enter a name that will be used for the team.  Try to stay consistent across your nodes here so remember the name you use.  I typically go with “Hyper-V External#”.

We have three additional options under “Additional properties”

Teaming mode is typically set to switch independent.  Using this mode you don't have to worry about configuring your network switches.  As the name implies, the NICs can be plugged into different switches; as long as they have a link light they will work in the team.  Static teaming requires you to configure the network switch as well.  Finally, LACP is based on link aggregation, which requires a switch that supports this feature.  The benefit of LACP is that you can dynamically reconfigure the team by adding or removing individual NICs without losing network communication on the team.

Load balancing mode should be set to Hyper-V switch port.  Virtual machines in Hyper-V have their own unique MAC addresses, different from the physical adapter's.  When load balancing mode is set to Hyper-V switch port, each VM's traffic is affinitized to a single team member, and the VMs as a group are distributed across the teamed NICs.

Standby adapter is used when you want to assign a standby adapter to the team.  Selecting the option here will give you a list of all adapters in the team, and you can assign one of the team members as a standby adapter.  The standby adapter is like a hot spare: it is not used by the team unless another member of the team fails.  It's important to note here that standby adapters are only permitted when teaming mode is set to switch independent.
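The options above map directly onto the in-box teaming cmdlets; a sketch of the team described in this guide (the member NIC names are examples — check yours with Get-NetAdapter):

```powershell
# One port from each physical card, switch independent, Hyper-V port balancing
New-NetLbfoTeam -Name "Hyper-V External1" `
    -TeamMembers "Ethernet 2", "Ethernet 6" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm HyperVPort
```

This creates the same team you would build in the lbfoadmin.exe interface, so keep the naming consistent across your nodes either way.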

There is a lot to be learned regarding NIC teaming in Server 2012, and it is a very exciting feature.  You can also configure teams inside of virtual machines as well.  To read more, download the teaming documentation provided by Microsoft here: http://www.microsoft.com/en-us/download/details.aspx?id=30160

Once we have the network team in place it will be time to install the necessary roles and features to your nodes.  Another fantastic new feature in Server 2012 is the ability to manage multiple servers by means of server groups.  I won’t go into detail here, but if you are using Server 2012 you should investigate using Server Groups when managing multiple servers with similar roles on them.  In my case, I always create a server group called “Hyper-V Nodes”, assigning the individual servers from the server pool to the server group.

Adding the roles and features:

Invoke the Add Roles and Features wizard by opening Server Manager, choosing the Manage option in the top right, then "Add Roles and Features".

We want to add the Hyper-V role, and the Failover Clustering and Multipath I/O features, to each of the nodes.  You will be prompted to select the network adapter to be used for Hyper-V.  You don't have to worry about setting this option at the moment; I prefer to do it after installing the role.  You will also be prompted to configure live migration; since we are using a cluster here this is not required, as the live migration option in this wizard is for shared-nothing (non-SAN) setups.  Finally, you will be prompted to configure your default stores for virtual machine configuration files and VHD files.  Since we will be attaching SAN storage we don't need to be concerned with this step at the moment.  Click Next to get through the wizard and Finish to install the roles and features.  Installation will require a reboot to complete, and will actually take two reboots before the Hyper-V role is completely installed.
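The same roles and features can be added from PowerShell on each node, which avoids clicking through the wizard twice; a sketch:

```powershell
# Hyper-V role plus the clustering and MPIO features, then reboot
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO `
    -IncludeManagementTools -Restart
```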

This covers part one of the installation.  At this point we should have everything plugged in, initial configuration of the SAN completed, and initial configuration of the Hyper-V nodes complete as well.  In part two we will be configuring the iSCSI initiator, and bringing up the failover cluster.

—————————————————————————————————————————

I realized that in my prior post for configuration of a 2 node Hyper-V cluster that I did not include the steps necessary for configuring the HP Storage Works P2000.  So here they are:

There are two controllers on this unit.  This is for redundancy.  If one controller fails, the SAN will remain operational on the redundant controller.  My specific unit has 4 iSCSI ports for host connectivity, directly to the nodes.  I am utilizing MPIO here, so I have two links from each server (on separate network adapters) to the SAN.  As follows:

The cables I use to connect the links are standard CAT6 Ethernet cables.

You also want to plug both management ports into the network.  Out of the box, both management ports should obtain an address via DHCP.   Now, there is no need to use a CAT6 cable to plug the management ports in, so go ahead and use a standard CAT5e cable instead.  You can also configure the device via command line using the CLI by interfacing with the USB connection located on each of the management controllers.  I have never had to use this for anything other than when the network port is not responding.  This interface is a USB mini connection located just to the left of the Ethernet management port, and a cable is included with the unit.

Once plugged into your Windows PC, the device comes up as a USB to serial adapter and is given a COM port assignment.  You will have to install the drivers to get the device to be recognized, drivers are not included with the Windows binaries.

I won’t be covering the CLI interface, all configuration will be conducted via the web based graphic console.

The web based console is accessed via your favourite Internet browser.  I typically use Google Chrome, as I have run into issues logging into the console with later versions of Internet Explorer.  The default username is manage, password !manage.

Once logged in, launch the initial configuration wizard by clicking Configuration – Configuration Wizard at the top:

This will launch the basic settings configuration wizard.  The wizard should hopefully be self-explanatory, so I won't go into many details here.

For this example I will be creating a single VDisk encompassing the entire drive space available.  To do this, click Provisioning – Create Vdisk:

Use your best judgement on what RAID level you want here.  For my example I am going to be building a RAID 5 on 5x450GB drives:

Now I am going to create two separate volumes: one for the CSV file storage, and the other for the quorum.  The Quorum volume will be 1GB in size for the disk witness required with 2 nodes, and the CSV volume will encompass the remaining space.  To create a volume, click on the VDisk created above, and then click Provisioning – Create Volume.  I don't like to map the volumes initially, rather explicitly mapping them to the nodes after connecting them to the SAN:

In part 1 we added the roles, configured the NIC’s connecting for both Hyper-V VM access and SAN connections and prepped the servers.  Now we need to connect the nodes to the SAN by means of the iSCSI initiator.

Our targets on the P2000 are 172.16.1.10, 172.16.2.10, 172.16.3.10, and 172.16.4.10 for ports 1 and 2 on each controller.  As you recall from step one, the servers are directly connected without a switch in the middle.

To launch the iSCSI initiator just type “iSCSI” in the start screen:

I typically pin this to the start screen.

When you launch the iSCSI initiator for the first time you will be presented with an option to start the service and make it auto start.  Choose yes:

I don't typically like using the Quick Connect option on the Targets screen; rather, I configure each connection separately.  Click on the Discovery tab in the iSCSI Initiator Properties screen, then Discover Portal:

Next, we want to input the IP address of the SAN NIC that we are connecting to, then click on the advanced button.

Select the Initiator IP that will be connecting to the target:

Then do this again for the second connection to the SAN.  When finished you should have two entries:

Now, back on the target tab your target should be listed as Inactive.  Click on the connect button, then in the window that opens click on the “Enable Multi-Path” button:

Now it should show connected:

Complete the same tasks on the other node as well.
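The portal and target configuration can also be scripted with the iSCSI cmdlets; a sketch using the addresses from this setup (run it on each node, adjusting the initiator addresses to that node's SAN NICs):

```powershell
# One portal per SAN path, bound to the matching initiator NIC
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1
New-IscsiTargetPortal -TargetPortalAddress 172.16.2.10 -InitiatorPortalAddress 172.16.2.1

# Connect the discovered target with multipath enabled, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```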

Now, before we can attach a volume from the SAN we are going to have to MAP the LUN explicitly to each of the nodes.  So, we are going to have to open the web management utility for the P2000 again.  Once in, if we expand the Hosts in the left pane we should now see our two nodes listed (I have omitted server names in this screenshot):

We need to map the two volumes created on the SAN to each of the nodes.  Right click on the volume, selecting Provisioning – Explicit Mappings

Then choose the node, click the Map check box, give the LUN a unique number, check the ports assigned to the LUN on the SAN and apply the changes:

Assign the same LUN number to the other node and complete the same explicit mapping to the other node.  Then complete the same procedure for the other volume.  I used LUN number 0 for the Quorum Volume, and LUN number 1 for the CSV Volume.

Jump back to the nodes, back into the iSCSI initiator and click on the Volumes and Devices tab, press the Auto Configure button and our volumes should show up here:

Complete the same procedure on the second node as well.  If you are having difficulty with the volumes showing up, sometimes a disconnect and reconnect is required (don't forget to check the "Enable Multi-Path" option).

Now we want to enable multipath for iSCSI.  Fire up the MPIO utility from the start screen:

Click on the Discover Multi-Paths tab, then check the box "Add support for iSCSI devices" and finally click the Add button:

The server will prompt for a reboot.  So go ahead and let it reboot.  Don’t forget to complete the same tasks on the second node.
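Checking "Add support for iSCSI devices" has a one-line PowerShell equivalent, if you prefer:

```powershell
# Claim iSCSI devices for the Microsoft DSM; a reboot is still required after
Enable-MSDSMAutomaticClaim -BusType iSCSI
```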

After the reboot we are going to want to fire up disk management and configure the two SAN volumes on the node, making sure each node can see and connect to them.  When initializing your CSV volume I would suggest making this a GPT disk rather than an MBR one, since you are likely to go above the 2TB limit imposed with MBR.

I format both volumes with NTFS, and give them a drive letter for now:

After configuring the volumes on the first node, I typically offline the disks, then on-line the disks on the second node to be sure everything is connected and working correctly.  Don’t get worried about the drive letters assigned to the volumes, this doesn’t matter.
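The disk preparation can also be scripted with the Storage cmdlets; a sketch, assuming the SAN volume shows up as disk 1 (confirm the number with Get-Disk before running anything):

```powershell
# Bring the disk online, initialize as GPT, and format NTFS with a label
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV" -Confirm:$false
```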

Getting there slowly!

Next, before we create the cluster I always like to assign the Hyper-V External NICs in the Hyper-V configuration.  Fire up Hyper-V Manager, selecting “Virtual Switch Manager” in the action pane.  We are going to create the external Virtual Switches using the adapters we assigned for the Hyper-V VM’s.  I always dedicate the network adapters to the virtual switch, un-checking the option “Allow management operating system to share this network adapter”.
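The equivalent virtual switch creation in PowerShell, with the management OS excluded as described (the switch and adapter names follow this article's convention and are examples):

```powershell
# Bind the external switch to the team, dedicating it to VM traffic
New-VMSwitch -Name "Hyper-V External1" -NetAdapterName "Hyper-V External1" `
    -AllowManagementOS $false
```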

At this point we have completed all the prerequisite steps required to fire up the cluster.  Now we will form the cluster.

Fire up Fail over Cluster Manager from the start screen:

Once opened, select the option in the action pane to create a cluster.  This will fire up the wizard to form our cluster.  The wizard should be self-explanatory, so walk through the steps required.  Make sure you run the cluster validation tests, selecting the default option to run all tests.  This is the best time to run them, since they take the cluster disks offline.  You don't want to discover issues once the cluster is in production, when running the validation tests would mean bringing the cluster down.  If we run into any issues here we can address them now, before the system is in production.

The P2000 on Windows Server 2012 will create a warning about validating storage spaces persistent reservation.  This warning can be safely ignored as noted here.

Hopefully when you run the validation tests you will get all Success results (other than the note above).  If not, trace back through the steps and make sure you are not missing anything.  Once you get a successful validation, save the report and store it in case you need to reference it for future support.

Finish walking through the wizard to create your cluster.  Assign a cluster name and static IP address to your cluster as requested from the wizard.

That should do it.   If you got this far you made it.  Congratulations!

—————————————————————————————————————-

A few asked me to elaborate more on configuring the cluster.  Sorry I didn’t go into too much detail during Part 2.  I’ll explain further here.

When you open up Failover Cluster Manager you have the option in the action pane to create a cluster.  Click on this to fire up the wizard:

The initial configuration screen can be skipped, and the second screen will prompt you to input the server names of the cluster nodes:

When you add the servers it will verify the failover cluster service is running on the node.  If everything is good, the wizard will allow you to add the server.  Once the servers are added, proceed to the next step.

The next step is very important.  Not only is this step required for Microsoft to ever support you if you run into any issues, but it also validates that everything you have done thus far is correct and set up properly for the cluster to operate.  I'm not quite sure why they give you the option to skip the tests, but I would highly recommend against it.  The warning is pretty straightforward as well:

The next portion of the cluster configuration that comes up is the validation wizard.  Like I mentioned above, do not skip this portion.   Run all tests as recommended by the wizard:

The tests will take a few minutes to run, so go grab a coffee while waiting.  Once completed, you shouldn’t have any errors.  However, as I mentioned in part 2 there is a known issue when using the P2000 with the “Validate Storage Spaces Persistent Reservation” test so you will get a warning here relating to this but you shouldn’t have any other warnings if things are setup correctly.

View the report and save it somewhere as a reference that you ran it in case Microsoft support wants to see it.

When you click finish you will be asked to enter your name for the cluster, as well as the IP address for the cluster.  Enter these parameters in and click next:

Then finish up the wizard and form the cluster.
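For reference, validation and cluster creation can also be done from PowerShell; a sketch (the node names, cluster name and IP address are examples):

```powershell
# Run the full validation suite, then form the cluster
Test-Cluster -Node "Node1", "Node2"
New-Cluster -Name "HVCluster" -Node "Node1", "Node2" -StaticAddress 192.168.1.50
```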

Now, there are several things we must do after the cluster is up and running to completely configure it.  I’ll go over each aspect now.

Cluster Shared Volumes:

This should be a given.  I won’t go into much detail here, sparing you the time.  If you need to read up on what a cluster shared volume is please read up on it here:

http://blogs.msdn.com/b/clustering/archive/2013/12/02/10473247.aspx

To enable the cluster shared volume navigate to storage, then disks.  Then select your storage disk, right clicking it and choosing the option “Add to Cluster Shared Volumes”

I like to rename the disks here as well, but this is not a necessary step.

Now that we have enabled Cluster Shared Volumes we should change the default path in Hyper-V manager on both nodes to reflect this.  The path should be C:\ClusterStorage\Volume1 on both nodes.  I like to keep the remaining path as well for simplicity:

Don’t forget to do this on both nodes.
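Both steps have PowerShell equivalents; a sketch (the disk resource name is an example — check it under Storage, then Disks):

```powershell
# Add the disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Point the Hyper-V default stores at the CSV path; run on each node
Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1" `
           -VirtualMachinePath "C:\ClusterStorage\Volume1"
```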

Live Migration:

I dedicate a NIC for live migration.  I have always done this on the recommendation that if we saturate the network link used for managing the server with live migration traffic, we could cause a failover situation where heartbeat is lost.  To dedicate a network adapter for live migration, right-click the Networks option in Failover Cluster Manager and choose Live Migration Settings.  I rename the networks in the list first so that they are more easily understood than "Cluster Network X".

Cluster Aware Updating:

Cluster aware updating is a fantastic feature introduced in 2012 that allows for automatic updating of your cluster nodes without taking down the workloads they are servicing.  What happens with Hyper-V is that the VM roles are live migrated to another node, once all roles are off the node then updating is completed and the node is rebooted.  Then the same process happens on the other node.  There is a little bit of work to set this up, and you should have a WSUS server on your network, but the setup is worth the effort.

To enable Cluster-Aware Updating choose the option on the initial failover cluster manager page

This will launch the management window where you can configure the options for the cluster.  Click on the “Configure cluster self-updating options” in the cluster actions pane.  This will launch the wizard to let you configure this option.

Before you walk through this wizard there is one necessary step you should complete first.  I like to place my Hyper-V nodes and the cluster computer object in their own OU within Active Directory.  I then grant full control over that OU to the cluster computer object.  I find that if you don't complete this step, you will sometimes get errors in Failover Cluster Manager, as well as issues with Cluster-Aware Updating.

The Cluster-Aware Updating wizard is pretty straightforward.  The only thing you need to determine is when you want it to run.  There is no need to check "I have a pre-staged computer object for the CAU clustered role", as this object will be created during setup.  I don't typically change any other options from the defaults here; I haven't found a reason to do so yet.  I'll also do a first run to make sure that it is working correctly.

Tweaking:

The following are some tweaks and best practices I also do to ensure the best performance and reliability on the cluster configuration:

Disable all networking protocols on the iSCSI NICs used, with the exception of Internet Protocol Version 4/6.  This is to reduce the amount of chatter that occurs on the NICs.  We want to dedicate these network adapters strictly to iSCSI traffic, so there is no need for anything outside of the IP protocols.
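A sketch of scripting this per adapter, leaving only IPv4/IPv6 bound (the "SAN1" alias is an example):

```powershell
# Disable every binding on the iSCSI NIC except TCP/IPv4 and TCP/IPv6
Get-NetAdapterBinding -Name "SAN1" |
    Where-Object { $_.ComponentID -notin "ms_tcpip", "ms_tcpip6" } |
    ForEach-Object { Disable-NetAdapterBinding -Name $_.Name -ComponentID $_.ComponentID }
```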

  1. Change the binding of the NICs, putting the management NIC of the node at the top of the list.
  2. Disable RDP Printer mapping on the hosts to remove any chance of a printer driver causing issues with stability.  You can do this via local policy, group policy, or registry.  Google how to do this.
  3. Configure exclusions in your anti-virus software based on the following article:
    http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
  4. Review the following article on performance tuning for Hyper-V servers:
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn567657.aspx


Microsoft KMS Client Setup Keys Reference

Windows Server 2012 R2 and Windows 8.1 Client Setup Keys

 

Operating system edition KMS Client Setup Key
Windows 8.1 Professional GCRJD-8NW9H-F2CDX-CCM8D-9D6T9
Windows 8.1 Professional N HMCNV-VVBFX-7HMBH-CTY9B-B4FXY
Windows 8.1 Enterprise MHF9N-XY6XB-WVXMC-BTDCT-MKKG7
Windows 8.1 Enterprise N TT4HM-HN7YT-62K67-RGRQJ-JFFXW
Windows Server 2012 R2 Server Standard D2N9P-3P6X9-2R39C-7RTCD-MDVJX
Windows Server 2012 R2 Datacenter W3GGN-FT8W3-Y4M27-J84CP-Q3VJ9
Windows Server 2012 R2 Essentials KNC87-3J2TX-XB4WP-VCPJV-M4FWM

Windows Server 2012 and Windows 8 Client Setup Keys

 

Operating system edition KMS Client Setup Key
Windows 8 Professional NG4HW-VH26C-733KW-K6F98-J8CK4
Windows 8 Professional N XCVCF-2NXM9-723PB-MHCB7-2RYQQ
Windows 8 Enterprise 32JNW-9KQ84-P47T8-D8GGY-CWCK7
Windows 8 Enterprise N JMNMF-RHW7P-DMY6X-RF3DR-X2BQT
Windows Server 2012 BN3D2-R7TKB-3YPBD-8DRP2-27GG4
Windows Server 2012 N 8N2M2-HWPGY-7PGT9-HGDD8-GVGGY
Windows Server 2012 Single Language 2WN2H-YGCQR-KFX6K-CD6TF-84YXQ
Windows Server 2012 Country Specific 4K36P-JN4VD-GDC6V-KDT89-DYFKP
Windows Server 2012 Server Standard XC9B7-NBPP2-83J2H-RHMBY-92BT4
Windows Server 2012 MultiPoint Standard HM7DN-YVMH3-46JC3-XYTG7-CYQJJ
Windows Server 2012 MultiPoint Premium XNH6W-2V9GX-RGJ4K-Y8X6F-QGJ2G
Windows Server 2012 Datacenter 48HP8-DN98B-MYWDG-T2DCC-8W83P

Windows 7 and Windows Server 2008 R2

 

Operating system edition KMS Client Setup Key
Windows 7 Professional FJ82H-XT6CR-J8D7P-XQJJ2-GPDD4
Windows 7 Professional N MRPKT-YTG23-K7D7T-X2JMM-QY7MG
Windows 7 Professional E W82YF-2Q76Y-63HXB-FGJG9-GF7QX
Windows 7 Enterprise 33PXH-7Y6KF-2VJC9-XBBR8-HVTHH
Windows 7 Enterprise N YDRBP-3D83W-TY26F-D46B2-XCKRJ
Windows 7 Enterprise E C29WB-22CC8-VJ326-GHFJW-H9DH4
Windows Server 2008 R2 Web 6TPJF-RBVHG-WBW2R-86QPH-6RTM4
Windows Server 2008 R2 HPC edition TT8MH-CG224-D3D7Q-498W2-9QCTX
Windows Server 2008 R2 Standard YC6KT-GKW9T-YTKYR-T4X34-R7VHC
Windows Server 2008 R2 Enterprise 489J6-VHDMP-X63PK-3K798-CPX3Y
Windows Server 2008 R2 Datacenter 74YFP-3QFB3-KQT8W-PMXWJ-7M648
Windows Server 2008 R2 for Itanium-based Systems GT63C-RJFQ3-4GMB6-BRFB9-CB83V

Windows Vista and Windows Server 2008

 

Operating system edition KMS Client Setup Key
Windows Vista Business YFKBB-PQJJV-G996G-VWGXY-2V3X8
Windows Vista Business N HMBQG-8H2RH-C77VX-27R82-VMQBT
Windows Vista Enterprise VKK3X-68KWM-X2YGT-QR4M6-4BWMV
Windows Vista Enterprise N VTC42-BM838-43QHV-84HX6-XJXKV
Windows Web Server 2008 WYR28-R7TFJ-3X2YQ-YCY4H-M249D
Windows Server 2008 Standard TM24T-X9RMF-VWXK6-X8JC9-BFGM2
Windows Server 2008 Standard without Hyper-V W7VD6-7JFBR-RX26B-YKQ3Y-6FFFJ
Windows Server 2008 Enterprise YQGMW-MPWTJ-34KDK-48M3W-X4Q6V
Windows Server 2008 Enterprise without Hyper-V 39BXF-X8Q23-P2WWT-38T2F-G3FPG
Windows Server 2008 HPC RCTX3-KWVHP-BR6TB-RB6DM-6X7HP
Windows Server 2008 Datacenter 7M67G-PC374-GR742-YH8V4-TCBY3
Windows Server 2008 Datacenter without Hyper-V 22XQ2-VRXRG-P8D42-K34TD-G3QQC
Windows Server 2008 for Itanium-Based Systems 4DWFP-JF3DJ-B7DTH-78FJB-PDRHK


Find HP Server Serial Numbers via iLO

I was trying to find some serial numbers of our HP servers the other day, but was not onsite to view the actual sticker with the Serial info. I searched for a way to find the serial number for warranty purposes, but everyone online said I needed the actual sticker. I found another way!

1. Login to server iLO
2. Click on the “Administration” Tab
3. Click on “Management” on the left navigation pane
4. Click on “View XML Reply”
5. The first part of the text output is your serial. Normally the serial starts with USE


Find Server Service Tag via VMware

If you need to find the service tag of an ESXi server without physically being present at the server, try this.

Enable SSH on host and use the following command:

/sbin/esxcli hardware platform get

 

There you go!


Dell Compellent Thin Import

This is a step-by-step guide I found at http://workinghardinit.wordpress.com/tag/thin-import/.
He did a great job of outlining the Compellent Thin Import process.

 

A Hidden Gem in Compellent

As you might well know, I'm in the process of doing a multi-site SAN replacement project to modernize the infrastructure at an undisclosed organization.  The purpose is to have a modern, feature-rich, reliable and affordable storage solution that can provide the Windows Server 2012 roll-out with modern features (ODX, SMI-S, …).

One of the nifty things you can do with a Compellent SAN is migrate LUNs from your old SAN to the Compellent SAN with absolutely minimal downtime.  For us this has proven a really good way of migrating away from 2 HP EVA 8000 SANs to our new DELL Compellent environment.  We use it to migrate file servers, Exchange 2010 DAG member servers (zero downtime), Hyper-V clusters, SQL Servers, etc.  It's nothing less than a hidden gem that not enough people are aware of, and it comes with the SAN.  I was told that it was hard and not worth the effort by some… well, clearly they never used it and as such don't know it.  Or they work for competitors and want to keep it hidden.

The Process

You have to set up the zoning on all SANs involved to all fabrics. This needs to be done right, of course, but I won't discuss it here; I want to focus on the process itself. This is not a comprehensive how-to: the details depend on your environment, and I can't write you a migration manual without digging into that. And I can't do that for free anyway; I need to eat and pay bills as well.

Basically you add your target Compellent SAN as a host to your legacy SAN (in our case HP EVA 8000) with an operating system type of “Unknown”. This will provide us with a path to expose EVA LUNs to our Compellent SAN.


Depending on which server LUNs you are migrating, this is when you might have some short downtime for that LUN. If you have shared-nothing storage, as in an Exchange 2010 DAG or a SQL Server 2012 availability group, you can do this with no downtime at all.

Stop any I/O to the LUN if you can (suspend copies, shut down databases and virtual machines) and take CSVs or disks offline. Do whatever is needed to prevent application and data issues; this varies by workload.

We then unpresent the LUN from the server on the legacy SAN.


After a rescan of the disks on the server you’ll see that disk/LUN disappear.

We then present that same LUN to the Compellent host we added above.


We then “Scan for Disks” in the Compellent Controller GUI. This will detect the LUN as an unassigned disk. That unassigned disk can be mapped to an “External Device” which we name after the LUN to keep things clear (“Classify Disk as External Device” in the picture below).


Then we right click that External Device and choose to “Restore Volume from External Device”.


This kicks off replication from the EVA LUN mapped to the Compellent target LUN. We can now map that replica to the host as you can see in this picture.


After this, rescan the disks on the server and, voilà, the server sees the LUN again. Bring the disk/CSV back online and you're good to go.


All the downtime you'll have comes at a well-defined moment that you choose. You can do this one LUN at a time or several LUNs at once; just don't overdo the number of concurrent migrations, and keep an eye on the CPU usage of your controllers.

After the replication has completed the Compellent SAN will transparently map the destination LUN to the server and remove the mapping for the replica.


The next step is that the mirror is reversed: while this replica exists, data written to the Compellent LUN is also mirrored to the old SAN LUN, until you break the mirror.


Once you decide you’re done replicating and don’t want to keep both LUNs in sync anymore, you break the mirror.


You delete the remaining replica disk and release the external disk.


Now you unpresent the LUN from the Compellent host on your old SAN.


After a rescan, the disks will show as down under unassigned disks and you can delete them there. This completes the cleanup after a LUN migration.


Conclusion

When set up properly, it works very well. Sure, it takes some experimenting to deal with the intricacies, but once you figure those out you're ready to handle any hiccups that might occur. The main takeaway is that this provides minimal downtime, at a moment you choose, and you get it out of the box with your Compellent. That's a pretty good deal, I say!


EqualLogic Connections

I recently set up my first EqualLogic arrays from scratch. The client purchased a PS4100 and a PS6100, both with 1GbE controllers.

What is unique about EqualLogic is that each array does not have to physically connect to the others to expand storage. It just needs to be plugged into the same storage switches as the other EqualLogic and added to the existing group via the management page.

The PS6100 in this setup was cabled for both vertical and horizontal failover.
