Building a Hyper-V Cluster on two W2K8 Nodes
In this first part we'll focus on getting the infrastructure in place. We'll talk about the systems making up the nodes and the shared storage we need to build. Once that is in place, we'll focus in the next article on how to get Hyper-V clustered on this infrastructure. Note that we are not building a cluster running inside Hyper-V, as many people have written about; we are building a physical failover cluster with two nodes, and on that cluster we run Hyper-V.
My goals
The moment I read that Hyper-V was
cluster-aware, I knew I would not rest until I had tried building a failover cluster running Hyper-V. It's one of those exciting technologies you just have to test out for yourself, not being satisfied with simply reading about it (yes, that is a hint ;-) ). In fact, I had the sneaking feeling that Robert Larson (the MS blog in the link above) may have cut a few corners in his ten-step plan to expedite things.
First of all, let's take a moment to realize the importance of Hyper-V being cluster-aware… In the past you would have several virtual machines running on some big iron, but once that big iron failed, it was the single point of failure and all your machines would go down hard. As tempting as virtual machines are, this is completely unacceptable to most people or businesses. Failover clustering adds the nice and secure feeling of having a backup system, removing that single point of failure.
It must be nice for guys like Robert Larson, who works for Microsoft Consulting Services, to have all this hardware readily available to build his labs with. Unfortunately I don't have such spare hardware, and no big budget to spend. Still, some dedicated hardware and software is needed for this project, so I set my budget at no more than 1,500 euros and will use whatever spare or free equipment or software I can get, without compromising too much on performance.
Figure 1. The bill (and configuration details) of one of the nodes
I believe that technologies such as clustering and virtual machines are ready to reach out from the enterprise level into the mainstream. You could argue that I'm following Google's strategy of combining cheaper hardware with intelligent software: quantity in hardware for redundancy, and quality in software for availability.
In order to prove that, I want to build something I can actually use, not just a simple proof of concept, while still attempting to remain within budget. With that in mind I decided to buy some hardware to build a few identical systems to create my cluster with (rather than buying pre-built, more expensive, systems). A bit risky, as at the time of writing it's still a bit of a guess whether the motherboard you are buying has a BIOS that supports VT. More on how to figure that out can be found
here. For that reason alone you may want to consider buying more expensive machines from the MS recommended hardware list (see previous link) if you are building this for a business. All in all I have not spent much more than 1,400 euros on two nice dual core 2.66 GHz systems with 6 GB of memory, leaving me with 100 euros to buy a Gigabit switch to build my SAN backbone with and still remain within budget.
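As an aside: once a box is actually up and running, you can double-check the VT support from within Windows itself. A minimal sketch using the Sysinternals Coreinfo tool, run from an elevated command prompt (the -v switch is real; the interpretation notes are mine):

    rem Dump only the virtualization-related CPU features (Intel VT / AMD-V).
    coreinfo -v
    rem An asterisk (*) next to VMX (Intel) or SVM (AMD) means the feature is
    rem present and enabled; a dash (-) means it is missing or disabled in the BIOS.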
Figure 2. The two nodes: MSI Neo2 FIR mainboards, Core 2 Duo E6750 CPUs
This should be sufficient to build my lab with; in fact, a small business would do pretty well running a similar setup. (It's no enterprise hardware, but it is carefully selected and much better than what I see at some small businesses!) I would like to be able to run 3 to 4 servers on this Hyper-V cluster. Of course the really interesting part will be finding out the stability and performance of these systems. Once I'm satisfied with the design I'll make some baseline measurements using various tools such as Performance Monitor and see how things evolve… this should be fun!
When money is a factor, running several virtual machines on one cluster sounds like a great way to save some cash and still have redundancy. A few things have changed with Windows 2008 clustering though; where in the past it was possible to simply share a SCSI hard drive, that option is no longer available. For more details on the requirements for a Hyper-V cluster you might want to check out
Microsoft's step-by-step guide. The article you are reading now, however, is written in a much more entertaining style; I even have pictures and everything! So be sure to return once you've read that, to get a more down-to-earth perspective on building a Hyper-V cluster. The shared storage options left include dedicated hardware solutions such as a fibre-based SAN, or something a little bit closer to our budgetary needs: iSCSI.
More on iSCSI.
iSCSI is often called an emerging technology, and while I'm writing this I wonder if you can still call it 'emerging'; it has been around for years now. Strangely enough this great technology has not gained as much traction as it should have, but rest assured: with Hyper-V and Windows 2008 failover clustering now advocating it, and with iSCSI client (initiator) software natively available in Windows Server 2003, Windows Server 2008, Vista and as a separate
download for Windows XP, I'm sure this technology is going to break through widely in the next few years. Before, it was a nice technology available if you needed some LAN-based storage, but now we are seeing some very compelling technology that gains greatly by the use of iSCSI. As mentioned above, it competes with the much more expensive Fibre Channel, which is used at the higher end of the market. The advantage of iSCSI is that it uses commonly available technology (Ethernet, preferably Gigabit), which means most IT people already have a sound knowledge of the basics needed to maintain the SAN. Not all IT people have similar experience with Fibre Channel switched fabric, HBAs (Host Bus Adapters) and the enterprise solutions associated with that technology. As mentioned, the initiator software is natively available in the latest Microsoft operating systems, making SAN technology even more accessible. These are the factors that will drive iSCSI to become a mainstream technology. When building enterprise solutions, however, Fibre Channel in combination with a dedicated hardware NAS is still king, but at a price smaller businesses will not be willing to pay. For them, iSCSI is the alternative.
So what is this iSCSI? Simply put, it's a 'fake' hard drive that you can share from a server (the target) somewhere on the network, and then connect to from a client (the initiator). On the client, the initiator software makes the network disk look as if it's a real hardware disk in your system. Instead of sharing a network drive at the file level, as you normally do with a file server, you are now sharing a disk at the block level, and this opens up a whole new realm of possibilities!
Figure 3. Would you believe these last two disks aren't real? The node sure thinks they are!
It's even possible for thin clients, for example, to boot from iSCSI drives, provided the right software is in place. Ideally one would have a NAS (Network Attached Storage) system with some RAID configuration for redundancy and/or performance. Even better: put the devices or servers sharing the iSCSI resources on their own fast (Gigabit) network and you'll have your very own SAN (Storage Area Network)… I could probably throw some more acronyms at you, but I'll leave it at this for now. I decided to build a small SAN on Gigabit Ethernet to allow some performance for my iSCSI drives. Gigabit Ethernet switches are cheap now, so it should not be a big drain on the budget. If you decide to go this route, make sure to read up on
jumbo frames, especially if you have decided to build something more permanent than a lab situation. For now, we need to build a SAN solution!
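If you do enable jumbo frames on the switch and on the storage NICs, it's worth verifying that the whole path actually passes them. A quick sanity check from one of the nodes; the 10.0.0.10 address is just an illustration for the iSCSI target, and the sizes assume a 9000-byte MTU:

    rem -f sets the "don't fragment" bit, -l sets the ICMP payload size.
    rem 8972 bytes of payload + 28 bytes of headers = one 9000-byte IP packet.
    ping -f -l 8972 10.0.0.10
    rem "Packet needs to be fragmented but DF set" means something in the
    rem path is still running the standard 1500-byte MTU.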
How about Open Source?
Since we know we need iSCSI to remain on budget, we simply have to figure out what NAS to get! There are quite a few options, and a NAS can come as a hardware solution, or simply as software installed on a (usually Windows) server. Typically a NAS shares its disks using several protocols, such as SMB, NFS, etc. Since we only need iSCSI, we might as well only look into that. For my solution I went the server-with-iSCSI-software route, since I happen to have some spare (older) servers anyway. You'll find there are quite a few iSCSI solutions out there, open as well as closed source.
I'll throw in the spoiler right here: if you were thinking of using open source solutions like OpenFiler (current version 2.3) or FreeNAS (current version 0.69b3), think again. Starting with Windows 2008, the cluster software requires your iSCSI target software to support something called "persistent reservation commands". These are based on the SCSI Primary Commands-3 specs (
SPC-3), and most open source software I know of (with the possible exception of OpenSolaris) is based on SCSI-2 and does not support these reservation commands. OpenFiler, for example, is based on
IET. The FreeNAS target is based on the NetBSD iSCSI target by Alistair Crooks and ported to FreeBSD, and also does not support these commands. So much for Linux and BSD…
[update May 2009]
Good news! There are a few new options available now that support persistent reservation. First of all, as you may have read in the comments on this blog, the latest
nightly builds of FreeNAS now seem to work fine. Additionally, if you happen to be an MSDN or TechNet subscriber (as I am ;-) ) you can get the brand-new Windows Storage Server 2008, with the 3.2 iSCSI Target, which of course works fine too...
[/update]
[update Oct 2009]
FreeNAS 0.7.0 RC2 (as of writing just out) now uses ISTGT, and according to this page:
iSCSI Target for FreeBSD 7.x
it supports SPC-3...!
[/update]
If you are simply looking for a nice NAS for home use, and would like to use iSCSI with only a few systems connecting to it, look no further: OpenFiler as well as FreeNAS work admirably to help you accomplish this goal. For clustering purposes, however, we need to look further…
Windows iSCSI Targets
Good news on the Microsoft-based front: there are two options (unfortunately not free ones) that do support the persistent reservation commands and that also support concurrent connections to the shared disk (keep in mind that for a cluster, several nodes need to have access to the disk!).
Windows based iSCSI targets that support SCSI-3 persistent reservation and concurrent connections are "StarWind" from
Rocket Division Software, or Microsoft's own iSCSI Software Target 3.1.
Notice that although there is a free evaluation version (the Personal edition) offered by Rocket Division Software, this won't work when building a cluster, as it only allows one (single) connection, and a single-node cluster seems a bit of a waste of time… There is, however, a 30-day evaluation version that does support concurrent connections and would be suitable. For our purposes the 'Server' version of StarWind would be sufficient, since it supports two concurrent connections, and we only have two nodes anyway…
I opted to use the Microsoft 90-day evaluation version of the iSCSI Target, as it gives me more time to play around with my cluster. Both solutions support SCSI-3 persistent reservation, and both support multiple concurrent connections. Both implementations of the target software are very easy to install and configure, using a very intuitive interface. Jose Barreto has done an excellent job of
describing how to configure the Microsoft software.
The Microsoft iSCSI Software Target is hard to find, as it is an OEM solution that is sold (as an option) along with
Windows Storage Server; a Windows 2003 Server edition typically sold in combination with specialized hardware by OEM vendors such as HP, Dell, etc. You can't even find it on TechNet.
The 90 day Microsoft evaluation version can be downloaded from:
http://files.download-ss.com/storageserver.iso however.
This is the software intended for Windows Storage Server 2003, but it seems to work quite well on Windows Server 2003 SP2.
Figure 4. The Microsoft iSCSI Target Management interface
One thing that was not too intuitive was the way access is granted to the initiators, which was much more straightforward in StarWind.
[update October 4th]
Rocket Division Software graciously sponsored this article by providing a licensed copy of their Server version, which has now taken the place of the earlier mentioned 90-day evaluation version of the Microsoft target. The first thing I noticed was that logging on and off the target is much faster. I have not done any speed tests between the two yet, but if I find some time I might just do that. The StarWind interface is very intuitive; I did however find a few options presented in the GUI that made me wonder if Rocket Division Software have read the MS User Interface Style Guide: maybe intuitive, but definitely not standard. The iSCSI engine, however, has been running without a hitch for the last few days, and right now that's my only concern.
Figure 5. The Professional and Enterprise versions offer more options for the type of virtual storage used, and for snapshots of that storage.
We have now basically built the following setup:
Figure 6. The NAS layout
As you can see, the storage network is physically separated from the LAN. The heartbeat is nothing more than a simple crossover cable between the two node servers.
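For reference, here is roughly how the networks can be kept apart on each node. A minimal sketch with hypothetical adapter names and made-up private address ranges; adjust to your own situation:

    rem The LAN adapter keeps its normal corporate address and gateway.
    rem Storage network: dedicated Gigabit NIC, no default gateway, no DNS.
    netsh interface ip set address name="SAN" static 10.0.0.1 255.255.255.0
    rem Heartbeat: the crossover cable between the nodes, again no gateway.
    netsh interface ip set address name="Heartbeat" static 192.168.99.1 255.255.255.0
    rem The second node gets 10.0.0.2 and 192.168.99.2 respectively.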
Now that we have the iSCSI target up and running, we need to connect the iSCSI client (the 'initiator'), so we take a look at the Control Panel on the nodes.
Figure 7. The iSCSI initiator in the control panel!
Figure 8. Initiator as found in the control panel of the Windows 2008 node.
The first thing we do is discovery: simply provide the IP address of the iSCSI target you have set up.
If the discovery went just fine on your initiator but no targets show up, make sure to check the iSCSI Initiators tab in the properties of your targets (on the 'server' or SAN side): to test, simply add the FQDN (Fully Qualified Domain Name) of one of the initiator machines, for example one of my nodes, node1.servercare.nl. You can also add the iSCSI Qualified Name (IQN), which is quite similar to a DNS name with some extra numbers added. In fact, if you want to go overboard, go and run
the iSNS server that can be downloaded from Microsoft. It's not required for our purposes though.
Figure 9. The MS iSNS server works as a kind of DNS for iSCSI targets and initiators
Now that the discovery has been done, the Targets tab should show you your targets, and you can simply log on to get the 'drives' you need.
Figure 10. As you can see the targets are there now!
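Incidentally, the same discovery and logon can also be done from the command line with iscsicli, which ships with the built-in initiator. A minimal sketch; the portal address and the IQN below are made up for illustration, so use the ones your own target reports:

    rem Add the target portal (the discovery step).
    iscsicli QAddTargetPortal 10.0.0.10
    rem List the targets the portal advertises and note the IQN you want.
    iscsicli ListTargets
    rem Log on to the target using that IQN (example IQN shown here).
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-clusterdisk1-target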
You'll find that when you now go to the disk manager on the nodes, there are new disks to be found. Simply make them active basic disks and format them using NTFS. Keep in mind that for this lab environment we have chosen to implement the iSCSI software on a Windows 2003 server that may have other functionality. If you decide to build a more permanent solution, I urge you to seriously consider a NAS of a more dedicated nature, with all the enterprise redundancy (dual power supplies, RAID, UPS, etc.) in place. Believe me: once you have a number of virtual machines running from this NAS you do not want this storage to go down... ever… Having to reboot it because of some other software running on it would be very annoying, to say the least.
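If you'd rather script the bring-online-and-format step than click through Disk Management, a diskpart sketch along these lines does the same; it assumes the new iSCSI LUN shows up as disk 1, so check 'list disk' first:

    rem Run these inside diskpart (or save to a file and use diskpart /s).
    list disk
    rem Assuming the new iSCSI LUN appears as disk 1:
    select disk 1
    online disk
    attributes disk clear readonly
    create partition primary
    format fs=ntfs quick label="ClusterDisk1"
    assign letter=Q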
Validating the cluster environment
Now that we have added two new disks to the two Windows 2008 servers making up the nodes of what will become our cluster, the next step is to get the Failover Clustering feature installed (note that it is a feature, not a role!). Simply do this using the Server Manager.
Figure 11. Add Features in the Server Manager
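If you prefer the command line, the same feature can be added from an elevated prompt with ServerManagerCmd; a minimal sketch, to be run on both nodes:

    rem Install the Failover Clustering feature.
    ServerManagerCmd -install Failover-Clustering
    rem Check afterwards that it shows up as installed.
    ServerManagerCmd -query | findstr /i "Failover"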
Once this software is installed, you'll find the Failover Cluster Management MMC amongst the Administrative Tools.
Figure 12. The Failover Cluster Manager as found in the administrative tools.
One of the options you can choose is Validate a Configuration… This will test both nodes.
Figure 13. This is what happens if you use FreeNAS or OpenFiler…
After the test you get a very lengthy report with all the conclusions. Below you can see the part of the report from the second attempt, which shows that this time we have used the Microsoft iSCSI Target software.
Figure 14. The Cluster Validation report.
Once you have a report looking like the one above, you have been successful at creating the Hyper-V cluster infrastructure, and we are now ready to build the cluster and get Hyper-V installed! More on how to do that in the
next article