
Topic: Building a Hyper-V Cluster on two W2K8 Nodes

  
  1. #1

    Building a Hyper-V Cluster on two W2K8 Nodes


    In this first part we'll focus on getting the infrastructure in place. We'll talk about the systems making up the nodes and the shared storage we need to build. Once that is in place, we'll focus in the next article on how to get Hyper-V clustered on this infrastructure. Note that we are not building a cluster inside Hyper-V, as many people have written about; we are building a physical failover cluster with two nodes, and on top of that we run Hyper-V.

    My goals

    The moment I read that Hyper-V was cluster-aware, I knew I would not rest until I had tried building a failover cluster running Hyper-V. It's one of those exciting technologies you just have to test out for yourself, not being satisfied with simply reading about it (yes, that is a hint ;-) ). In fact, I had the sneaky feeling that Robert Larson (the MS blog in the link above) may have cut a few corners in his ten-step plan to expedite things.
    First of all, let's take a moment to realize the importance of Hyper-V being cluster-aware… In the past you would have several virtual machines running on some big iron, but once the big iron, being the single point of failure, failed, all your machines would go down hard. As tempting as virtual machines are, this is completely unacceptable to most people or businesses. Failover clustering adds that nice and secure feeling of having a backup system, removing that single point of failure.
    It must be nice for guys like Robert Larson, who works for Microsoft Consulting Services, to have all this hardware readily available to build his labs with. Unfortunately I don't have such spare hardware, and no big budgets to spend. Still, this project needs some dedicated hardware and software, so I set my budget at no more than 1500 Euro, using whatever spare or free equipment or software I can get, without compromising too much on performance.

    Figure 1. The bill (and configuration details) of one of the nodes
    I believe that technologies such as clustering and virtual machines are ready to reach out from the enterprise level into the mainstream. You could argue that I'm following Google's strategy of combining cheaper hardware with intelligent software; quantity in hardware for redundancy, and quality in software for availability.
    To prove that, I want to build something I can actually use, not just a simple proof of concept, while still attempting to remain within budget. With that in mind I decided to buy some hardware to build a few identical systems to create my cluster with (rather than buying pre-built, more expensive systems). A bit risky, as at the time of writing it's still a bit of a guess whether the motherboard you are buying has a BIOS supporting VT. More on how to figure that out can be found here. For that reason alone you may want to consider buying more expensive machines from the MS recommended hardware list (see previous link) if you are building this for a business. All in all I have not spent much more than 1400 Euros on two nice dual-core 2.66 GHz systems with 6 GB of memory, leaving me with 100 Euro to buy a Gigabit switch to build my SAN backbone with and still remain within budget.

    Figure 2. The two nodes; MSI Neo2 FIR main boards, Core2 Duo E6750 CPU's
    This should be sufficient to build my lab with; in fact, a small-sized business would do pretty well running a similar setup. (It's no enterprise hardware, but carefully selected and much better than what I see at some small businesses!) I would like to be able to run 3 to 4 servers on this Hyper-V cluster. Of course the really interesting part will be finding out the stability and performance of these systems. Once I'm satisfied with the design I'll make some baseline measurements using various tools such as Performance Monitor and see how things evolve… this should be fun!
    When money is a factor, running several virtual machines on one cluster sounds like a great way to save some cash and still have redundancy. A few things have changed with Windows 2008 clustering though; where in the past it was possible to simply share a SCSI hard drive, that option is no longer available. For more details on the requirements for a Hyper-V cluster you might want to check out Microsoft's step-by-step guide. The article you are reading now, however, is written in a much more entertaining style; I even have pictures and everything! So be sure to return once you've read that, to get a more down-to-earth perspective on building a Hyper-V cluster. The shared storage options left include dedicated hardware solutions such as a fiber-based SAN, or something a little bit closer to our budgetary needs: iSCSI.
    More on iSCSI.

    iSCSI is often called an emerging technology, and while I'm writing this I wonder if you can still call it 'emerging'; it has been around for years now. Strangely enough this great technology has not gained as much traction as it should have, but rest assured: with Hyper-V and Windows 2008 Failover Clustering now advocating it, and with iSCSI client (initiator) software natively available in Windows 2003 Server, Windows 2008 Server and Vista, and as a separate download for Windows XP, I'm sure this technology is going to break through widely in the next few years. It used to be a nice technology to have available if you needed some LAN-based storage, but now we are actually seeing some very compelling technology that gains greatly by the use of iSCSI. As mentioned above, it competes with the much more expensive Fibre Channel, which is used at the higher end of the market. The advantage of iSCSI is that it uses commonly available technology (Ethernet, preferably Gigabit), which means most IT people already have a sound knowledge of the basics needed to maintain the SAN. Not all IT people have similar experience with Fibre Channel switched fabric, HBAs (Fibre Channel Host Bus Adapters) and the enterprise solutions associated with that technology. As mentioned, the initiator software is natively available in the latest Microsoft operating systems, making the entry into SAN even more accessible. These are the factors that will drive iSCSI to become a mainstream technology. When building enterprise solutions, however, Fibre Channel in combination with a dedicated (hardware) NAS is still king, but at a price smaller businesses will not be willing to pay. For them, iSCSI is the alternative.
    So what is this iSCSI? Simply put, it's a fake hard drive that you can share from a server (target) on the network somewhere, and then link to from a client (initiator). On the client the initiator software makes the 'network disk' look as if it's a real hardware disk in your system. Instead of sharing a network drive on file or network level as you normally do with a file-server, you are now sharing a disk at block level, and this opens up a whole new realm of possibilities!

    Figure 3. Would you believe these last two disks aren't real? The node sure thinks they are!
    It's even possible for (for example) thin clients to boot from iSCSI drives, provided the right software is in place. Ideally one would have a NAS (Network Attached Storage) system with some RAID configuration for redundancy and/or performance. Even better: get the devices or servers sharing the iSCSI resources on their own fast (Gigabit) network and you'll have your very own SAN (Storage Area Network)… I could probably throw some more acronyms at you, but I'll leave it at this for now. I decided to build a small SAN on Gigabit Ethernet to allow some performance for my iSCSI drives. Gigabit Ethernet switches are cheap now, so it should not be a big drain on the budget. If you decide to go this route, make sure to read up on jumbo frames, especially if you have decided to build something more permanent than a lab situation. For now, we need to build a SAN solution!
    How about Open Source?

    Since we know we need iSCSI to remain on budget, we simply have to figure out what NAS to get! There are quite a few options, and a NAS can come as a hardware solution or simply as software installed on (usually) a Windows server. Typically a NAS shares its disks using several protocols, such as SMB, NFS etc. Since we only need iSCSI we might as well only look into that. For my solution I went the "server with iSCSI software" route, since I happen to have some spare (older) servers anyway. You'll find there are quite a few iSCSI solutions out there, open as well as closed source.
    I'll throw in the spoiler right here: if you were thinking of using open source solutions like OpenFiler (current version 2.3) or FreeNAS (current version 0.69b3), think again. Starting with Windows 2008, the cluster software requires your iSCSI target software to support something called "persistent reservation commands". These are based on the SCSI Primary Commands-3 specs (SPC-3), and most open source software I know of (with the possible exception of OpenSolaris) is based on SCSI-2 and does not support these reservation commands. OpenFiler, for example, is based on IET. The FreeNAS target is based on the NetBSD iSCSI target by Alistair Crooks and ported to FreeBSD, and also does not support these commands. So much for Linux and BSD…
    [update May 2009]
    Good news! There are a few new options available now that support persistent reservation. First of all, as you may have read from the comments on this blog, the latest nightly builds of FreeNAS now seem to work fine. Additionally, if you happen to be an MSDN or TechNet subscriber (as I am ;-) you can get the brand-new Windows Storage Server 2008, with the 3.2 iSCSI Target, which of course works fine too...
    [/update]
    [update Oct 2009]
    FreeNAS 0.7.0 RC2 (just out as of writing) now uses ISTGT, and according to this page: iSCSI Target for FreeBSD 7.x, it supports SPC-3...!
    [/update]
    If you are simply looking for a nice NAS for home use and would like to use iSCSI with only a few systems connecting to it, look no further: OpenFiler as well as FreeNAS work admirably to help you accomplish that goal. For clustering purposes, however, we need to look further…
    Windows iSCSI Targets

    Good news on the Microsoft-based front: there are two options (unfortunately not free ones) that do support the persistent reservation commands and also support concurrent connections to the shared disk (keep in mind that for a cluster, several nodes need to have access to the disk!).
    Windows based iSCSI targets that support SCSI-3 persistent reservation and concurrent connections are "StarWind" from Rocket Division Software, or Microsoft's own iSCSI Software Target 3.1.
    Notice that although Rocket Division Software offers a free evaluation version (the personal version), it won't work for building a cluster, as it only allows one (single) connection. A single-node cluster seems a bit of a waste of time… There is, however, a 30-day evaluation version that does support concurrent connections and would be suitable. For our purposes the 'Server' version of StarWind would be sufficient, since it supports two concurrent connections and we only have two nodes anyway…
    I opted to use the Microsoft 90 day evaluation version of the iSCSI Target, as it gives me more time to play around with my cluster. Both solutions support SCSI-3 persistent reservation, and both support multiple concurrent connections. Both implementations of the target software are very easy to install and configure, using a very intuitive interface. Jose Barreto has done an excellent job of describing how to configure the Microsoft software.
    The Microsoft iSCSI Software Target is hard to find, as it is an OEM solution sold (as an option) along with Windows Storage Server: a Windows 2003 Server version typically sold in combination with specialized hardware by vendors such as HP, Dell etc. You can't even find it on TechNet.
    The 90-day Microsoft evaluation version can, however, be downloaded from: http://files.download-ss.com/storageserver.iso
    This is the software intended for Windows Storage Server 2003, but it seems to work quite well on Windows 2003 Server SP2.

    Figure 4. The Microsoft iSCSI Target Management interface
    One thing that was not too intuitive was the way access is granted to the initiators, which was much more straightforward in StarWind.
    [update October 4th]
    Rocket Division Software graciously sponsored this article by providing a licensed copy of their Server version, which has now taken the place of the earlier mentioned 90-day evaluation version of the Microsoft target. The first thing I noticed was that logging off and on to the Target is much faster. I have not done any speed tests between the two yet, but if I find some time I might just do that. The StarWind interface is very intuitive; I did, however, find a few options presented in the GUI that made me wonder if Rocket Division Software have read the MS User Interface Style Guide: maybe intuitive, but definitely not standard. The iSCSI engine, however, has been running without a hitch for the last few days, and right now that's my only concern.

    Figure 5. More options are possible with regards to what type of virtual storage is used and snapshots of such storage in the professional and enterprise version.

    We have now basically built the following setup:

    Figure 6. The NAS layout
    As you can see, the storage network is physically separated from the LAN. The heartbeat is nothing more than a simple crossover cable between the two node servers.
    Now that we have the iSCSI Target up and running, we need to connect the iSCSI client (the 'initiator'). So we take a look at the Control Panel on the nodes.

    Figure 7. The iSCSI initiator in the control panel!

    Figure 8. Initiator as found in the control panel of the Windows 2008 node.
    The first thing we do is Discovery: simply provide the IP address of the iSCSI Target you have set up.
    If the discovery went just fine on your initiator but there are no targets, make sure to check the iSCSI Initiators tab in the properties of your Targets (on the 'server' or SAN); to test, simply add the FQDN (Fully Qualified Domain Name) of one of the initiator machines, for example one of my nodes: node1.servercare.nl. You can also add the iSCSI Qualified Name (IQN), which is quite similar to a DNS name with some extra numbers added. In fact, if you want to go overboard, go and run the iSNS server that can be downloaded from Microsoft. It's not required for our purposes though.

    Figure 9. The MS iSNS server works kind of like a DNS for iSCSI targets and initiators
    Now that the Discovery has been done, the Targets tab should show you your targets; simply log on to get the 'drives' you need.
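    If you'd rather script this than click through the applet, the built-in iscsicli tool on the nodes can handle the discovery and logon too. A minimal sketch, run from an elevated prompt on each node; the portal IP and the target IQN below are placeholders for whatever your own target reports:

        # Register the target portal (the IP of the machine running the iSCSI target software)
        iscsicli AddTargetPortal 192.168.10.10 3260

        # List the targets the portal exposes; copy the IQN you need from this output
        iscsicli ListTargets

        # Log on to the target; the 'Q' variant uses default settings (IQN below is a placeholder)
        iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage1-target1-target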

    Figure 10. As you can see the targets are there now!
    You'll find that when you now go to the Disk Manager on the nodes, there are new disks to be found. Simply make them active basic disks and format them using NTFS. Keep in mind that for this lab environment we have chosen to implement iSCSI software on a Windows 2003 server that may have other functionality. If you decide to build a more permanent solution, I urge you to seriously consider a NAS of a more dedicated nature with all the enterprise redundancy (double power supplies, RAID levels, UPS etc.) in place. Believe me: once you have a number of virtual machines running from this NAS you do not want to have this storage go down… ever… Having to reboot it because of some other software running on it would be very annoying, to say the least.
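    If you're doing this on a couple of nodes anyway, the online/format dance can be scripted with diskpart. A rough sketch, assuming PowerShell is installed on the node (otherwise feed the same script to diskpart /s from a plain command prompt); the disk number, label and drive letter are assumptions for my lab, so check the output of 'list disk' first:

        # Write a small diskpart script and run it; adjust the disk number to your own 'list disk' output
        $dpScript = @"
        select disk 1
        online disk
        attributes disk clear readonly
        create partition primary
        format fs=ntfs label=Quorum quick
        assign letter=Q
        "@
        $dpScript | Out-File -Encoding ascii "$env:TEMP\newdisk.txt"
        diskpart /s "$env:TEMP\newdisk.txt"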
    Validating the cluster environment

    Now that we have added two new disks to the two Windows 2008 servers making up the nodes of what will become our cluster, the next step is to get the Failover Clustering feature installed (note that it is a feature, not a role!). Simply do this using Server Manager.
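    If you prefer the command line over clicking through Server Manager, the same feature can be installed like this (the second variant assumes you're on R2, where ServerManagerCmd is deprecated):

        # Windows Server 2008: install the Failover Clustering feature from the command line
        servermanagercmd -install Failover-Clustering

        # Windows Server 2008 R2 equivalent:
        # Import-Module ServerManager
        # Add-WindowsFeature Failover-Clustering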

    Figure 11. Add Features in the Server Manager
    Once this feature is installed, you'll find the Failover Cluster Management MMC amongst the Administrative Tools.
    Figure 12. The Failover Cluster Manager as found in the administrative tools.
    One of the options you can choose is Validate a Configuration… This will test both nodes.
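    As a side note: if your nodes happen to run Windows 2008 R2, the same validation can be kicked off from PowerShell with the FailoverClusters module; on plain Windows 2008 the wizard is the way to go. A sketch, assuming your nodes are called node1 and node2:

        # Windows 2008 R2 only: load the cluster cmdlets and run the full validation suite against both nodes
        Import-Module FailoverClusters
        Test-Cluster -Node node1, node2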

    Figure 13. This is what happens if you use FreeNAS or OpenFiler…
    After the test you get a very lengthy report with all the conclusions. Below you can see the part that shows that on the second attempt we used the Microsoft iSCSI Target software.

    Figure 14. The Cluster Validation report.
    Once you have a report looking like the one above, you have been successful at creating a Hyper-V cluster infrastructure. We are now ready to build the cluster and get Hyper-V installed! More on how to do that in the next article.







  2. #2
    Building a Hyper-V Cluster on two W2K8 Nodes, part 2

    In the previous article we prepared the infrastructure to build a cluster. We used iSCSI to build a SAN, used a gigabit switch as the storage backbone, and connected the two nodes, conveniently called node1 and node2, to the server hosting the Rocket Division Software "StarWind" iSCSI Target software. The next step is to build ourselves a failover cluster, which in Windows 2008 is deceptively easy, provided you are using the Windows 2008 Enterprise or Datacenter editions, which support clustering.
    Creating a cluster

    To ensure that our infrastructure supports a failover cluster, we tested it using the Failover Cluster Manager, a feature we installed earlier. In the Failover Cluster Manager we have already validated the configuration, and the report told us everything is fine.
    The next step is to actually go ahead and build the thing already! To do that we only need the node names, the name we want to give the cluster (in our case it will be "cluster1") and of course an IP address we have selected for the cluster.
    To create the cluster, we simply select "Create a cluster"... yes, it is –that- obvious!

    Next we'll tell the software the names of the servers we want to be our nodes in the cluster.

    Then we'll give the cluster a name, and provide it with a fixed IP address.

    Simply click "next" and there we go!

    Undo

    What if you created a cluster and something went wrong?
    In our case we had forgotten to format the shared storage drives we provided the cluster with. Interestingly enough, the cluster validation did not see this as a problem!
    The result is that we do not have a witness disk! (Similar to what used to be called the Quorum).

    This means we need to destroy this cluster and start over! The Failover Cluster Manager gives you a menu option to do just that, as seen in the picture below:

    Now we can simply repeat the steps laid out earlier, and do not forget to have the disks formatted this time!
    You'll notice, by the way, that if you create a text file on the shared disk from node1, it might not show up on node2. That's actually to be expected; the disk may be a shared resource, but NTFS is not a cluster-aware file system, so any changes made to the disk from node1 will not show up on node2, even though you are writing to the same shared storage. It's actually the cluster software on the nodes that makes sure things get picked up correctly. If you want to be able to write to the same shared storage from different initiators and 'pick up' each other's changes, you would need a file system like MelioFS by Sanbolic on your shared disk. For our purposes, however, this will not be needed.
    Now that we have made sure the disks are OK, the cluster is successfully created.

    You'll find that on the witness disk (in our case the "Q:" drive) a number of files are created; these contain a copy of the cluster configuration. That's why it needed to be NTFS formatted; the cluster software wanted to store that information there! Since this is a two-node cluster, the quorum configuration will be Node and Disk Majority (the default for an even number of nodes). This means that the nodes as well as the witness disk contain copies of the cluster configuration, and the cluster has quorum as long as the majority (two out of three) of these copies are accessible. 'Having quorum' means that enough of the elements that must be online for the cluster to keep running are available.
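    If you want to peek at (or explicitly set) the quorum model, the R2 PowerShell cmdlets can do that too; a sketch, where "Cluster Disk 1" is an assumed name for the witness disk resource, so check Get-ClusterResource for yours:

        # Windows 2008 R2 only: show the current quorum model and witness resource
        Import-Module FailoverClusters
        Get-ClusterQuorum

        # Explicitly set Node and Disk Majority with a specific disk as witness (name is an assumption)
        Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"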
    If that last piece of techno-talk caught your interest, you'll love this

    Look what happened to our witness disk!
    WE HAVE A CLUSTER!

    Now let's get Hyper-V going!
    First we make a machine, and make sure the Z: (shared through iSCSI) drive is selected as the location to store the Virtual Machine:

    Some default selections later, we have our first Virtual Machine running on the first node.
    This means we can now go to 'Failover Cluster Management' and make the virtual machine 'highly available'. We do this by going to Services and Applications and then, either by right-clicking or by selecting the option from the Action column on the right, choosing 'Configure a Service or Application'.

    Now we get to select the earlier created machine. Note that we can only select machines that are not already 'highly available'!

    Congratulations, you are now running your first Hyper-V virtual Machine on a Windows 2008 Failover Cluster!
    Let's take a quick look at the settings for our virtual Server1

    We need to ensure that the Cluster Service is the one bossing this server around, not the Hyper-V Virtual Machine Management service, so change this to 'nothing' on each highly available machine. Another thing to keep an eye on is things like mounted ISOs or DVD drives that are not shared resources… they can create issues on a cluster.
    Keep in mind that management of the Virtual Machine is now largely done through the Failover Cluster Management MMC; now that the Virtual Machine is highly available, you cannot simply stop and start it as you would in the Hyper-V MMC!
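    On R2 you can also drive this from PowerShell instead of the MMC; a sketch, assuming the clustered VM ended up in a group named "Server1" (check the output of Get-ClusterGroup for the actual name):

        # Windows 2008 R2 only: see which node currently owns each clustered service or application
        Import-Module FailoverClusters
        Get-ClusterGroup

        # Stop and start the clustered VM through the cluster rather than through Hyper-V Manager
        Stop-ClusterGroup "Server1"
        Start-ClusterGroup "Server1"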
    For now, have fun playing with your clustered virtual machines; the first thing you might want to try is moving the machine from one node to the other. I leave it to you to figure that one out on your own… Good luck!




  3. #3
    Hyper-V Cluster Shared Volumes

    If you have built your Hyper-V cluster on Windows 2008, you might want to consider rebuilding it on Windows 2008 R2; there are basically two reasons that really jump out to entice you to make this decision:
    First of all there's the live migration everybody has been talking about; secondly, much less visible in the mainstream media but in my opinion at least as important: Cluster Shared Volumes.
    In fact I would go as far as to say that live migration is nowhere without CSV (yes, it's confusing, isn't it… we used to use that acronym for Comma Separated Values!).
    Why Cluster Shared Volumes

    What's so special about CSV? Simply put, it allows multiple nodes to access the same volume at the same time; kind of like what specialized file systems such as those used by MelioFS, NetApp or Sanbolic allow you to do.
    You might have read the article I wrote some time ago about the issues I had when building my previous cluster on Windows 2008. If you didn't, here's a quick recap: I bumped into the need to have several LUNs for the different machines I wanted to distribute between my nodes. Each node could only access one iSCSI LUN at a time. Therefore I was basically forced to create many LUNs, so that I could fail over one particular machine without having to take the other machines with it, since they were also using storage on that LUN.
    Live migration and CSV's

    I referred to live migration as one of the reasons to implement CSV; the reason for this is the time a failover from one node to the other will take. If this is done with volumes that are not CSVs, the nodes will need to first dismount and then mount the volume. Once shared volumes are used, the nodes will be able to fail over immediately, since they have simultaneous access to them.
    If you were to do a 'ping -t' to the cluster, this results in it being unresponsive for 3 or 4 pings (which, as I'm sure you'll agree, can feel like 'forever'!) versus 1 ping when using CSVs… Big difference!
    TechNet lists a number of other advantages, but to me the above are the most important.
    Creating CSV's

    First we enable the shared volumes from the main page from the Failover Cluster Manager:

    You'll notice that in the Failover Cluster Manager there now is a node 'Cluster Shared Volumes'.
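    If you prefer to flip the Cluster Shared Volumes switch from PowerShell, it is exposed as a cluster property on R2 (EnableSharedVolumes, as far as I know); a sketch:

        # Windows 2008 R2: enable Cluster Shared Volumes on the cluster (property name is an assumption)
        Import-Module FailoverClusters
        (Get-Cluster).EnableSharedVolumes = "Enabled"

        # Verify the setting took
        (Get-Cluster).EnableSharedVolumes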

    If you right-click on the 'Cluster Shared Volumes' node, it will allow you to 'add storage'; that is, if you actually have storage available. In my case I simply made an iSCSI disk available that I created on my Windows Storage Server 2008.
    To do this I used the iSCSI Target software on the Storage Server and created a new disk. As an interesting side note: Windows Storage Server 2008 creates VHDs as the storage unit to tie to a target, so essentially I will be creating VHDs within VHDs when I make VMs!

    For people who would like to learn more about iSCSI: take a quick look at my article about building the Hyper-V cluster.
    Once we've made the new disk available to the nodes by adding it using the iSCSI initiator in the Control Panel, we'll be able to put it online, initialize it, make it a simple volume, and quick format it… (this only needs to be done on one node, by the way).
    Now we're ready to take it up in the cluster; we do this by simply adding it as a regular disk:

    Once we click "Add a disk" we'll be able to select the disk we want

    Notice that the disk we just added still shows up as a local disk (F:) in the storage overview.

    Another thing you can see here is that the Cluster Shared Volume also appears to be local! In fact, it's on the system drive, in our case the C: drive. Don't worry, that's supposed to happen… More on this later.
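    By the way, the "Add a disk" step we just did in the GUI has a scripted equivalent on R2; a sketch:

        # Windows 2008 R2: list disks every node can see that aren't clustered yet, then add them all
        Import-Module FailoverClusters
        Get-ClusterAvailableDisk
        Get-ClusterAvailableDisk | Add-ClusterDisk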
    We now have our disk available to the cluster... remember that both nodes need to be able to 'see' the iSCSI disk. I've made it easy on myself by simply adding the virtual drives in the Storage Manager to a target that I had already added to my nodes (in my case it was called T2). If you decided to create a new target, make sure that the target is accessible and initiated from both nodes.
    The next step is to simply add the storage we just made available in the cluster to the Cluster Shared Volumes:

    Select "Add storage" and select the Cluster Disk we want to have shared.

    Now we see the new shared disk available to the cluster:

    I've taken the liberty of renaming the shared disk to "Cluster Disk 2 data disk".
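    For the PowerShell-inclined, the "Add storage" step for Cluster Shared Volumes also has a one-liner equivalent on R2; the disk name below is the one I just renamed, so substitute your own:

        # Windows 2008 R2: promote the cluster disk to a Cluster Shared Volume (name is my lab's disk)
        Import-Module FailoverClusters
        Add-ClusterSharedVolume -Name "Cluster Disk 2 data disk"

        # The CSV now shows up as a folder on every node's system drive
        Get-ClusterSharedVolume
        Get-ChildItem C:\ClusterStorage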
    Once this is done, you'll also notice that the disk that formerly was seen as a local F: drive now seems to link to the system (C:) drive pointed out earlier. This works like a mount point, where a disk is made available to the system through a folder.
    If we look here, we'll see that the available volumes (we've got two in our example) are listed in that folder:

    As mentioned, both nodes will be able to write there, but never use this access for anything besides the Hyper-V cluster. Manually making changes here will lead to problems. The TechNet article specifically states:

    • No files should be created or copied to a Cluster Shared Volume by an administrator, user, or application unless the files will be used by the Hyper-V role or other technologies specified by Microsoft. Failure to adhere to this instruction could result in data corruption or data loss on shared volumes. This instruction also applies to files that are created or copied to the \ClusterStorage folder, or subfolders of it, on the nodes

    Now that you have shared volumes available, you'll be able to create Hyper-V VMs on them.
    One pointer I'd like to leave you with: make sure to change the default Hyper-V settings to point to this new location, as you'll find that creating new machines, even when you point to the correct location during creation, might result in errors. This is fixed by simply making the ClusterStorage location the default.








