
Topic: Uncovering Exchange 2010 Database Availability Groups (DAGs)

  
  1. #1

    Uncovering Exchange 2010 Database Availability Groups DAGs

    Code:
    http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/uncovering-exchange-2010-database-availability-groups-dags-part1.html


    Henrik Walther


    PART-1


    A Bit of History

    Let us begin with a walk down memory lane. Prior to the release of Exchange 2007, the high availability and disaster recovery features included with the Exchange Server product were quite limited. Previous versions of Exchange (Exchange 2003 and earlier) could take advantage of Microsoft Cluster Services (MSCS), but this only provided redundancy at the hardware level since the nodes shared the same storage subsystem. If the active cluster node suddenly became unavailable, the Exchange Virtual Server (EVS) and any relevant cluster resources would fail over to the passive node and the end users could then continue their work.
    The storage subsystem, however, remained a single point of failure. In order to achieve redundancy at the storage level, organizations were forced to invest in replication solutions provided by third-party software vendors and/or storage hardware vendors. Since third-party solutions are not supported by Microsoft and are typically quite expensive to implement, the Exchange Product group wanted to provide better high availability and disaster recovery features natively in the Exchange Server product.
    Most of us probably agree that with the release of Exchange 2007 those visions became a reality! Exchange 2007 gave us a whole slew of brand new high availability and disaster recovery features such as Local Continuous Replication (LCR), which targeted small organizations, and Cluster Continuous Replication (CCR), which targeted medium and large organizations. Later on (with Exchange 2007 SP1) came Standby Continuous Replication (SCR), targeted at organizations of pretty much all sizes. All three features used a new asynchronous replication technology, which worked by shipping log files to a passive copy of a storage group and, after inspection, replaying them into that passive copy.
    Although LCR provided redundancy at the storage level, the feature never really got much attention. The reason was that since the storage group copies had to be stored on a volume local to the Mailbox server, it presented a single point of failure at the hardware level. Since Exchange 2007 was released, CCR has been a huge success. The interesting thing about CCR was that it combined the new asynchronous replication technology introduced by Exchange 2007 with Windows Failover Clustering technology, thereby providing redundancy at the hardware level as well as at the storage level and delivering a true high availability solution without any single point of failure.
    CCR cluster nodes could be located in separate datacenters in order to provide site-level redundancy, but since CCR was not developed with site resiliency in mind, there were too many complexities involved with a multi-site CCR cluster solution (for details on multi-site CCR cluster deployment take a look at a previous article series of mine). This made the Exchange Product group think about how they could provide a built-in feature geared towards offering site resilience functionality with Exchange 2007.
    When Exchange 2007 SP1 was released we got exactly that: a feature called Standby Continuous Replication (SCR), which made it possible to ship log files to another Exchange 2007 Mailbox server. Because SCR did not require Windows Failover Clustering, the log files could be shipped from both clustered and non-clustered Mailbox servers (the SCR source) to clustered and non-clustered Mailbox servers (the SCR target). What was really interesting about SCR was that you could specify a log replay lag time of up to 7 days, which made it possible to fix most database/store related issues before they hit the SCR target located in another datacenter.
    Note:
    Exchange 2007 Service Pack 2 is mainly a service pack that prepares an existing Exchange 2007 organization for deployment of Exchange 2010, so we did not see any additional changes or improvements to high availability or site resilience functionality in this service pack.
    With features such as CCR and SCR already at our disposal, one would think we wouldn't need any major improvements or changes in regard to the high availability and site resilience story with the latest and greatest version of Exchange Server to date, Exchange 2010, right?
    Well, the Exchange Product Group has been busy developing Exchange 2010 over the last couple of years, and a significant part of that time has been spent on improving the native high availability and site resilience features.


    High Availability Changes in Exchange Server 2010

    With Exchange 2010, we no longer have the concept of Local Continuous Replication (LCR), Single Copy Clusters (SCC), Cluster Continuous Replication (CCR) or Standby Continuous Replication (SCR) for that matter. WHAT!? I hear some of you yell! No, I am not kidding. But to be more specific, only LCR and SCC have been removed from the Exchange Server product. CCR and SCR have been combined and have evolved into a more unified high availability framework in which the new Database Availability Group (DAG) acts as the base component. This means that whether you are going to deploy a local or a site-level highly available or disaster-recoverable solution, you use a DAG. To make myself clear: with Exchange 2010, your one and only method of protecting mailbox databases is the DAG.

    Figure 1: Mailbox databases protected by DAG
    The primary component in a DAG is the new Active Manager. Instead of the Exchange cluster resource DLL (exres.dll) and the associated cluster service resources that were required when clustering Exchange 2007 and previous versions of Exchange, Exchange 2010 now relies on the Active Manager to manage switch-overs and fail-overs (*-overs) between Mailbox servers that are part of a DAG. Active Manager runs on all Mailbox servers in a given DAG. There are two Active Manager roles, the Primary Active Manager (PAM) and the Standby Active Manager (SAM). For a detailed explanation of the PAM and SAM roles, please see the relevant Exchange 2010 online documentation over at Microsoft TechNet.
    So what's interesting about a DAG then? Well, there are many things; the most notable are listed below:
    Limited dependency on Windows Failover Clustering - A DAG only uses a limited set of the clustering features provided by the Windows Failover Clustering component: the cluster database, heartbeat, and file share witness functionality. With Exchange 2007 (and earlier versions), Exchange was an application operated by a Windows Failover Cluster. This is no longer the case with Exchange 2010. The Exchange cluster resource DLL (exres.dll) and all the cluster resources it created when it was registered have been removed from the Exchange 2010 code.

    Figure 2: DAG still relies on cluster database, heartbeat and replication of the Windows Failover Cluster component

    Figure 3: No Exchange Cluster Resources in the Windows Failover Cluster
    Incremental deployment - Because DAGs still use some of the WFC components such as the cluster database, heartbeat and file share witness functionality, Windows Server 2008 SP2 or R2 Enterprise edition is required to configure Exchange 2010 Mailbox servers in a DAG. But Exchange 2010 supports an incremental deployment approach, meaning that you do not need to form a cluster prior to installing Exchange 2010. You can install the Exchange 2010 Mailbox servers, and then create a DAG and add the servers and any databases to the DAG when needed.
    Co-existence with other Exchange roles - With CCR you could not install other Exchange Server roles on the mailbox servers (cluster nodes) that were protected using CCR. With DAG, a Mailbox server that is part of a DAG can have other Exchange roles installed. This is especially beneficial for small organizations: because a DAG-protected Mailbox server can co-exist with other Exchange roles, you can have a fully redundant Exchange 2010 solution with only two machines dedicated as Exchange servers. Of course, you need to configure a file share witness, but this can be any server in your environment. The file share witness does not need to run the same Windows version as the Exchange 2010 servers; it just needs to run Windows Server 2003 or later. Another thing you should bear in mind is that if you go down the path where you use two Exchange 2010 servers and you want a fully redundant solution, you must use an external hardware or software based load balancing solution for Client Access services.
    Managed 100% via Exchange tools - With CCR in Exchange 2007, you had to configure and manage CCR clusters using a combination of Exchange and cluster management tools. With DAGs in Exchange 2010, you no longer need to use cluster management tools for either the initial configuration or management. You can manage DAGs fully using the Exchange Management tools. This means that the Exchange administrators within an enterprise no longer need to have cluster experience (although this still could be a good idea).

    Figure 4: DAG, replication networks, and FSW settings etc. managed via Exchange tools
    Replication at the database level - In order to support the new DAG feature, databases in Exchange 2010 have been moved to the organizational level instead of the server level where they existed in Exchange 2007 and earlier versions. This also means Exchange 2010 no longer has the concept of storage groups. Now there are databases and a log stream associated with each database. One drawback of CCR was that if just one database failed on the active node, all active databases on the clustered mailbox server were failed over to the passive CCR node. This meant that all users that had a mailbox stored on the respective CMS were affected.

    Figure 5: Databases on the organization level
    Support for up to 16 members in each DAG - Now that you can add up to 16 Mailbox servers to a DAG and potentially have 16 copies of each mailbox database, Exchange 2010 had to support a larger number of mailbox databases than Exchange 2007 did. So the maximum limit has been raised from 50 to 100 mailbox databases in the Exchange 2010 Enterprise edition. However, the Standard edition still only supports up to 5 databases per Mailbox server.
    Switch/fail-overs much quicker than in the past - Because of the improvements made with Exchange 2010 DAGs, we will now experience much quicker switch-overs and fail-overs (*-overs) between mailbox database copies. They will typically occur in under 30 seconds, compared to several minutes with CCR in Exchange 2007. In addition, because Outlook MAPI clients now connect to the RPC Client Access service on the Client Access servers, end users will rarely notice that a *-over occurred. You can read more about the RPC Client Access service in a previous article series of mine here.
    Go backup-less with 3+ DB copies - With three or more copies of a mailbox database, it becomes possible to go backup-less. This means that you basically enable circular logging on all mailbox databases protected by the DAG and no longer perform backups as we know them. This thinking of course requires enterprise organizations to change their mindset in regards to how mailbox databases should be protected.
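    As a quick sketch of what going backup-less implies in practice, circular logging is enabled per database from the Exchange Management Shell (MDB01 is a hypothetical database name, not one from the article):

    # Example only; MDB01 is a hypothetical database name
    Set-MailboxDatabase MDB01 -CircularLoggingEnabled $true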
    Support for DAG members in separate AD sites - Unlike CCR cluster nodes, you can have DAG member servers located in different Active Directory sites. This should be a welcome addition to those of you who do not have the option of using the same AD site across physical locations (datacenters). It should be noted though, that you cannot place Mailbox servers protected by the same DAG in different domains within your Active Directory forest.
    Log shipping via TCP - In Exchange 2007, the Microsoft Exchange Replication Service copied log files to the passive database copy (LCR), passive cluster node (CCR) or SCR target over Server Message Block (SMB), which meant you needed to open port 445 in any firewall between the CCR cluster nodes (typically when deploying multisite CCR clusters) and/or SCR source and targets. Those of you who work for or with a large enterprise organization know that convincing network administrators to open port 445/TCP between two datacenters is far from a trivial exercise. With Exchange 2010 DAG, the asynchronous replication technology no longer relies on SMB. Exchange 2010 uses TCP/IP for log file copying and seeding and, even better, it provides the option of specifying which port you want to use for log file replication. By default, DAG uses port 64327, but you can specify another port if required.
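    As a hedged example, the replication port is changed per DAG with the Set-DatabaseAvailabilityGroup cmdlet (DAG1 and port 50000 are example values; any firewalls between the DAG members must of course allow the chosen port):

    # Example only; DAG1 and 50000 are hypothetical values
    Set-DatabaseAvailabilityGroup DAG1 -ReplicationPort 50000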
    Log file compression - With Exchange 2010 DAGs you can enable compression for seeding and replication over one or more networks in a DAG. This is a property of the DAG itself, not of a DAG network. The default setting is InterSubnetOnly, and the same values are available as for the network encryption property.
    Log file encryption - Exchange 2010 DAGs support the use of encryption, whereas log files in Exchange 2007 were copied over an unencrypted channel (unless IPsec had been configured). More specifically, DAG leverages the encryption capabilities of Windows Server 2008; that is, DAG uses Kerberos authentication between the Mailbox server members of the respective DAG. Network encryption is a property of the DAG itself, not of the DAG network. Settings for a DAG's network encryption property are: Disabled (network encryption not in use), Enabled (network encryption enabled for seeding and replication on all networks in a DAG), InterSubnetOnly (the default setting, meaning network encryption is used on DAG networks spanning different subnets), and SeedOnly (network encryption in use for seeding on all networks in a DAG).
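    Both compression and encryption are configured with Set-DatabaseAvailabilityGroup; a minimal sketch, assuming a DAG named DAG1:

    # Example only; DAG1 is a hypothetical DAG name
    Set-DatabaseAvailabilityGroup DAG1 -NetworkCompression Enabled
    Set-DatabaseAvailabilityGroup DAG1 -NetworkEncryption InterSubnetOnly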
    Up to 14 day lagged copies - With Standby Continuous Replication, which was included in Exchange 2007 SP1, the concept of lagged database copies was introduced. With this feature we could delay the point at which log files copied to the SCR target were replayed into the database on the SCR target. We also had the option of specifying a so-called truncation lag time, which allowed us to delay the point at which log files that had been copied to the SCR target and replayed into the copy of the database were truncated. With both options we could specify a lag time of up to 7 days. With Exchange 2010 DAGs, we can now specify a truncation lag time of up to 14 days, which is especially interesting when you choose to go backup-less.
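    Lag times are configured per database copy in the day.hours:minutes:seconds format; a sketch, assuming a copy of MDB01 hosted on a hypothetical server named EX02:

    # Example only; MDB01\EX02 and the lag values are hypothetical
    Set-MailboxDatabaseCopy -Identity MDB01\EX02 -ReplayLagTime 7.00:00:00 -TruncationLagTime 14.00:00:00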
    Seeding from a DB copy - Unlike CCR in Exchange 2007, we can now perform a seed by specifying a database copy as the source database. This means that a new seed or a re-seed of an existing mailbox database no longer has any impact on the active database copy.

    Figure 6: Seeding from a selective source server
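    From the shell, seeding from a specific copy maps to the Update-MailboxDatabaseCopy cmdlet, where -SourceServer selects which copy to seed from; a sketch with hypothetical server names:

    # Example only; EX03 (target) and EX02 (source) are hypothetical servers
    Update-MailboxDatabaseCopy -Identity MDB01\EX03 -SourceServer EX02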
    Public folder databases not protected by DAG - Unlike CCR in Exchange 2007, we cannot protect public folder databases using a DAG. Public folder databases must be protected using the traditional public folder replication mechanisms. The positive thing is that we are no longer limited to a single public folder store in the Exchange organization when one is stored on a DAG member server; with CCR, we could only have one public folder store when it was protected by CCR.
    Improved Transport Dumpster - The transport dumpster we know from Exchange 2007 has also been improved, so that messages are re-delivered even when a lossy database failover occurs between database copies stored in different Active Directory sites. In addition, when all messages have been replicated to all database copies, they will be deleted from the transport dumpster.
    With this, part one of this multi-part article ends. In the next part, we will begin deploying a DAG in our lab environment and look at the various DAG-specific settings and so forth. Until then, have a nice one.






  2. #2
    Code:
    http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/uncovering-exchange-2010-database-availability-groups-dags-part2.html
    PART-2

    Introduction

    In the first part of this multi-part article uncovering Exchange 2010 Database Availability Groups (DAGs), we had a look at what Exchange 2007 and earlier versions provided when it comes to native high availability functionality for Mailbox servers.
    In this part, I will provide the steps necessary to prepare two servers to be used in a two-member DAG solution. Both Exchange 2010 servers are placed in the same Active Directory site in the same datacenter. Multi-site DAG deployments will be uncovered in a separate future article here on MSExchange.org, since this type of deployment involves many considerations and additional details. Also, even though these servers will be configured as multi-role Exchange 2010 servers, I will not uncover redundancy configuration for the Hub Transport and Client Access server roles. Redundancy for the Client Access server role and the new RPC Client Access service has already been covered in a previous article series of mine (which can be found here). Redundancy in regards to the Hub Transport server will be uncovered in a future article.


    Test Environment

    The lab environment used as the basis for this multi-part article consists of one Windows 2008 R2 domain controller, one Windows Server 2008 R2 file server (which will be used as the witness server), and 2 domain-joined Windows 2008 R2 Enterprise edition servers on which Exchange 2010 will be installed. We use the Enterprise edition of Windows because, like CCR and SCR, DAG utilizes parts of the Windows Failover Clustering component (heartbeat, file share witness, and cluster database), and these require Enterprise edition. The Exchange 2010 Client Access, Hub Transport and Mailbox server roles will be installed on both of the Windows Server 2008 R2 servers. Furthermore, note that the Exchange 2010 DAG functionality itself is not dependent on Exchange 2010 Enterprise edition (only Windows Server 2008 R2 Enterprise edition). This means that you can use the Standard edition of Exchange 2010 (or the evaluation version) if you are going to follow along in your own lab. The Standard edition just limits you to a total of 5 databases (including active and passive copies) on each Exchange 2010 server.
    Each Exchange 2010 server has 2 network cards installed, one connected to the production network and another connected to an isolated heartbeat/replication network. Although it’s a general rule of thumb to use at least two NICs in each DAG member (one for MAPI access and one or more for replication, seeding and heartbeat purposes), it is worth mentioning that Microsoft supports using a single NIC for both MAPI access, replication, seeding, and heartbeats. This is good to know for the small organizations that want to utilize DAG.
    As one of my intentions with this article series is to demonstrate the support for deploying DAGs in an incremental fashion (a method explained in more detail in part 1), I will not install the Windows Failover Cluster component on any of the servers prior to creating the DAG.
    Configuring Network settings

    As already mentioned, two NICs have been installed in each server. Let us look at the way each NIC has been configured. To do so, open Network Connections. Here we have the NICs listed, and as you can see we have a PROD (connecting to the production network and providing MAPI connectivity) and a REPLICATION (connected to an isolated network) interface.

    Figure 1: Network connections
    Let us first open the property page of the PROD interface. Here it is typically fine to leave the default settings as is. Optionally, you can uncheck QoS Packet Scheduler and Internet Protocol Version 6 (TCP/IP v6).

    Figure 2: Properties of PROD interface
    Open the property page of Internet Protocol Version 4 (TCP/IPv4). Here we have a static IP address configured as well as the other necessary settings (default gateway, subnet mask, and DNS server).

    Figure 3: TCP/IP Version 4 Properties for the PROD interface
    When you have configured the NIC accordingly, close the property page by clicking OK twice.
    It's time to configure the network settings for the REPLICATION interface, so let us open the property page of the REPLICATION NIC. Uncheck Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks as shown in Figure 4. In addition, you may optionally uncheck QoS Packet Scheduler and Internet Protocol Version 6 (TCP/IPv6).

    Figure 4: Properties for the REPLICATION interface
    Now open the property page of Internet Protocol Version 4 (TCP/IPv4) and enter an IP address and subnet mask on the isolated replication subnet. Since this NIC is used solely for replication, seeding and heartbeats, you should not specify any default gateway or DNS servers.
    Note:
    If routing on the “REPLICATION” interface for some reason is necessary between the two servers, you should use static routes instead of specifying a default gateway.
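    A hedged example: a persistent static route towards a remote replication subnet can be added with route.exe (the subnet and gateway below are purely hypothetical):

    route -p add 10.0.2.0 mask 255.255.255.0 10.0.1.1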

    Figure 5: TCP/IP Version 4 properties for the REPLICATION interface
    Now click “Advanced” and uncheck “Register this connection’s addresses in DNS” and then click “OK” twice.

    Figure 6: Advanced TCP/IP Properties for REPLICATION interface
    Now that we have configured each NIC, we must make sure the “PROD” NIC is listed first on the binding order list. To bring up the binding order list, you must press the ALT key, and then select Advanced > Advanced Settings.

    Figure 7: Selecting Advanced Settings in the Network Connection menu

    If not already the case, move the PROD NIC to the top as shown in Figure 8.

    Figure 8: Binding order for the network interfaces
    Click OK and close the Network Connections window.
    Note:
    You should of course make sure the above steps are performed on both servers.


    Preparing Storage

    In the specific lab used in this article series, I have created a total of four virtual disks (2x30GB for logs and 2x100GB for active/passive database copies) for each server. To partition the disks used for mailbox databases and logs, open the Windows 2008 Server Manager, expand Storage and then select Disk Management. Now right-click on each LUN and select Online in the context menu.
    Assign each LUN (disk) identically on each server as shown in Figure 9 below.

    Figure 9: LUNs (disk) on each lab server
    Installing Exchange 2010 Server roles

    Before we can install the Exchange 2010 server roles on the two Windows 2008 R2 servers, we must first make sure the required features and components have been installed. The following Windows features should be installed for all Exchange 2010 server roles:

    • NET-Framework
    • RSAT-ADDS
    • Web-Server
    • Web-Basic-Auth
    • Web-Windows-Auth
    • Web-Metabase
    • Web-Net-Ext
    • Web-Lgcy-Mgmt-Console
    • WAS-Process-Model
    • RSAT-Web-Server
    • Web-ISAPI-Ext
    • Web-Digest-Auth
    • Web-Dyn-Compression
    • NET-HTTP-Activation
    • RPC-Over-HTTP-Proxy

    To install the above features, first open an elevated Windows PowerShell window and type:
    Import-Module ServerManager

    Figure 10: Importing the Server Manager module to Windows PowerShell

    Then type the following command to install the required features:
    Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy -Restart

    Figure 11: Installing the required Windows Server 2008 R2 features
    After the servers have rebooted, we must open an elevated Windows PowerShell window again and set the Net.Tcp Port Sharing Service to start automatically. This can be accomplished with the following command:
    Set-Service NetTcpPortSharing -StartupType Automatic

    Figure 12: Setting the NetTcpPortSharing service to start automatically
    Since we are going to install both the Hub Transport and Mailbox server role on these servers, we must also install the Microsoft Filter Pack.
    We're now ready to install the Exchange 2010 roles on the servers. So mount the Exchange 2010 media and then run Setup. This brings us to the splash screen, where we should click on Step 3: Choose Exchange language option so that we can install the necessary languages on the servers.

    Figure 13: Selecting Step 3: Choose Exchange language option
    When step 3 has been completed, we can continue with Step 4: Install Microsoft Exchange.

    Figure 14: Selecting Step 4: Install Microsoft Exchange
    On the “Introduction” page, click Next.

    Figure 15: Introduction page
    Accept the “License Agreement” and click Next.

    Figure 16: License Agreement page
    Select whether you want to enable “Error Reporting” or not, and then click Next.

    Figure 17: Error Reporting page
    Since we want to install the Exchange 2010 server roles included in a typical installation, select this option and click Next.
    Note:
    Unlike Exchange 2007, you no longer specify whether you are installing a clustered mailbox server, since the DAG functionality is configured when you create a DAG after setup has completed.

    Figure 18: Installation Type page
    We will now face a new page that was not included in the Exchange 2007 Setup wizard. We have the option of specifying whether this server will be Internet-facing. In large organizations (LORGs), you typically have one Internet-facing Active Directory site, and all Client Access servers in this site should usually have this option enabled. Since both servers in our little lab will be Internet-facing, we will enable the option and specify the FQDN through which Exchange client services such as Outlook Web App (OWA), Exchange ActiveSync (EAS), and Outlook Anywhere will be accessed.
    When you have done so, click Next.

    Figure 19: Configuring Client Access server external domain
    Now choose whether you want to participate in the customer experience improvement program or not, then click Next.
    Note:
    You must also install a subject alternative name (SAN) certificate in order to get client services working properly. However, this is outside the scope of this article series.

    Figure 20: Customer Experience Improvement Program page
    The readiness check will now begin, and hopefully you will not face any errors that will block you from proceeding. When possible, click Install.

    Figure 21: Readiness Check page
    The Exchange server roles are now being installed.

    Figure 22: Installing Exchange 2010 server roles
    When setup (hopefully) has completed successfully, click Finish.

    Figure 23: Exchange 2010 Server roles installed successfully

    With this, part two of this multi-part article ends. In the next part, we will create and configure the DAG as well as look at the various DAG-specific settings and so forth. Until then, have a nice one.





  3. #3
    Code:
    http://www.msexchange.org/articles_tutorials/exchange-server-2010/high-availability-recovery/uncovering-exchange-2010-database-availability-groups-dags-part3.html
    PART-3


    Introduction

    In the second part of this multi-part article uncovering Exchange 2010 Database Availability Groups (DAGs), we prepared the two servers and installed Exchange 2010 on both.
    In this article, we will continue where we left off. We will move the databases to the LUNs attached to each server, create the DAG and test that it works as expected.


    Changing the Exchange Database paths

    With Exchange 2010 installed on the servers, the first step is to move and rename each mailbox database. To do so, launch the Exchange Management Console and navigate to the Mailbox node under the Organization Configuration work center. The first tab here is Database Management, which should be selected. Under this tab, right-click on each mailbox database and select Move Database Path in the context menu as shown in Figure 1.

    Figure 1: Move Database Path
    In the Move Database Path wizard, change the path for the database and logs so that they are placed on the LUNs we created in part 2. I also suggest you change the name of each database file to MDB01.edb and MDB02.edb as shown in Figure 2.
    When done, click Move.
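    If you prefer the Exchange Management Shell, the same move can be done with the Move-DatabasePath cmdlet; a sketch, assuming D: and E: are hypothetical database and log LUNs:

    # Example only; database name and paths are hypothetical
    Move-DatabasePath MDB01 -EdbFilePath D:\MDB01\MDB01.edb -LogFolderPath E:\MDB01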

    Figure 2: Changing the path for the database and log files
    When they have been moved, click Finish to exit the wizard (Figure 3).

    Figure 3: Database and log file path changed successfully
    Now right-click on the databases and this time select Properties. Change the name to the name of the edb file itself (in this case to MDB01 and MDB02), and then click OK.

    Figure 4: Changing the name of the databases
    That was better. This makes it a little easier to specify database names etc. when using the Exchange Management Shell.
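    For reference, the rename can also be done from the shell with Set-MailboxDatabase (the source name below is a placeholder for the setup-generated default name):

    # Example only; the source name is a placeholder
    Set-MailboxDatabase "Mailbox Database 0123456789" -Name MDB01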

    Figure 5: Database names and paths changed
    Adding the Exchange Trusted Subsystem group to Non-Exchange Servers

    Since we only have two Exchange 2010 servers in our organization, we will not use an Exchange 2010 Hub Transport server (the recommended server role for the witness server) as the witness server, but instead a traditional Windows Server 2008 R2 file server. This means that we must add the Exchange Trusted Subsystem group to the local Administrators group on the file server. To do so, log on to the file server and open the Server Manager. Expand Configuration > Local Users and Groups, and then open Properties for the Administrators group.

    Figure 6: Finding the local administrators group on the Windows Server 2008 R2 file server
    Enter the Exchange Trusted Subsystem group in the text box as shown in Figure 7, then click OK.

    Figure 7: Entering the Exchange Trusted Subsystem group
    Click OK again.

    Figure 8: Property page of the Administrators group
    This step is necessary only when using a non-Exchange server as the witness server. By the way, it is not recommended to use a domain controller as the witness server, because doing so grants the Exchange Trusted Subsystem group extensive permissions in the Active Directory domain.
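    For scripted deployments, the same group membership change can be made with net.exe on the file server (CONTOSO is a hypothetical domain name):

    net localgroup Administrators "CONTOSO\Exchange Trusted Subsystem" /add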
    Creating the Database Availability Group

    With that, we are ready to create the DAG itself. This can be done via either the Exchange Management Console or the Exchange Management Shell. In this article we will use the console. So under the Mailbox node, select the Database Availability Group tab, then right-click somewhere in the white area. In the context menu, select New Database Availability Group as shown in Figure 9.

    Figure 9: Selecting New Database Availability Group in the context menu
    In the Database Availability Group wizard, enter a name for the new DAG. Also, specify the witness server and the witness directory on that server (Figure 10). When you have done so, click New.

    Figure 10: Specifying the DAG name as well as witness server and directory
    On the Completion page, we get a warning that the specified witness server is not an Exchange Server. You can ignore this.
    Click Finish to exit the wizard (Figure 11).

    Figure 11: Database Availability Group wizard – Completion page
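    For reference, the shell equivalent is the New-DatabaseAvailabilityGroup cmdlet; a sketch, assuming a hypothetical file server named FS01 as the witness:

    # Example only; FS01 and C:\DAG1 are hypothetical values
    New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS01 -WitnessDirectory C:\DAG1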
    Now that we have created the DAG we can move on and add the two Mailbox servers as member servers. To do so, right-click on the newly created DAG and select Manage Database Availability Group Membership in the context menu as shown in Figure 12 below.

    Figure 12: Selecting Manage Database Availability Group Membership
    This opens the Manage Database Availability Group Membership wizard, where we click Add.

    Figure 13: Manage Database Availability Group Membership wizard
    Now select the two servers and click OK.

    Figure 14: Adding member servers to the Database Availability Group
    Click Manage.

    Figure 15: Member servers added to the Database Availability Group
    The failover clustering component will now be installed on each server. Then the DAG will be created and configured accordingly. This can take several minutes, so be patient.

    Figure 16: Waiting for the Manage Database Availability Group Membership wizard to complete
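    From the shell, each member would be added with Add-DatabaseAvailabilityGroupServer, which also installs the failover clustering component when needed; a sketch with hypothetical server names:

    # Example only; EX01 and EX02 are hypothetical Mailbox servers
    Add-DatabaseAvailabilityGroupServer DAG1 -MailboxServer EX01
    Add-DatabaseAvailabilityGroupServer DAG1 -MailboxServer EX02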
    If you do not have DHCP available on the network while the member servers are being added to the DAG, you will get the warning shown in Figure 17.
    Note:
    DHCP-assigned addresses are fully supported for DAG purposes. Actually, the Exchange Product group believes most customers will find it a good idea to use a DHCP-assigned address for their DAGs. Well, personally I like to give the DAG a static IP address, but that's just me.

    Figure 17: Member servers added to the DAG
    This is fine; we will add a static IP address to the DAG right away. The consequence of not having an IP address configured for the DAG is that the cluster core resources cannot be brought online, as shown in Figure 18.

    Figure 18: Cluster core resources offline in Failover Clustering console
    One of the things missing from the GUI is the option of assigning a static IP address to the DAG, so we need to perform this task via the Exchange Management Shell. So let's launch the shell and enter the following command:
    Get-DatabaseAvailabilityGroup | FL
    This shows us that, as expected, no IP address is configured for the DAG.

    Figure 19: No IP address assigned to the DAG

    To assign a static IP address, we need to use the Set-DatabaseAvailabilityGroup cmdlet with the DatabaseAvailabilityGroupIpAddresses parameter:
    Set-DatabaseAvailabilityGroup DAG1 -DatabaseAvailabilityGroupIpAddresses 192.168.2.194

    Figure 20: Static IP address assigned to the DAG
    Now that we have assigned the IP address to the DAG, the cluster core resources can be brought online.

    Figure 21: Cluster core resources online in the Failover Clustering console
    Adding Mailbox Database Copies

    Okay, it is time to add copies of the two mailbox databases, since a DAG otherwise does not really make much sense. To do so, select the Database Management tab under the Mailbox node in the Organization Configuration work center. Here you right-click on each database and select Add Mailbox Database Copy in the context menu (Figure 22).

    Figure 22: Adding database copies to each Mailbox database
    In the Add Mailbox Database Copy wizard, click Browse.

    Figure 23: Add Mailbox Database Copy wizard
    Now select the mailbox server and click OK.

    Figure 24: Selecting the mailbox server that should store a database copy
    Back in the Add Mailbox Database Copy wizard, click Add.

    Figure 25: Adding the Database copy to the other Mailbox server
    When the wizard has completed successfully, click Finish to exit.

    Figure 26: Database copy added successfully
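    The shell equivalent is Add-MailboxDatabaseCopy; a minimal sketch, assuming the passive copy should live on a hypothetical server named EX02:

    # Example only; EX02 is a hypothetical Mailbox server
    Add-MailboxDatabaseCopy -Identity MDB01 -MailboxServer EX02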
    As you can see in the Exchange Management Console (Figure 27), we now have a healthy copy of the active database.

    Figure 27: Healthy copy of active database
    If we log on to the Mailbox server to which the database copy was added, we can also see that the log files have been replicated (Figure 28) and the database has been seeded (Figure 29).

    Figure 28: Log files replicated to the Mailbox server holding the passive database copy

    Figure 29: Database seeded to the other Mailbox server
    Now perform the same steps for the other Mailbox database.
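    To verify copy health from the shell rather than the console, Get-MailboxDatabaseCopyStatus can be used; a sketch with a hypothetical server name:

    # Example only; EX02 is a hypothetical Mailbox server
    Get-MailboxDatabaseCopyStatus -Server EX02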
    We have reached the end of part 3. In the next part we will have a look at how you re-seed databases, how database fail/switch-overs (*-overs) work and much more. See you then!




