-
Port-Channel Problem
Hi everyone,
Where is the mistake in my configuration that makes the port channel show up like this?
1 Po1(SD) - Gi0/2(I) Gi0/3(I) Gi0/4(I)
Another switch is in slightly better shape, though:
-
1 Po1(SD) PAgP Gi0/2(I) Gi0/3(I) Gi0/4(I)
The configuration was:
interface Port-channel1
description ESX-01,NIC Teaming
switchport trunk encapsulation dot1q
switchport mode trunk
interface GigabitEthernet0/2
switchport trunk encapsulation dot1q
switchport trunk allowed vlan all
switchport mode trunk
channel-group 1 mode auto
spanning-tree portfast trunk
The same applies to the other interfaces.
-
Hi, could you post the scenario for me?
-
The scenario is clear:
Three server NICs need to be connected to three switch interfaces, bundled as an EtherChannel.
-
SD means the port channel is down (S = Layer 2, D = down). Is your physical connectivity up?
Are the NICs on the server side enabled and added to the team?
[COLOR="silver"][SIZE=1]- - - Updated - - -[/SIZE][/COLOR]
[QUOTE=reza malem;448401]Hi, could you post the scenario for me?[/QUOTE]
Dear friend, look for the Cisco LAB books; they lay out the scenarios, provide the configurations, and follow best practices.
To get started, work through the CCNA LAB exercises.
-
Thanks for the suggestion, my friend, but I have already implemented EtherChannel in CCNP SWITCH; I just wanted to see how our friend's setup behaves. If you post the scenario, we can all benefit from the experience of the other group members as well. Thanks, everyone.
-
[B]When you enable the port channel on the physical switch, you also have to make the corresponding changes on your ESXi virtual switch for it to work correctly
[/B]
-
[LEFT][LTR]
[B]Purpose[/B]
This article provides information on the concepts, limitations, and some sample configurations of link aggregation, NIC teaming, Link Aggregation Control Protocol (LACP), and EtherChannel connectivity between ESXi/ESX and physical network switches, particularly Cisco and HP.
[B]Resolution[/B]
[B]Note[/B]: There are a number of requirements that need to be considered before implementing any form of link aggregation. For more information on these requirements, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1001938"]Host requirements for link aggregation for ESXi and ESX (1001938)[/URL].
Link aggregation concepts:
[LIST][*][B]EtherChannel[/B]: This is a link aggregation (port trunking) method used to provide fault-tolerance and high-speed links between switches, routers, and servers by grouping two to eight physical Ethernet links to create a logical Ethernet link with additional failover links. For additional information on Cisco EtherChannel, see the [URL="http://www.cisco.com/en/US/tech/tk389/tk213/tsd_technology_support_protocol_home.html"]EtherChannel Introduction[/URL] by Cisco. [*][B]LACP or IEEE 802.3ad[/B]: The Link Aggregation Control Protocol (LACP) is included in IEEE specification as a method to control the bundling of several physical ports together to form a single logical channel. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to the peer (directly connected device that also implements LACP). For additional information on LACP see the [URL="http://www.cisco.com/en/US/docs/ios/12_2sb/feature/guide/gigeth.html"]Link Aggregation Control Protocol whitepaper[/URL] by Cisco.
[B]Note:[/B] LACP is only supported in vSphere 5.1 and 5.5, using vSphere Distributed Switches (VDS) or the Cisco Nexus 1000v. [*][B]EtherChannel vs. 802.3ad[/B]: EtherChannel and the IEEE 802.3ad standard are very similar and accomplish the same goal. There are few differences between the two beyond the fact that EtherChannel is Cisco proprietary while 802.3ad is an open standard. [*]For additional information on EtherChannel implementation, see the [URL="http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml"]Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches[/URL] article from Cisco. [/LIST]
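As an illustration of the LACP negotiation described above, the switch side of the bundle is typically built with channel-group mode active or passive. This is only a hedged sketch (interface numbers and the trunk setup are placeholders, and against ESXi it requires a vSphere Distributed Switch on 5.1/5.5):
[CODE]
interface Port-channel10
 switchport mode trunk
!
interface GigabitEthernet1/0/1
 switchport mode trunk
 ! active = initiate LACP negotiation; passive = respond only
 channel-group 10 mode active
[/CODE]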
EtherChannel supported scenarios:
[LIST][*]One IP to many IP connections. (Host A making two connection sessions to Host B and C) [*]Many IP to many IP connections. (Host A and B multiple connection sessions to Host C, D, etc)
[B]Note[/B]: One IP to one IP connections over multiple NICs are not supported. (Host A with one connection session to Host B uses only one NIC.) [*]Compatible with all ESXi/ESX VLAN configuration modes: VST, EST, and VGT. For more information on these modes, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003806"]VLAN Configuration on Virtual Switch, Physical Switch, and Virtual Machines (1003806)[/URL]. [*]Supported Cisco configuration: EtherChannel Mode ON (enable EtherChannel only) [*]Supported HP configuration: Trunk Mode [*]Supported switch aggregation algorithm: IP-SRC-DST (short for IP-Source-Destination) [*]Supported Virtual Switch NIC Teaming mode: IP HASH. However, see the note below:
[B]Note[/B]: The LACP support in vSphere Distributed Switch 5.1 supports only IP hash load balancing. In vSphere Distributed Switch 5.5, all the load balancing algorithms of LACP are supported:
[LIST][*]Do not use beacon probing with IP HASH load balancing. [*]Do not configure standby or unused uplinks with IP HASH load balancing. [*]vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 supports multiple LAGs. [/LIST]
[*]Lower model Cisco switches may have MAC-SRC-DST set by default, and may require additional configuration. For more information, see the [URL="http://www.cisco.com/en/US/tech/tk389/tk213/technologies_tech_note09186a0080094714.shtml"]Understanding EtherChannel Load Balancing and Redundancy on Catalyst Switches[/URL] article from Cisco. [/LIST]
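The IP-SRC-DST / IP HASH behavior above can be modeled in a few lines: the chosen uplink depends only on the XOR of the two IP addresses, which is why a single one-IP-to-one-IP session can never use more than one NIC. This is a simplified illustration of the idea, not the exact vendor hash:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int) -> int:
    """Simplified src-dst-ip policy: XOR the two IPv4 addresses and
    take the result modulo the number of physical uplinks."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# One IP to one IP: the hash never changes, so the whole session
# is pinned to a single uplink.
print(ip_hash_uplink("10.0.0.10", "10.0.0.20", 3))  # -> 0, every time

# One IP to many IPs: different destinations spread across uplinks.
used = {ip_hash_uplink("10.0.0.10", f"10.0.0.{d}", 3) for d in range(20, 40)}
print(sorted(used))  # -> [0, 1, 2]
```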
This is a Cisco EtherChannel sample configuration:
interface Port-channel1
switchport
switchport access vlan 100
switchport mode access
no ip address
!
interface GigabitEthernet1/1
switchport
switchport access vlan 100
switchport mode access
no ip address
channel-group 1 mode on
!
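Since this thread's scenario uses trunk ports rather than access ports, a trunk-mode variant of the same static bundle might look like this (a sketch; interface and channel-group numbers are placeholders):
[CODE]
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport mode trunk
!
interface GigabitEthernet1/1
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! mode on = static bundle, required for a standard vSwitch (no PAgP/LACP)
 channel-group 1 mode on
!
[/CODE]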
ESX Server and Cisco switch sample topology and configuration:
[IMG]http://kb.vmware.com/Platform/Publishing/images/1004043_Etherchannel_vlan_esx_topology.jpg[/IMG]
Run this command to verify EtherChannel load balancing mode configuration:
Switch# show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-dst-ip
mpls label-ip
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source XOR Destination MAC address
IPv4: Source XOR Destination IP address
IPv6: Source XOR Destination IP address
MPLS: Label or IP
Switch# show etherchannel summary
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
Number of channel-groups in use: 2
Number of aggregators: 2
Group Port-channel Protocol Ports
------+-------------+-----------+--------------------------
1 Po1(SU) - Gi1/15(P) Gi1/16(P)
2 Po2(SU) - Gi1/1(P) Gi1/2(P)
Switch# show etherchannel protocol
Channel-group listing:
-----------------------
Group: 1
----------
Protocol: - (Mode ON)
Group: 2
----------
Protocol: - (Mode ON)
[B]HP switch sample configuration[/B]
This configuration is specific to HP switches:
[LIST][*]HP switches support only two modes of LACP: ACTIVE and PASSIVE.
[B]Note:[/B] LACP is only supported in vSphere 5.1 and 5.5 with vSphere Distributed Switches and on the Cisco Nexus 1000V. [*]Set the HP switch port mode to TRUNK to accomplish static link aggregation with ESXi/ESX. [*]TRUNK Mode of HP switch ports is the only supported aggregation method compatible with ESXi/ESX NIC teaming mode IP hash. [/LIST]
To configure a static port channel on an HP switch using ports 10, 11, 12, and 13, run these commands:
conf
trunk 10-13 Trk1 Trunk
To verify your portchannel, run this command:
ProCurve# show trunk
Load Balancing
Port | Name Type | Group Type
---- + --------- + ----- -----
10 | 100/1000T | Trk1 Trunk
11 | 100/1000T | Trk1 Trunk
12 | 100/1000T | Trk1 Trunk
13 | 100/1000T | Trk1 Trunk
[B]Configuring load balancing within the vSphere/VMware Infrastructure Client[/B]
To configure vSwitch properties for load balancing:
[LIST=1][*]Click the ESXi/ESX host. [*]Click the [B]Configuration[/B] tab. [*]Click the [B]Networking[/B] link. [*]Click [B]Properties[/B]. [*]Click the virtual switch in the [B]Ports[/B] tab and click [B]Edit[/B]. [*]Click the [B]NIC Teaming[/B] tab. [*]From the [B]Load Balancing[/B] dropdown, choose [B]Route based on ip hash[/B]. However, see note below. [*]Verify that there are two or more network adapters listed under [B]Active Adapters[/B]. [/LIST]
[IMG]http://kb.vmware.com/Platform/Publishing/images/1004048_VIC_iphash_config_1.jpg[/IMG]
[B]
Note[/B]: The LACP support in vSphere Distributed Switch 5.1 supports only IP hash load balancing. In vSphere Distributed Switch 5.5, all the load balancing algorithms of LACP are supported:
[LIST][*]You must set NIC teaming to IP HASH in both the vSwitch and the included port group containing the kernel management port. See Additional Information section, [I]For additional NIC teaming with EtherChannel information[/I]. [*]Do not use beacon probing with IP HASH load balancing. [*]Do not configure standby or unused uplinks with IP HASH load balancing. [*]vSphere Distributed Switch 5.1 only supports one EtherChannel per vNetwork Distributed Switch (vDS). However, vSphere Distributed Switch 5.5 supports multiple LAGs. [*]ESX/ESXi running on a blade system do not require IP Hash load balancing if an EtherChannel exists between the blade chassis and upstream switch. This is only required if an EtherChannel exists between the blade and the internal chassis switch, or if the blade is operating in a network pass-through mode with an EtherChannel to the upstream switch. For more information on these various scenarios, please contact your blade hardware vendor. [/LIST]
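The teaming policy from the GUI steps above can also be set from the ESXi 5.x command line. A hedged sketch, assuming the standard vSwitch is named vSwitch0; check esxcli's built-in help for the exact options on your build:
[CODE]
~ # esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
~ # esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
[/CODE]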
[B]Additional Information[/B]
For more information on NIC teaming with EtherChannel information, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1022751"]NIC teaming using EtherChannel leads to intermittent network connectivity in ESXi (1022751)[/URL].
LACP is supported on vSphere ESXi 5.1 and 5.5 on VMware vDistributed Switches only. For more information, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2034277"]Enabling or disabling LACP on an Uplink Port Group using the vSphere Web Client (2034277)[/URL] and the [URL="http://www.vmware.com/files/pdf/techpaper/Whats-New-VMware-vSphere-51-Network-Technical-Whitepaper.pdf"]What's New in VMware vSphere 5.1 - Networking[/URL] white paper.
[B]Removing an EtherChannel configuration from a running ESX/ESXi host[/B]
To remove EtherChannel, there must only be one active network adapter on the vSwitch / dvSwitch. Ensure that the other host NICs in the EtherChannel configuration are disconnected (Link Down). Perform one of these options:
[LIST][*]Disconnect the network cables from the network adapters (ensure that one is left online). [*]Shut down the network port from the physical switch. [*]Disable the vmnic network cards in ESXi. For more information, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2006074"]Forcing a link state up or down for a vmnic interface on ESXi 5.x (2006074)[/URL]. [/LIST]
With only a single network card online, you can then remove the portchannel configuration from the physical network switch and change the network teaming settings on the vSwitch / dvSwitch from IP HASH to portID. For more information about teaming, see [URL="http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1004088"]NIC teaming in ESXi and ESX (1004088)[/URL].
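On the Cisco side, the matching teardown is to remove the channel-group from each member port and then delete the logical interface. A sketch with placeholder interface numbers:
[CODE]
configure terminal
interface range GigabitEthernet1/1 - 2
 no channel-group 1
!
no interface Port-channel1
end
[/CODE]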
[/LTR]
[/LEFT]
Source:
[CODE][LTR]http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004048[/LTR][/CODE]
-
[QUOTE=greatcyrus;448241]Hi everyone,
Where is the mistake in my configuration that makes the port channel show up like this?
1 Po1(SD) - Gi0/2(I) Gi0/3(I) Gi0/4(I)
Another switch is in slightly better shape, though:
-
1 Po1(SD) PAgP Gi0/2(I) Gi0/3(I) Gi0/4(I)
The same applies to the other interfaces.[/QUOTE]
After you enable the NICs on the ESXi side, enter the following commands on the switch interfaces as well:
[LEFT][CODE]
interface GigabitEthernet0/2
switchport trunk encapsulation dot1q
switchport mode trunk
[B]channel-group 1 mode on[/B]
[/CODE]
[/LEFT]
-
Dear friends, I have done this a hundred times before and it has always been fine!
VMware auto-trunks, and NIC teaming is set up too.
The only thing that did not work was [B]channel-group 1 mode on! So I set it to auto
[/B]
-
My friend, when you have a switch on one side and a server NIC on the other, you need to set up LACP.
-
Let me test it; I'll report back.
Thanks
-
[QUOTE=greatcyrus;448688]Let me test it; I'll report back.
Thanks[/QUOTE]
[LEFT][CODE]
[B]Note: LACP is only supported in vSphere 5.1 and 5.5, using vSphere Distributed Switches (VDS) or the Cisco Nexus 1000v.[/B]
[/CODE]
[/LEFT]
Mr. Hosseini, proceed on both the Switch side and the Host side according to the instructions Mr. Alipour posted.
-
show etherchannel protocol
Channel-group listing:
----------------------
Group: 1
----------
Protocol: PAgP
Group: 2
----------
Protocol: PAgP
[COLOR="silver"][SIZE=1]- - - Updated - - -[/SIZE][/COLOR]
show etherchannel sum
Flags: D - down P - bundled in port-channel
I - stand-alone s - suspended
H - Hot-standby (LACP only)
R - Layer3 S - Layer2
U - in use f - failed to allocate aggregator
M - not in use, minimum links not met
u - unsuitable for bundling
w - waiting to be aggregated
d - default port
Number of channel-groups in use: 2
Number of aggregators: 2
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SD) PAgP Gi0/2(I) Gi0/3(I) Gi0/4(I)
2 Po2(SD) PAgP Gi0/6(D) Gi0/7(D) Gi0/8(D)
[COLOR="silver"][SIZE=1]- - - Updated - - -[/SIZE][/COLOR]
show etherchannel load-balance
EtherChannel Load-Balancing Configuration:
src-mac
EtherChannel Load-Balancing Addresses Used Per-Protocol:
Non-IP: Source MAC address
IPv4: Source MAC address
IPv6: Source MAC address
-
[QUOTE=greatcyrus;448691]show etherchannel protocol
Channel-group listing:
----------------------
Group: 1
----------
Protocol: [COLOR=#ff0000][B] PAgP[/B][/COLOR]
Group: 2
----------
Protocol: [COLOR=#ff0000][B] PAgP[/B][/COLOR]
[COLOR=silver][SIZE=1]-[/SIZE][/COLOR]
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SD) [COLOR=#ff0000][B]PAgP[/B][/COLOR] Gi0/2(I) Gi0/3(I) Gi0/4(I)
2 Po2(SD) [COLOR=#ff0000][B]PAgP[/B][/COLOR] Gi0/6(D) Gi0/7(D) Gi0/8(D)
[COLOR=silver][SIZE=1]- -[/SIZE][/COLOR][/QUOTE]
[LEFT][LTR]PAgP?!
[B]
Understanding the Port Aggregation Protocol and Link Aggregation Protocol[/B]
The Port Aggregation Protocol (PAgP) and the Link Aggregation Control Protocol (LACP) facilitate the automatic creation of EtherChannels by exchanging packets between Ethernet interfaces. [B]PAgP is a Cisco-proprietary protocol that can be run only on Cisco switches and on switches licensed by vendors to support PAgP[/B]. LACP is defined in IEEE 802.3ad and allows Cisco switches to manage Ethernet channels with switches that conform to the 802.3ad standard.
[B]Mode ON[/B]: Forces the interface into an EtherChannel without PAgP or LACP. With the on mode, a usable EtherChannel exists only when an interface group in the on mode is connected to another interface group in the on mode.
Reference: [url]http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst3550/software/release/12-1_13_ea1/configuration/guide/3550scg/swethchl.html[/url]
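To summarize the channel-group modes and which peer modes they bundle with: the auto mode used in this thread only responds to PAgP, and a server NIC speaks neither PAgP nor (on a standard vSwitch) LACP, so it never bundles. A quick reference sketch:
[CODE]
channel-group 1 mode on         ! static, no protocol - peer must also be on
channel-group 1 mode active     ! LACP, initiates     - peer: active or passive
channel-group 1 mode passive    ! LACP, responds only - peer must be active
channel-group 1 mode desirable  ! PAgP, initiates     - peer: desirable or auto
channel-group 1 mode auto       ! PAgP, responds only - peer must be desirable
[/CODE]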
[/LTR]
[/LEFT]