Questions for Interview


SDH Logical Questions:

1. Why is a multiframe used in SDH? Ans: A multiframe is a combination of 4 frames used to provide a meaningful POH.

Each frame carries 1 byte of POH, and the combination of these 4 frames is the multiframe; the POH bytes are V5, J2, N2 and K4 respectively. The use of each byte can be found in any SDH document. The multiframe reduces the overhead ratio for lower-order signals. In addition, the multiframe is used for the convenience of rate adaptation: if an E1 (2 Mb/s) signal runs at its standard rate of 2.048 Mb/s, each C-12 container accommodates a 256-bit (32-byte) payload (2.048 Mb/s divided by 8000 frames per second). However, when the rate of the E1 signal is not exactly standard, the average number of bits accommodated in each C-12 is not an integer. In this case, a multiframe of four C-12 frames, with its justification opportunities, is used to accommodate the signal. A quick sanity check of this arithmetic is sketched below.
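A quick sanity check of the arithmetic above, as a Python sketch (the constant names are mine, for illustration only):

```python
# Rough arithmetic behind the C-12 figures above (illustrative only).

E1_RATE_BPS = 2_048_000      # nominal E1 rate
FRAMES_PER_SECOND = 8_000    # SDH frame rate (125 us per frame)

bits_per_frame = E1_RATE_BPS / FRAMES_PER_SECOND   # 256 bits
bytes_per_frame = bits_per_frame / 8               # 32 bytes
print(bits_per_frame, bytes_per_frame)             # 256.0 32.0

# If the E1 drifts off its nominal rate, the per-frame byte count is no
# longer an integer, e.g. at +50 ppm:
off_nominal = E1_RATE_BPS * (1 + 50e-6)
print(off_nominal / FRAMES_PER_SECOND / 8)         # ~32.0016 bytes per frame
# A 500 us (4-frame) C-12 multiframe plus justification bits absorbs this.
```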

2. ALS (Automatic Laser Shutdown). Ans: Automatic Laser Shutdown is a mechanism in which the TX laser is asked to shut down when no power is received at the optical sink. Essentially this is the desired behaviour when you are running redundant networks and would like to stop the TX direction as well when there is no power in the RX. Picture it in the following way:
1. You have a pair of fibres between two nodes.
2. From Node A to Node B there is a fibre cut, but from Node B to Node A the fibre is OK.
3. In such a condition you will receive a LOS at Node B.
If ALS is not enabled, then:
1. You have LOS at Node B.
2. This generates MS-AIS in the multiplex section.
3. The MS-AIS leads to an MS-RDI at Node A.
4. The path-level AIS and the corresponding RDIs also follow.
Now if ALS is enabled at Node B, then:
1. The moment Node B receives a LOS it turns off the laser of Node B.
2. This leads to a total link shutdown between Node B and Node A.
Why is ALS needed?
1. Understand that if you are transferring data for which an acknowledgement also needs to be returned, then shutting one path should shut the other path as well.

2. That is to say, when the acknowledgement path is cut, the data-transfer path should also stop.
3. ALS is also used to achieve a total link shutdown for signalling links.
4. It is also necessary for switching in both directions when you have SNCP (this is a very good example of needing ALS). Suppose you have SNCP-type protection. If one fibre of a pair is down, the RX would switch only on one side and not on the other, because the SNCP switch occurs only at the sink. For the path to switch completely onto different media, the SNCP switch should occur at both endpoints, and this is why ALS is enabled: when one fibre of the pair breaks, the other TX also shuts the link, so you get a switch at both ends.
The major question is how the mux recovers from an ALS scenario:
1. The TX sends a pulse of signal every 90 seconds, for 3 seconds.
2. If the LOS is cleared on one side, then the TX of that side is also started.
3. However, if the LOS is not cleared, the TX of that side does not fire either. This situation remains until the LOS is cleared on both sides and the link is completely up. (A minimal sketch of this restart behaviour follows below.)
Disadvantage of ALS: yes, there is one, but we usually ignore it. Look at this scenario: there is multicast traffic going from one source to many sinks through many legs. If ALS is enabled and there is a fibre loss in the reverse direction, then the forward direction is also affected, which is not what you want. Suppose you send multicast (unidirectional) traffic from A to B. If ALS is enabled and the loss is only from B to A, ALS shuts the laser from A to B as well; this kills your unidirectional multicast, which would not have been affected had ALS been disabled. So this is one disadvantage of enabling ALS. Also, in the case of unidirectional MSP 1+1 you always keep ALS disabled, because you want the switching to take place in one direction only.
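A minimal sketch of the ALS restart behaviour described above, assuming the 90 s / 3 s figures quoted in the answer; `rx_has_power` and `set_tx_laser` are hypothetical callbacks standing in for the hardware, and real equipment follows ITU-T G.664 with vendor-configurable timers:

```python
import time

PULSE_PERIOD_S = 90   # interval between restart pulses (value quoted above)
PULSE_WIDTH_S = 3     # duration of each restart pulse

def als_loop(rx_has_power, set_tx_laser):
    """Very simplified ALS controller: keep the laser off while the receive
    side sees no power, and probe with short pulses so the far end can
    recover once both directions are healthy again."""
    while True:
        if rx_has_power():
            set_tx_laser(True)          # normal operation
            time.sleep(1)
        else:
            set_tx_laser(False)         # shut down on LOS
            time.sleep(PULSE_PERIOD_S)
            set_tx_laser(True)          # periodic restart pulse
            time.sleep(PULSE_WIDTH_S)
            if not rx_has_power():
                set_tx_laser(False)     # still LOS: stay dark
```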

3. What is the difference between 4 x VC-4 and VC-4-4c? Ans: The basic difference between the two is that of virtual concatenation versus contiguous concatenation. VC-4-4c is an example of contiguous concatenation. In this scheme the following happens: 4 VC-4s are combined into one signal, but these VC-4s need to be in sequence, e.g. 1,2,3,4 or 5,6,7,8, and they also need to share the same multiplex section, i.e. the same physical port. Remember this is contiguous concatenation, so the physical port resources cannot be different, and the alarm overheads are carried by the first VC-4. 4 x VC-4 is an example of virtual concatenation. In this case 4 VC-4s are grouped, but they can be individually cross-connected to

different VC-4 pipes belonging to different physical paths. So eventually you can take a 600 Mb/s signal and bifurcate it into 4 different VC-4s over 4 diverse routes. This technology is used most in Ethernet over SDH. To enhance this you have LCAS, which is described very well in one of my earlier posts. The Ethernet rides on these concatenation schemes: Ethernet (which is asynchronous in nature) needs to be encapsulated in GFP first in order to run over them. However, understand that this VCG can carry traffic of any type; it may be Ethernet, FC, FICON or ATM, whatever you want. You could say there is a kind of bandwidth advantage in VC-4-4c, but it is negligible considering the following facts: 1. You always have to take VC members in sequence. 2. You cannot have diverse routing options. 3. There is less flexibility if you don't have members in sequence all through the path. This is the reason contiguous concatenation is giving way to virtual concatenation. (A small sketch of the member-placement rules follows below.)
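A small Python sketch of the member-placement rules, using made-up data structures rather than any vendor's provisioning API:

```python
def valid_contiguous(members):
    """VC-4-4c style: members must be consecutive AU-4 numbers on the
    same physical port (same multiplex section)."""
    ports = {m["port"] for m in members}
    slots = sorted(m["au4"] for m in members)
    consecutive = slots == list(range(slots[0], slots[0] + len(slots)))
    return len(ports) == 1 and consecutive

def valid_virtual(members):
    """4 x VC-4 (virtual concatenation) style: members may sit on different
    ports / diverse routes; sequence is restored at the sink via the H4
    multiframe, so there is no placement restriction to check here."""
    return len(members) > 0

cc = [{"port": 1, "au4": n} for n in (1, 2, 3, 4)]   # same port, slots 1..4
vc = [{"port": p, "au4": 1} for p in (1, 2, 3, 4)]   # four diverse routes
print(valid_contiguous(cc), valid_contiguous(vc))    # True False
print(valid_virtual(vc))                             # True
```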

4. MSP / MS-SPRING / SNCP. Ans: 1. MSP 1+1: here a linear link is protected by another, parallel link. The granularities are STM-1/4/16/64. There is a link that is dedicatedly protected by another link, i.e. there is a dedicated protection section for every working link. This protection scheme is triggered by the K1 and K2 bytes. 2. MS-SPRING: this is a protection scheme in which the capacity is better optimised for high-density cores. Say there is an STM-16 ring. To have dedicated protection in this ring using MSP 1+1 over all the spans you would need much more hardware. In MS-SPRING the STM-16 ring is divided into an 8+8 combination: VC-4s 1-8 in each span carry working traffic and VC-4s 9-16 in each span serve as shared protection. Remember that in MS-SPRING the protection is shared, so this allows you to use up to 16 x (N/2) protected VC-4s, where N is the number of nodes (see the capacity sketch below). 3. SNCP: this is dedicated protection for each path. Just as MSP works at the MS level, SNCP works at the path level. Each end-to-end circuit has its own dedicated protection path. The bytes involved are K3 at higher order and K4 at lower order.
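The MS-SPRING capacity arithmetic quoted above, as a tiny sketch (it assumes traffic stays between adjacent nodes so every span's working half can be reused):

```python
def ms_spring_protected_vc4(stm_au4_count, nodes):
    """Protected VC-4 capacity of a 2-fibre MS-SPRING as described above:
    half the AU-4s on each span carry working traffic, the other half are
    shared protection, and each of the N spans can be loaded independently
    when traffic stays between adjacent nodes."""
    working_per_span = stm_au4_count // 2
    return working_per_span * nodes          # == stm_au4_count * (nodes / 2)

print(ms_spring_protected_vc4(16, 6))   # STM-16 ring, 6 nodes -> 48 protected VC-4s
```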

SNCP is a path-level protection that was designed to provide a protection scheme at the collector level for path-level eventualities (alarms). For a long time it was a great consolation to users that there was now a protection scheme that also responds to path-level alarms, i.e. path-level AIS: TU-AIS and AU-AIS. This traditional design of SNCP is termed SNCP-I (inherently monitored SNCP). In this case the SNCP only triggers on a path failure. As time progressed we realised that at the path level there can be problems other than AIS, among them quality-level problems like DEG and EXC, to which SNCP-I does not respond. This led to a change in the design of SNCP and the introduction of SNCP-N, which stands for non-intrusively monitored SNCP. This scheme also responds to qualitative alarms like EXC and DEG. Now the question arises: what if one path has EXC and the other has DEG? In such a case one should remember the priorities of the SNCP-N triggers: 1. AIS 2. EXC 3. DEG. So AIS gets the top priority, and if both paths have a problem, the path with the least serious problem carries the traffic (a small selector sketch follows below). Shortcomings of SNCP-N and the solution: most of the time SNCP-N is not preferred on access links, because there is often some degree of qualitative degradation on both such links, which causes the traffic to toggle between main and protection. To overcome this you should use a hold-off timer. Just remember one rule of thumb: SNCP is always triggered at the drop point. Say I have two paths, A-B-C-D-E-F-G (working) and A-J-K-I-L-G (protection). If there is a failure on C-D, then D has a LOS; this generates a TU-level AIS which is sent along the path to G (the drop point), where the decision point switches the traffic.
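A minimal sketch of the SNCP-N selector priority described above; the severity encoding and function names are my own, and hold-off timer handling is left out:

```python
# Severity order used by the SNCP-N selector described above:
# AIS (worst) > EXC > DEG > clean.
SEVERITY = {"AIS": 3, "EXC": 2, "DEG": 1, None: 0}

def select_path(main_defect, protection_defect):
    """Return which path the selector at the drop point should use:
    the path with the *least* serious defect wins; on a tie, stay on main."""
    if SEVERITY[protection_defect] < SEVERITY[main_defect]:
        return "protection"
    return "main"

print(select_path("EXC", "DEG"))   # protection (DEG is less serious than EXC)
print(select_path(None, "DEG"))    # main
print(select_path("AIS", "AIS"))   # main (tie -> no switch)
```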

Just remember one thing: a path-level AIS generated at any point of the path will propagate to the end drop point, which will switch the traffic. 5. Difference between holdover and free-running mode. Ans: Hold-over: the equipment clock samples the in-use timing into its memory, and when the primary source of synchronisation is lost, it uses the stored clock data for synchronisation. Free-running: the equipment runs off its own internal clock. Generally, the NE runs in free-running mode when it has just been commissioned and is in use for the first time. As you probably know, every NE has a priority table for line-clock latching. The NE latches to the clock of the highest quality, and if there is a tie it picks the line with the highest priority. As lines fail, the next line is selected on the basis of quality and priority (a small selection sketch follows below). If all the lines towards that NE have failed and there is no way the NE can latch to a line clock, the NE goes into holdover mode. In holdover mode the NE maintains the quality of the last latched clock for the next 24 hours; holdover is essentially holding that quality level for a finite amount of time while no reference is available. If the 24 hours expire and there is still no line clock available, the NE is free to synchronise itself to its internal oscillator. This is called free-running mode. In this case the clock oscillator of the NE has no reference, so there is no feedback. Free-running is an undesirable mode of operation for synchronisation and is always avoided. 6. What is the difference between 4 x VC-4 and VC-4-4c? Ans: This question is asked often in interviews and many people just go blank; it is a very good question. The answer is the same as for question 3 above: contiguous versus virtual concatenation.
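Referring back to the clock-latching rule in question 5 above (highest quality first, then highest priority), a minimal selection sketch; the dictionary keys and the "higher number = better" quality encoding are assumptions for illustration, not the real SSM coding:

```python
def select_sync_reference(candidates):
    """Pick the timing reference as described above: highest quality level
    first, then highest priority on a tie. Returns None when every line
    reference has failed (the NE then enters holdover and, once the
    holdover period expires, free-run)."""
    usable = [c for c in candidates if not c["failed"]]
    if not usable:
        return None   # -> holdover, then free-running
    return max(usable, key=lambda c: (c["quality"], c["priority"]))["name"]

refs = [
    {"name": "line-west", "failed": False, "quality": 2, "priority": 1},
    {"name": "line-east", "failed": False, "quality": 2, "priority": 2},
    {"name": "ext-2MHz",  "failed": True,  "quality": 3, "priority": 3},
]
print(select_sync_reference(refs))   # line-east (same quality, higher priority)
```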


7. What is a point-to-multipoint (P2MP) configuration in SDH, and how do you test it? Ans: The first thing you should not do is apply hard loops or soft loops: Ethernet circuits are prone to MAC moves (MAC duplication) when a hard or soft loop is placed in the circuit. The working of a P2MP service is based entirely on MAC learning, even if the VLAN is the same everywhere. Remember, a P2MP service is created under the following conditions: 1. There is a hub. 2. There are many spokes. 3. You want unicast (interactive) communication between the hub and the spokes. 4. You want to treat each and every spoke as a separate broadcast domain. 5. However, you have the same VLAN for all the spokes and the hub. In such a case the VLAN functionality is ruled out, because you may have the same VLAN on different spokes.

So what should you do? The best way is to test via multiple streams distinguished by MAC address. Remember, the circuit may be P2MP, but once the MAC addresses are learnt the communication is always P2P from hub to spoke. So let us consider a hub-and-spoke setup with one hub and 2 spokes: 1. MAC address of hub = 00 00 00 00 00 0A. 2. MAC address of spoke-1 = 00 00 00 00 00 0B. 3. MAC address of spoke-2 = 00 00 00 00 00 0C. You connect one analyser to the hub and another to spoke-1. In the hub analyser you set: source address = 00 00 00 00 00 0A, destination address = 00 00 00 00 00 0B. In the spoke-1 analyser you set: source address = 00 00 00 00 00 0B, destination address = 00 00 00 00 00 0A. Now first start the stream from the hub. 1. You will see that the stream actually reaches both spokes (this is because the destination address 00 00 00 00 00 0B is not yet known or learnt by any switch). Now start the stream from spoke-1. At this instant you will see that any packet from spoke-1 reaches only the hub, and the stream that was meant for spoke-1 from the hub now goes only to spoke-1. This is because, thanks to the reverse stream, the MAC address has been learnt and the traffic is now unicast. What happens in a real scenario? In real scenarios you have an L3 network of routers that are connected by means of a metro Ethernet network.

The metro Ethernet network consists of such P2MP services. Just as we did MAC learning from the different analyser streams, the same thing happens in the real scenario. Keep in mind that for any L3 device to start sending traffic it has to do an ARP (Address Resolution Protocol) lookup. In the analyser you set the destination address manually, but in the case of a real router the destination MAC address placed in the MAC header of the frame leaving the router is decided by ARP. Now, as the ARP happens in the L3 network overlay, the underlying metro Ethernet network has the MAC addresses resolved as a side effect. This is the reason why, in spite of having the same VLANs, you will not have any cross-talk and the collision domains are broken. (A toy model of this learn-then-unicast behaviour is sketched below.)
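A toy model of the flood-then-learn behaviour described above (class and port names are made up; this is not any switch's actual API):

```python
class P2MPService:
    """Toy model of the MAC learning described above: a hub and several
    spokes share one VLAN; unknown destinations are flooded, known ones
    are forwarded point-to-point."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC -> port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn the source address
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}            # unicast
        return self.ports - {in_port}                   # flood

svc = P2MPService(["hub", "spoke1", "spoke2"])
# Hub sends to spoke-1 before spoke-1 has ever spoken: flooded to both spokes.
print(svc.forward("hub", "00:00:00:00:00:0A", "00:00:00:00:00:0B"))
# Spoke-1 replies: the hub's MAC is already known, so this goes only to the hub...
print(svc.forward("spoke1", "00:00:00:00:00:0B", "00:00:00:00:00:0A"))
# ...and from now on hub -> spoke-1 is unicast; spoke-2 no longer sees it.
print(svc.forward("hub", "00:00:00:00:00:0A", "00:00:00:00:00:0B"))
```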

2. What is NUT configuration? The concept of NUT (Non-pre-emptible Unprotected Traffic) comes from MS-SPRING. As explained above, in MS-SPRING AU-4s 1-8 carry the working traffic and the shared protection is carried by AU-4s 9-16. However, if you want some of the VC-4s in this ring to be free of the MS-SPRING scheme, you configure them as NUT. So if VC-4 number 5 is configured as NUT, then VC-4 no. 5 and VC-4 no. 13 do not participate in MS-SPRING. It is a bit like RAC on the railways: when you board a train with RAC, two people share one side-lower berth, and that is what happens when a pair of VC-4s is configured in MS-SPRING. The moment there is a cancellation (in our case, moving an MS-SPRING-protected member to NUT), two seats are confirmed: two VC-4s become free for either unprotected configuration or HO-SNCP configuration. 3. Reason for LOF: the term LOF means Loss Of Frame. This happens when the A1 and A2 framing is not as expected. An STM-4 port expects 12 A1 (F6 hex) and 12 A2 (28 hex) framing bytes, but an STM-1 signal connected to it can only supply 3 A1 and 3 A2 bytes (see the framing sketch below).
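A small sketch of the A1/A2 framing expectation behind that RS-LOF, assuming the standard values A1 = F6 hex and A2 = 28 hex and 3 x N framing bytes per STM-N:

```python
A1, A2 = 0xF6, 0x28   # framing byte values quoted above

def expected_framing(stm_n):
    """An STM-N regenerator section expects 3*N A1 bytes followed by
    3*N A2 bytes at the start of row 1 of the section overhead."""
    return bytes([A1] * (3 * stm_n) + [A2] * (3 * stm_n))

def frame_alignment_ok(received_row1_prefix, stm_n):
    """Simplified check: the pattern is bad if it does not match what this
    STM level expects (real equipment only declares RS-LOF after the
    out-of-frame condition persists, typically for 3 ms)."""
    return received_row1_prefix == expected_framing(stm_n)

stm1_pattern = expected_framing(1)           # 3 x A1 + 3 x A2
print(frame_alignment_ok(stm1_pattern, 4))   # False -> the STM-4 port raises RS-LOF
```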

This is the reason you have an RS-LOF on the STM-4 side. The consequent alarms on the STM-4 mux are as follows: 1. MS-AIS. 2. HP-AIS (where a higher-order cross-connect is present). 3. LP-AIS (where a higher-order termination and a lower-order cross-connect are present). On the STM-1 end: 1. MS-RDI (for the MS-AIS). 2. HP-RDI (for the HP-AIS). 3. LP-RDI (for the LP-AIS). 4. Unequipped and pointer alarms: the unequipped condition is signalled via the signal label in the path overhead and comes in two flavours, HP-UNEQ and LP-UNEQ. LOP stands for Loss Of Pointer; this happens when no valid pointer can be identified, and it comes in two flavours, AU-LOP and TU-LOP. Make a note of the following: any pointer-related alarm relates to the AU or TU, whereas alarms like UNEQ, PLM and SLM, which are path related, carry a prefix of HO/HP or LO/LP.
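A simplified mapping of that consequent-alarm chain, assuming higher-order and lower-order path terminations are present as in the answer above; the table and function are illustrative only:

```python
# Toy mapping of the consequent actions listed above: what a node inserts
# downstream and what the far end reports back upstream for each defect.
CONSEQUENT_ACTIONS = {
    "RS-LOF": {"downstream": "MS-AIS", "upstream": None},
    "MS-AIS": {"downstream": "AU-AIS", "upstream": "MS-RDI"},
    "AU-AIS": {"downstream": "TU-AIS", "upstream": "HP-RDI"},   # if the HO path is terminated
    "TU-AIS": {"downstream": None,     "upstream": "LP-RDI"},   # if the LO path is terminated
}

def trace(defect):
    """Follow the downstream chain and collect the RDIs sent back upstream."""
    chain, rdis = [defect], []
    while CONSEQUENT_ACTIONS.get(chain[-1], {}).get("downstream"):
        nxt = CONSEQUENT_ACTIONS[chain[-1]]["downstream"]
        chain.append(nxt)
        rdi = CONSEQUENT_ACTIONS[nxt]["upstream"]
        if rdi:
            rdis.append(rdi)
    return chain, rdis

print(trace("RS-LOF"))
# (['RS-LOF', 'MS-AIS', 'AU-AIS', 'TU-AIS'], ['MS-RDI', 'HP-RDI', 'LP-RDI'])
```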

**Can somebody explain the function of N1 and N2 bytes??

The N1 / N2 bytes are used for Tandem Connection Monitoring (TCM). Their purpose is to let each operator see whether errors actually originated inside its own section of the complete network or ring, rather than merely where they were detected.

The N1 byte is used for tandem connection monitoring in a big network built from different vendors' equipment. It is used for checking errors attributable to a particular vendor's network or network section. Follow the diagram: 1-----2-----3-----4-----5-----6 is one linear network. Suppose nodes 1, 2, 5 and 6 belong to vendor A and nodes 3 and 4 belong to vendor B. Now errors occur in this network, and it is not clear where they are generated. Vendor A needs to prove that its network is not the source of the problem. So what needs to be checked? At node 1 (the source of the tandem connection) the B3 result can be recorded into the N1 byte and both are transmitted; at node 2 (the sink of the tandem connection) the B3 and N1 information are compared. If B3 agrees with N1, then A can say that the 1---2 section has no problem, because the N1 value did not change while the B3 value changes according to path errors; if there is a difference, then the 1---2 section is clearly introducing errors. In the same way, A can also check the far side by making node 5 the source and node 6 the sink.
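A toy version of the comparison described above. Note that the real N1 TCM protocol in G.707 carries an incoming error count (IEC) and other fields rather than a raw parity copy; this sketch only mirrors the simplified explanation given here:

```python
def bip8(data):
    """Bit-interleaved parity over a block, as used for B3 (XOR of all bytes)."""
    parity = 0
    for byte in data:
        parity ^= byte
    return parity

# Source node of the tandem connection: record the parity as seen on entry.
payload_at_entry = bytes(range(64))
n1_record = bip8(payload_at_entry)            # simplified stand-in for the N1 information

# Sink node of the tandem connection: recompute over what actually arrived.
payload_at_exit = bytearray(payload_at_entry)
payload_at_exit[10] ^= 0x01                   # simulate one bit error inside the TC
errors_inside_tc = bip8(payload_at_exit) != n1_record
print(errors_inside_tc)   # True -> the errors were introduced inside this operator's section
```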

**What is the difference between jitter and wander?

Jitter and wander are "short-term variations of the significant instants of a digital signal from their ideal positions in time". A significant instant is any convenient, easily identifiable point on the signal, such as a rising or falling edge. If the frequency of this phase variation is 10 Hz or above, it is known as jitter; if it is below 10 Hz, it is known as wander.

Wander generally arises from slow phase variations caused by sporadic pointer adjustments, and because of the low-pass characteristics of network elements it gets superimposed at each level; the effect partly cancels out thanks to the synchronous nature of the network, but it can also be amplified by superimposition. Wander of more than 18 us can cause slips. Jitter is a high-frequency variation which occurs when PDH signals are multiplexed into or demultiplexed from SDH signals. To smooth out these variations, buffers are used at the receiver and transmitter ends. If jitter is not within acceptable limits, the sampling circuitry can go awry and synchronisation is also affected. There are mapping jitter, intrinsic jitter, pointer jitter and many more. Both tests are carried out together: the same phase variation is passed through measurement filters, with the low-frequency band measured as wander and the high-frequency band measured as jitter.

In short: if the phase variation is above 10 Hz it is called jitter; below 10 Hz it is called wander. (A one-line classification sketch follows below.)
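The 10 Hz boundary as a one-line classification sketch:

```python
JITTER_WANDER_BOUNDARY_HZ = 10.0   # boundary quoted above

def classify_phase_variation(freq_hz):
    """Phase variation at or above 10 Hz is jitter; below 10 Hz it is wander."""
    return "jitter" if freq_hz >= JITTER_WANDER_BOUNDARY_HZ else "wander"

print(classify_phase_variation(0.1))    # wander (e.g. slow pointer-driven drift)
print(classify_phase_variation(500.0))  # jitter (e.g. mapping/demapping jitter)
```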

**Can anyone help me understand the difference between a blocking and a non-blocking cross-connect? Blocking: suppose you have equipment with a 60G cross-connect fabric; if the traffic you want to connect exceeds what the fabric can actually switch (because of internal restrictions, or simply because you now need 80G), the extra connections cannot be set up, i.e. they are blocked. Non-blocking: a system with a 60G fabric that can genuinely cross-connect the full 60G is non-blocking.

You can also think of this as restricted versus unrestricted cross-connect capacity. If you take equipment with a 60G cross-connect capacity in which you can cross-connect at VC-4, VC-3 and VC-12 level right up to the complete utilisation of the 60G, that is a non-blocking cross-connect. In a blocking cross-connect system, on a 60G fabric you cannot use the entire 60G freely across VC-4, VC-3 and VC-12 cross-connections; it may be, say, 20G for VC-4 cross-connects and 40G for VC-3 and VC-12 cross-connects. For example, if you know the ECI XDMs: there you can cross-connect the entire capacity at VC-12/VC-3/VC-4 level, whereas on some NEC equipment, even though the capacity is 80G, you can drop VC-3/VC-12 at a maximum of 30G and the rest is reserved for AU-4. (A toy capacity check along these lines is sketched below.)
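A toy capacity check contrasting the two fabric types; the per-level caps mirror the hypothetical 20G / 40G split in the answer above and are not any real product's figures:

```python
def can_cross_connect(requested_gbps_by_level, per_level_limit_gbps, total_gbps):
    """Toy check contrasting the two fabrics described above. A non-blocking
    fabric only cares about the total; a blocking fabric also imposes
    per-level caps on where that capacity can be used."""
    total_ok = sum(requested_gbps_by_level.values()) <= total_gbps
    per_level_ok = all(requested_gbps_by_level.get(lvl, 0) <= cap
                       for lvl, cap in per_level_limit_gbps.items())
    return total_ok and per_level_ok

request = {"VC-4": 25, "VC-12/VC-3": 30}
print(can_cross_connect(request, {}, 60))                              # True: non-blocking fabric
print(can_cross_connect(request, {"VC-4": 20, "VC-12/VC-3": 40}, 60))  # False: blocked at VC-4 level
```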

*** If we have 4 nodes A, B, C, D, all protected by MSP, can we also configure SNCP protection between A and B? If not, please tell me why; if yes, please explain as well.

MSP protects the whole line (say an STM-16), whereas SNCP protects a tributary, say a VC-4. Now, if a port is protected, the tribs inside it are in a sense protected as well, or in other words the traffic through the port is protected. The problem is that if something goes wrong with a trib (say an AU-AIS defect or B3 errors are inserted), MSP will not notice the fault: the line is in good condition while a trib inside it is in bad condition. If your trib is also protected (SNCP), you have protection at the trib level as well. I hope this gives you at least a basic idea.

It also depends on which vendor's product you are using: some vendors will not support SNCP if MSP is provisioned on that port, but others, like Marconi and NEC, can configure SNCP on top of MSP links.

***Can anybody explain Oscillation Guard Time in SDH protection?

I have never come across that exact terminology as a standard one. There is, however, a wait-to-restore time (WTR) used in protection switching. It imposes a selected delay before switching back to the working path once the working path has been restored. The WTR bridge request is used to prevent frequent oscillation between the protection channels and the working channels. Consider a fibre break in a link, with traffic switched to the protection path. Once the team reaches the site for splicing, the fibre momentarily makes and breaks multiple times, which would interrupt the link repeatedly, each interruption within the 50 ms switching limit. Or consider the case where one transceiver in a link is receiving power right at its threshold and the level is frequently fluctuating on either side of it. The intent is to minimise oscillations, since hits are incurred during every switch. (A simplified WTR sketch follows below.)
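A simplified wait-to-restore sketch, assuming a hypothetical `work_path_ok` status callback and `switch_to` action; the 300 s default is a commonly used WTR value, not a mandated one:

```python
import time

def revert_to_working(work_path_ok, switch_to, wtr_seconds=300):
    """Simplified wait-to-restore behaviour described above: after the
    working path recovers, keep running on protection for a full WTR
    period and only revert if the working path stayed clean throughout."""
    deadline = time.monotonic() + wtr_seconds
    while time.monotonic() < deadline:
        if not work_path_ok():
            return False        # working path flapped again: WTR restarts later
        time.sleep(1)
    switch_to("working")        # stable for the whole WTR window: revert
    return True
```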

I have seen this in Nortel products

**Can anyone send a link to a detailed description of EoS alarms?

LOM: Loss of VCAT multiframe alignment
SQM: Sequence number mismatch
DDE / MND: Differential delay exceeded / member not de-skewable
PersCRC: Persistent CRC errors
LCR: Loss of capacity, receive
LCT: Loss of capacity, transmit
UnexMST: Unexpected member status
SQNC: Inconsistent sequence numbers
LMM: LCAS mode mismatch
LLC: Loss of LCAS capability
LFD: Loss of frame delineation
CSF: Client signal failure
UPM: User payload mismatch
Extended header mismatch
PLCT: Partial loss of capacity, transmit
PLCR: Partial loss of capacity, receive
TLCT: Total loss of capacity, transmit
TLCR: Total loss of capacity, receive
SD: Signal degrade, receive
EER: Excessive error ratio, receive
Link down
AN Fail: Auto-negotiation failed
Link integrity

**Can you explain FOPR and GIDM?

FOPR: LCAS failure of protocol (receive). GIDM: group ID mismatch.

***Can anybody please tell me the reason behind keeping the protection switching time within 50 ms?

**WDM, CWDM and DWDM classification by channel spacing: when the channel spacing is > 200 GHz it is called CWDM; when the channel spacing is > 100 GHz it is called WDM; when the channel spacing is < 100 GHz it is called DWDM. DWDM band usage: C band (1530 nm - 1565 nm, a span of 35 nm) and L band (1565 nm - 1610 nm, a span of 45 nm). WDM: mixing 2 channels per fibre; this type of multiplexing can be used to grow the network where traffic is low, and with WDM you can increase the distance between the central office and the subscriber, which is difficult with copper cable. CWDM: mixing 4 or 8 channels per fibre. DWDM: mixing a larger number of channels per fibre, e.g. 16, 32 or 64; DWDM is used for long-haul and ultra-long-haul links that connect metro networks.
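The channel-spacing classification exactly as stated in this answer, as a tiny sketch (other references draw the CWDM/WDM/DWDM boundaries somewhat differently):

```python
def classify_wdm(channel_spacing_ghz):
    """Classification using the thresholds quoted in the answer above."""
    if channel_spacing_ghz > 200:
        return "CWDM"
    if channel_spacing_ghz > 100:
        return "WDM"
    return "DWDM"

print(classify_wdm(2500))   # CWDM (roughly a 20 nm grid at 1550 nm)
print(classify_wdm(50))     # DWDM (roughly a 0.4 nm grid in the C band)
```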

** What is the difference between SNCP, MS-SPRING, UPSR and BLSR? Do all of these work on 2 fibres or 4 fibres?

SNCP and UPSR are the same thing; one is the SDH term and the other is the SONET term. It is a trib-level (path-level) protection scheme. MS-SPRING and BLSR are likewise the same thing, one being the SDH term and the other the SONET term; this scheme comes in two forms of the protocol, 2-fibre (2F) and 4-fibre (4F).

***Hi, can anybody tell me how DWDM can be protected?


There is no inherent protection mechanism defined in the standards for DWDM. In any case, since the SDH traffic being carried already has its own protection schemes, protection at the DWDM layer is usually not required and would prove very costly. If protection is still needed, one can have two lambdas terminating between the same pair of nodes over diverse routes.

** Can you please tell me what the difference is between AUG-3 and TUG-3?

There is nothing called an AUG-3; there is, however, a TUG-3. What are they? ITU-T G.707 explains this clearly.

An Administrative Unit is the information structure which provides adaptation between the higher-order path layer and the multiplex section layer. It consists of an information payload (the higher-order Virtual Container) and an Administrative Unit pointer which indicates the offset of the payload frame start relative to the multiplex section frame start. Two Administrative Units are defined: the AU-4 consists of a VC-4 plus an Administrative Unit pointer which indicates the phase alignment of the VC-4 with respect to the STM-N frame, and the AU-3 consists of a VC-3 plus an Administrative Unit pointer which indicates the phase alignment of the VC-3 with respect to the STM-N frame. In each case the Administrative Unit pointer location is fixed with respect to the STM-N frame. One or more Administrative Units occupying fixed, defined positions in an STM payload are termed an Administrative Unit Group (AUG).

Further, a Tributary Unit is an information structure which provides adaptation between the lower-order path layer and the higher-order path layer. It consists of an information payload (the lower-order Virtual Container) and a Tributary Unit pointer which indicates the offset of the payload frame start relative to the higher-order Virtual Container frame start. The TU-n (n = 1, 2, 3) consists of a VC-n together with a Tributary Unit pointer. One or more Tributary Units, occupying fixed, defined positions in a higher-order VC-n payload, are termed a Tributary Unit Group (TUG). TUGs are defined in such a way that mixed-capacity payloads made up of different-size Tributary Units can be constructed to increase the flexibility of the transport network. A TUG-2 consists of a homogeneous assembly of identical TU-1s or a TU-2. A TUG-3 consists of a homogeneous assembly of TUG-2s or a TU-3.

** What is the difference between a terminal multiplexer and an ADM? The function of a terminal multiplexer is to drop the entire signal: because the SNR has degraded after some distance, all the traffic is terminated and dropped at the terminal point.

In the case of an ADM, by contrast, we add and drop only specific channels as per customer requirements. **Why do we use these specific bands?

We use the C and L bands because in these bands PMD and CD are comparatively low, they suit the fibre's properties, and they have very low optical loss.

**What is the significance of the H4 byte in a VCAT and LCAS context? The H4 byte is used for multiframe generation (it carries the VCAT multiframe indicator and sequence numbers).

**Tell me about muxes and OADMs. Mux: the function of a mux is to multiplex many transponder signals for transmission to another location. OADM: Optical Add & Drop Multiplexer. OADMs are used where, as per customer requirements, some traffic is dropped and some traffic is added. I am currently using an NEC OADM on which we can add and drop up to 6 transponders.

**What is the difference between a muxponder and a multiplexer?

**Which band does DWDM use? DWDM is used in two bands: 1. C band, range 1530.31 to 1562.23 nm; 2. L band, range 1572.08 to 1608.32 nm.

**VCAT significance? Virtual concatenation is a standardised layer-1 inverse-multiplexing technique that can be applied to Optical Transport Network (OTN), SONET, SDH and PDH component signals. By inverse multiplexing, it bonds multiple links at a particular layer into an aggregate link, achieving a commensurate increase in the available bandwidth on the aggregate link. (A small bandwidth sketch follows below.)
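A small sketch of the resulting aggregate bandwidth, using the standard VC payload rates from G.707; the GbE-over-VC-4-7v example is a common mapping, not part of the original answer:

```python
# Payload rates (Mbit/s) of the common virtual-container sizes.
VC_PAYLOAD_MBPS = {"VC-12": 2.176, "VC-3": 48.384, "VC-4": 149.76}

def vcg_bandwidth(vc_type, members):
    """A VC-n-Xv group carries X times the payload of one VC-n."""
    return VC_PAYLOAD_MBPS[vc_type] * members

# e.g. Gigabit Ethernet is commonly mapped into VC-4-7v:
print(round(vcg_bandwidth("VC-4", 7), 1))   # ~1048.3 Mbit/s
```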