
Virtual Concatenation: Knowing the Details

They say the devil is in the details. That’s certainly the case when dealing with virtual concatenation. Clearly, designers at the chip, equipment, and carrier level have touted the wonders that virtual concatenation delivers. But, what often gets lost in these discussions are the real challenges that chip and equipment developers will face when implementing virtual concatenation in a real-world design.

In this two-part series, we’ll examine the design issues that developers will encounter when implementing virtual concatenation in a system level design. In Part 1, we’ll examine the basic benefits of virtual concatenation, the difference between high- and low-order virtual concatenation pipes, and differential delay issues. In Part 2, we’ll take a detailed look at the link capacity adjustment scheme (LCAS).

Why Virtual Concatenation Is So Hot
Much has already been said and written about the benefits of virtual concatenation over current payload mapping capabilities of Sonet and SDH. Table 1 summarizes the individual payload capacities of different commonly used Sonet or SDH paths. The table includes both high- and low-order paths with and without standard contiguous concatenation (denoted by the “c”).

While allowing a range of bandwidths to be provisioned, these current mappings do not have the granularity required to make efficient use of the existing network infrastructure. One other important point to note is that contiguous concatenation of VT1.5/VC-11s or VT2/VC-12s is not supported.

Table 1: Current Sonet and SDH Payload Capacities

 

Container (Sonet/SDH)    Type          Payload Capacity (Mbit/s)
VT1.5/VC-11              Low Order     1.600
VT2/VC-12                Low Order     2.176
STS-1/VC-3               High Order    48.384
STS-3c/VC-4              High Order    149.76
STS-12c/VC-4-4c          High Order    599.04
STS-24c/VC-4-8c          High Order    1198.08
STS-48c/VC-4-16c         High Order    2396.16
STS-192c/VC-4-64c        High Order    9584.64

 

Table 2 lists the payload capacities possible with virtual concatenation. As shown, concatenation of VT1.5/VC-11s and VT2/VC-12s is now supported, and the concatenation of high-order paths is much more granular.

Table 2: Virtual Concatenation Payload Capacities

 

Container (Sonet/SDH)    Type          Payload Capacity (Mbit/s)
VT1.5-Xv/VC-11-Xv        Low Order     X x 1.600   (X = 1 to 64)
VT2-Xv/VC-12-Xv          Low Order     X x 2.176   (X = 1 to 64)
STS-1-Xv/VC-3-Xv         High Order    X x 48.384  (X = 1 to 256)
STS-3c-Xv/VC-4-Xv        High Order    X x 149.76  (X = 1 to 256)
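
To make the granularity concrete, here is a minimal sketch (in Python) that computes the payload capacity of a VCG from the per-container rates above. The container names and X limits follow Table 2; the dictionary and function names are purely illustrative.

# Minimal sketch: VCG payload capacity from the per-container rates above.
# Container names and X limits follow Table 2; the rest is illustrative.

BASE_RATE_MBPS = {
    "VT1.5/VC-11": 1.600,    # low order,  X = 1 to 64
    "VT2/VC-12":   2.176,    # low order,  X = 1 to 64
    "STS-1/VC-3":  48.384,   # high order, X = 1 to 256
    "STS-3c/VC-4": 149.76,   # high order, X = 1 to 256
}
MAX_MEMBERS = {"VT1.5/VC-11": 64, "VT2/VC-12": 64,
               "STS-1/VC-3": 256, "STS-3c/VC-4": 256}

def vcg_capacity_mbps(container: str, x: int) -> float:
    """Payload capacity of an X-member VCG built from `container` paths."""
    if not 1 <= x <= MAX_MEMBERS[container]:
        raise ValueError("X out of range for this container type")
    return x * BASE_RATE_MBPS[container]

print(vcg_capacity_mbps("VT1.5/VC-11", 64))   # 102.4 Mbit/s (VT1.5-64v)
print(vcg_capacity_mbps("STS-1/VC-3", 256))   # 12386.304 Mbit/s (STS-1-256v)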

 

Making Use of Unused Overhead
In addition to allowing more flexible mapping, virtual concatenation also relaxes two of the restrictions upon which contiguous concatenation relies to reconstruct the signal being carried: phase alignment of the members of the concatenation and an inherent sequence order of the members. Consequently, in order to reconstruct the original signal from a virtually concatenated group (VCG), it is necessary to determine the phase alignment and sequence of the received members. The information required to support this is carried in previously unused Sonet/SDH path overhead, which is overhead generated by a payload mapper that effectively stays intact regardless of how the payload makes its way through the network to its destination. (Note: in SDH, tandem connection monitoring does modify some path overhead at an intermediate point.)

To put this in context, Figure 1 illustrates the high- and low-order paths and their path overhead. For high-order paths, virtual concatenation uses the H4 byte while for low-order paths, virtual concatenation uses bit 2 of the Z7/K4 byte.

Figure 1: High- and low-order paths and overhead.

For both high- and low-order paths, the information required is structured in a multi-frame format as shown in Figure 2. For high-order paths, the multi-frame structure is defined by the virtual concatenation overhead carried in the H4 byte. For low-order paths, on the other hand, the multi-frame structure is phase aligned with the multi-frame alignment signal (MFAS) in bit 1 of the Z7/K4 byte, which carries the extended signal label.

Figure 2: Virtual concatenation multi-frame formats.

High-Order Overhead
In high-order paths, the H4 multi-frame structure is 16 frames long, for a total of 2 ms. Within this structure there are two multi-frame indicators, MFI1 and MFI2. MFI1 is a 4-bit field that increments every frame, while MFI2 is an 8-bit field that increments every multi-frame.

The most significant and least significant nibbles of MFI2 are sent over the first two frames of a multi-frame. Together with MFI1, they form a 12-bit field that rolls over every 512 ms (4096 x 125 μs). This allows for a maximum differential path delay of less than 256 ms, ensuring that it is always possible to determine which members of a VCG arrive earliest (shortest network delay) and which members arrive latest (longest network delay).

If the differential delay were 256 ms or more, it would not be possible to know if a member with an {MFI2,MFI1}=0 is 256 ms behind or 256 ms ahead of a member with an {MFI2,MFI1}=2048.
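
The comparison behind that limit is simple modular arithmetic. The sketch below assumes the sink samples the combined 12-bit {MFI2,MFI1} counter of every member at the same instant; the function name is illustrative.

# Sketch: estimating the extra delay of a VCG member from the 12-bit
# {MFI2,MFI1} counter, which advances once per 125 us frame and rolls over
# every 4096 frames (512 ms). Only valid while the true differential delay
# is below 256 ms, which is precisely why the standard sets that limit.

FRAME_US = 125
MFI_MODULUS = 4096

def extra_delay_us(mfi_of_earliest_member: int, mfi_of_member: int) -> int:
    """Delay of a member relative to the earliest-arriving member.

    The earliest-arriving member shows the most recent (largest, modulo
    rollover) MFI value at any sampling instant.
    """
    frames_behind = (mfi_of_earliest_member - mfi_of_member) % MFI_MODULUS
    return frames_behind * FRAME_US

print(extra_delay_us(100, 96))      # 500 us
print(extra_delay_us(3, 4093))      # 750 us (counter has wrapped)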

The second piece of information conveyed in the H4 byte is the sequence indicator (SQ). This is an 8-bit field that, like MFI2, is sent a nibble at a time over two frames of the multi-frame; in this case, it is sent over the last two frames. Consequently, a high-order VCG can contain up to 256 members.

The number of members is obviously limited by the number of paths available in the transport signal. Thus, a 40-Gbit/s pipe would have to be a reality to have 256 STS-1 or VC-3 members. Referring back to the payload capacities of Table 1, for STS-1-256v/VC-3-256v the payload capacity is 256 x 48.384 Mbit/s = 12,386.304 Mbit/s. For STS-3c-256v/VC-4-256v, the payload capacity would be roughly 38 Gbit/s.

Low-Order Overhead
Figure 1 above shows that low-order paths have an inherent multi-frame structure of four Sonet/SDH frames (or 500 μs). As illustrated in Figure 2, the virtual concatenation multi-frame structure, delineated by the MFAS pattern in the extended signal label bit (bit 1 of K4), is 32 of these 500 μs multi-frames, for a total VC multi-frame (or should we say multi-multi-frame) duration of 16 ms.

Within the virtual concatenation multi-frame structure of bit 2 of the K4 byte, again, there is a multi-frame indicator (MFI) and an SQ. In this case, the MFI is a 5-bit field, sent over the first five 500 μs multi-frames of the VC multi-frame, that rolls over every 512 ms (32 x 16 ms). Again, this permits a maximum differential delay across all members of a low-order VCG of less than 256 ms.

The SQ for low-order paths is a 6-bit field transmitted over virtual concatenation multi-frames 6 through 11, allowing for up to 64 members in a low-order VCG. Again, using the values in Table 2, for VT1.5-64v/VC-11-64v the payload capacity is 102.4 Mbit/s, and the payload capacity of a VT2-64v/VC-12-64v is 139.264 Mbit/s.

Differential Delay Alignment
When data is mapped into a VCG, it is essentially 'demultiplexed', on a byte-by-byte basis, across the members of the VCG in the provisioned sequence (reflected by the SQ value of each member). At the destination, these discrete paths must be 'remultiplexed' to form the original signal. Allowing for differential delay across the members of a VCG implies that all members must be delayed to match the most-delayed member so that the 'remultiplexing' can be performed correctly.

As a concept, differential delay alignment is not particularly complex. Each member has its data written into a buffer upon reception along with some kind of indication as to where the MFI boundaries are. Data for a given MFI is then read out of each buffer, thus creating phase alignment of the members. The depth of each buffer (the difference between the read and write pointers) is a measure of the difference in delay between that member and the member that has the most network delay.
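
As a rough illustration of that mechanism, the sketch below models one member's alignment buffer. The class and method names are purely illustrative; a real design would use fixed-size RAM and coarser MFI tagging rather than an unbounded per-byte queue.

# Illustrative model of one member's delay-alignment buffer. Bytes are
# written as they arrive, tagged with the member's current MFI; the common
# reader drains all members at the same MFI, so the instantaneous buffer
# depth measures this member's delay relative to the most-delayed member.

from collections import deque

class MemberAlignmentBuffer:
    def __init__(self):
        self.fifo = deque()                  # entries of (mfi, payload byte)

    def write(self, mfi, byte):
        self.fifo.append((mfi, byte))        # written at this member's arrival time

    def read_at(self, mfi):
        """Pop the bytes belonging to the common read MFI (phase alignment)."""
        out = []
        while self.fifo and self.fifo[0][0] == mfi:
            out.append(self.fifo.popleft()[1])
        return out

    def depth(self):
        # Distance between write and read pointers, i.e. the differential
        # delay of this member versus the slowest member of the VCG.
        return len(self.fifo)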

The main issue with differential delay is the amount of buffer space required, which designers can calculate from the maximum number of members supported. For example, each VT1.5/VC-11 has a payload capacity of 1.6 Mbit/s. In the worst case, a member would have to be delayed by just under 256 ms, which represents 1.6 Mbit/s x 0.256 s, or roughly 410 kbit. Similarly, an STS-1/VC-3 requires a maximum buffer of roughly 12.4 Mbit (48.384 Mbit/s x 0.256 s).

These numbers may not seem significant until one considers the number of paths in a given transport signal. Table 3 shows the memory requirements for some potential combinations of virtual concatenation path types and the transport signals that may carry them. Note: the calculations in Table 3 reflect maximum buffer sizes on all paths assuming only payload data is buffered. At least one member of each VCG, by definition, will have minimal buffering so the actual requirements will be slightly lower. If any Path Overhead is also buffered, then the requirements may rise.

Table 3: Virtual Concatenation Delay Buffer Requirements for Various Transport Signals

 

Virtual Concatenation Path Type    Transport Signal    Number of Paths    Total Delay Buffer Size
VT1.5/VC-11                        STS-3/STM-1          84                 33 Mbit
VT1.5/VC-11                        STS-12/STM-4        336                131 Mbit
VT2/VC-12                          STS-3/STM-1          63                 33.5 Mbit
VT2/VC-12                          STS-12/STM-4        252                134 Mbit
STS-1/VC-3                         STS-12/STM-4         12                142 Mbit
STS-1/VC-3                         STS-48/STM-16        48                567 Mbit
STS-3c/VC-4                        STS-12/STM-4          4                146 Mbit
STS-3c/VC-4                        STS-48/STM-16        12                585 Mbit

 

It is clear from Table 3 that, even for low-bandwidth mapping/demapping devices (STS-3/STM-1) that support virtual concatenation, it is impractical to provide on-chip buffers allowing for 256 ms of differential delay.
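
The arithmetic behind Table 3 is easy to reproduce. The sketch below follows the per-path calculation given earlier (payload rate x 0.256 s, times the number of paths carried), so its results land close to, but not exactly on, the rounded figures in the table.

# Sketch: worst-case delay-buffer totals, per the arithmetic in the text
# (payload rate x 0.256 s per member, times the number of paths carried).
# Results are close to, but not identical with, the rounded Table 3 figures.

RATE_MBPS = {"VT1.5/VC-11": 1.600, "VT2/VC-12": 2.176,
             "STS-1/VC-3": 48.384, "STS-3c/VC-4": 149.76}
MAX_DIFF_DELAY_S = 0.256

def total_buffer_mbit(path_type, num_paths):
    return RATE_MBPS[path_type] * MAX_DIFF_DELAY_S * num_paths

print(round(total_buffer_mbit("VT1.5/VC-11", 84), 1))   # ~34.4 (Table 3: 33 Mbit)
print(round(total_buffer_mbit("STS-1/VC-3", 48), 1))    # ~594.5 (Table 3: 567 Mbit)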

The obvious way to solve this problem is to equip mapping/demapping devices with interfaces to external memory large enough to hold the amounts of data listed above. Again, this sounds straightforward, but another consideration complicates the solution: the data transfer rate between the mapper/demapper and the external buffer memory is twice the transport signal rate, because the data must be both written to and read from the buffers at that rate. For an OC-48/STM-16 this amounts to close to 5 Gbit/s. Even with 32-bit-wide memory, this results in approximately 150 Mtransfers/s.
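
A quick check of those figures, assuming the standard OC-48/STM-16 line rate of 2.488 Gbit/s:

# Quick check of the external-memory rate quoted above for OC-48/STM-16:
# every payload byte is written once and read once, so the memory interface
# carries roughly twice the line rate.

LINE_RATE_GBPS = 2.488                       # OC-48/STM-16 line rate
buffer_traffic_gbps = 2 * LINE_RATE_GBPS     # ~5 Gbit/s to/from external memory
transfers_per_sec = buffer_traffic_gbps * 1e9 / 32    # 32-bit-wide memory
print(round(transfers_per_sec / 1e6, 1))     # ~155.5 Mtransfers/s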

The memory options that support these rates are not plentiful. Essentially, these devices must support external SDRAM or SRAM. SDRAM may seem like a good solution due to the large capacities available and the apparent speed that DDR and QDR devices can support. These speeds can only be achieved, however, if access to the memory involves sustained bursts to sequential memory blocks where successive blocks sit in different pages within the SDRAM structure. This can't easily be guaranteed, as the allocation of memory is entirely dependent on the type, number, and supported delay of the members of all VCGs being terminated by the device.

SRAMs, on the other hand, can easily keep up with the required transfer rates, with no restriction on the order in which data is written or read, but capacities in the 500-Mbit range can be prohibitive in cost and board real estate. Consequently, component vendors must choose carefully how much differential delay, and what type of external memory, their mapper/demapper devices will support.

Virtual Concatenation: Knowing the Details, Part 2
The hype behind virtual concatenation has been growing for more than a year now. And the link capacity adjustment scheme is one of the reasons why. LCAS enhances the capabilities provided by virtual concatenation, allowing operators to adjust virtually concatenated groups (VCGs) on the fly, thus improving network utilization even further.

But, like virtual concatenation, LCAS implementation can be quite challenging for today’s chip and equipment designers. In Part 1, we looked at virtual concatenation and the implementation issues designers will face using this technology. Now, in Part 2, we’ll focus our attention on describing how LCAS works and the design issues engineers will face when using this technology in a chip or system design.

Understanding LCAS
The link capacity adjustment scheme (LCAS) addresses two of the trickier issues associated with virtual concatenation: the ability to increase or decrease the capacity of a VCG, and the ability to deal gracefully with member failures.

With LCAS, not all members of a VCG need to be active in order to pass data from the source (So) to the sink (Sk). Once a VCG is defined, the So and Sk equipment are responsible for agreeing on which members will carry traffic. There are also procedures that allow them to agree to remove or add members at any time. To achieve this, signaling between the source and sink is required, and some of the reserved fields in the virtual concatenation overhead are used for this purpose.

Within LCAS, a control packet is defined that carries the following fields:

  • Member status (MST)
  • Re-sequence acknowledge (RS-Ack)
  • Control (CTRL)
  • Group ID (GID)
  • CRC-3/CRC-8 (3 for LO, 8 for HO)

The positions of these fields within the VC multi-frames for high- and low-order paths are shown in Figure 3. Note that, for high-order paths, the control packet begins with the MST field in MFI n and ends with the CRC-8 field in MFI n+1.
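
For the discussion that follows, the control-packet fields can be modeled as a simple record. This is a convenience structure for illustration only, not the on-the-wire encoding.

# Illustrative record of the LCAS control-packet fields listed above
# (a convenience for discussion, not the wire format).

from dataclasses import dataclass

@dataclass
class LcasControlPacket:
    mst: int         # member status bits reported Sk -> So (OK/FAIL per member)
    rs_ack: bool     # re-sequence acknowledge, toggled by the Sk
    ctrl: str        # FIXED / ADD / NORM / EOS / IDLE / DNU (see below)
    gid: int         # group ID bits drawn from a 2^15 - 1 PRBS
    crc: int         # CRC-3 for low-order paths, CRC-8 for high-order paths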

Figure 3: Signaling overhead associated with LCAS.

The MST field provides a means of communicating, from the Sk to the So, the state of all received VCG members. The state for each member is either OK or FAIL (1 bit). Since there are potentially more members than bits in the field in a given VC multi-frame, it takes 32 high-order virtual concatenation multi-frames and 8 low-order virtual concatenation multi-frames to signal the status of all members. This signaling allows the Sk to indicate to the So that a given member has failed and may need to be removed from the list of active members of the VCG.
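
Those refresh times follow directly from the multi-frame durations given in Part 1, and they reappear later when we discuss moving processing to software:

# How long a full MST refresh takes, from the multi-frame durations above.

HO_MULTIFRAME_MS = 2      # H4 multi-frame (16 frames x 125 us)
LO_MULTIFRAME_MS = 16     # K4 bit-2 multi-frame (32 x 500 us)

print(32 * HO_MULTIFRAME_MS)   # 64 ms to report all 256 high-order members
print(8 * LO_MULTIFRAME_MS)    # 128 ms to report all 64 low-order members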

The RS-Ack field is a bit that is toggled by the Sk to indicate to the So that changes in the sequence numbers for that VCG have been evaluated. It also signals to the So that the MST information in the previous multi-frame is valid. With this signaling, the So can be informed that the changes it has requested (either member addition or removal) have been accepted by the Sk.

The MST and RS-Ack fields are identical in all members of the VCG upon transmission from the Sk.

The Control Field
The control field allows the So to send information to the Sk describing the state of each member during the next control packet. Using this field, the So can signal that the particular path should be added (ADD) to the active members, should be deleted (or remain deleted) from the active members (IDLE), or should not be used due to a failure detected at the Sk (DNU). It can also indicate that the particular path is an active member (NORM) or the active member with the highest SQ (EOS). Finally, for compatibility with non-LCAS virtual concatenation, the CTRL field can indicate that fixed bandwidth is used (FIXED).
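
Summarized as an enumeration (the names come from the text above; the numeric code points are defined in ITU-T G.7042 and are not reproduced here):

# The CTRL words described above (names per the text; the wire code points
# are defined in ITU-T G.7042 and omitted here).

from enum import Enum, auto

class Ctrl(Enum):
    FIXED = auto()   # non-LCAS, fixed-bandwidth virtual concatenation
    ADD   = auto()   # member is being added to the group
    NORM  = auto()   # active member
    EOS   = auto()   # active member with the highest sequence number
    IDLE  = auto()   # member is not (or no longer) part of the group
    DNU   = auto()   # do not use: the Sk has reported a failure on this member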

The Group ID field provides a means for the receiver to determine that all members of a received VCG come from the same transmitter. The field carries a portion of a pseudo-random bit sequence of length 2^15 - 1. This value is the same for all members of a VCG at any given MFI.

Finally, the CRC field provides a means to validate the control packet received before acting on it. In this way, the signaling link is tolerant of bit errors.

Basic LCAS Operation
When a VCG is initiated, all MSTs generated by the Sk are set to FAIL. It is then the responsibility of the So to add members to the VCG to establish data continuity. The So can set the initial SQ numbers of multiple members and set their CTRL fields to ADD. The Sk will then set all the corresponding MSTs to OK.

The first member for which the So recognizes an MST of OK has its SQ renumbered to the lowest available value, and this re-sequencing is transmitted to the Sk. Multiple members can be recognized at the same time by the So, in which case the re-sequencing may involve more than one member.

The Sk acknowledges the re-sequencing by toggling the RS-Ack in all members. After the RS-Ack is received by the So, it sets the CTRL field of the corresponding members to NORM, with the highest-SQ member being set to EOS. This process continues until all members have been added to the active group. At this point, the CTRL field of all but one of the added members will be NORM; the member with the highest SQ will have its CTRL field set to EOS.
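
A simplified, source-side sketch of that handshake is shown below. The Member record and the three event handlers are hypothetical, and in a real design the steps are spread across control-packet boundaries and interleaved with the MST and RS-Ack signaling described above.

# Simplified So-side sketch of the member-addition sequence described above.
# The Member record and handler functions are hypothetical; CTRL values are
# represented as plain strings.

from dataclasses import dataclass

@dataclass
class Member:
    path_id: str
    sq: int = 0
    ctrl: str = "IDLE"

def start_add(active, new):
    # Step 1: provisional SQs above every existing SQ, CTRL = ADD.
    next_sq = max((m.sq for m in active), default=-1) + 1
    for offset, m in enumerate(new):
        m.sq, m.ctrl = next_sq + offset, "ADD"

def on_mst_ok(active, member):
    # Step 2: the Sk reported MST = OK; renumber the member to the lowest
    # free SQ and append it to the active group. The Sk then toggles RS-Ack
    # once it has evaluated the new sequence numbers.
    member.sq = len(active)
    active.append(member)

def on_rs_ack(active):
    # Step 3: activate -- every member NORM, the highest-SQ member EOS.
    for m in active:
        m.ctrl = "NORM"
    if active:
        active[-1].ctrl = "EOS"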

Adding, Deleting Members
When members are to be added or deleted, the sequence is similar. The CTRL field for the member or members in question will be set by the So to either ADD or IDLE depending on the operation requested. The Sk will then respond with MST values of either OK or FAIL respectively. Again, the order that the updated MST values are seen and confirmed by the So will determine how the SQ values are updated.

In the event of a network failure resulting in a member failure, the Sk will set the corresponding MST (or MSTs) to FAIL. The So, upon seeing and confirming the status of this member (or members), will set the CTRL field for that member (or those members) to DNU.

If the last member has an MST of FAIL, then the active member with the next-highest SQ will have its CTRL field changed from NORM to EOS. In the event that the failure is repaired, the MST (or MSTs) will be updated by the Sk to OK. At this point, the So can update the CTRL value (or values) to NORM to indicate that the member (or members) will again carry traffic at the next boundary.
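
The corresponding failure reaction, again as an illustrative sketch reusing the hypothetical Member record from the previous example:

# Illustrative Sk-failure reaction at the So, reusing the Member record from
# the previous sketch.

def on_mst_fail(active, failed):
    failed.ctrl = "DNU"                       # member keeps its SQ but carries no data
    working = [m for m in active if m.ctrl != "DNU"]
    if working:
        for m in working:
            m.ctrl = "NORM"
        max(working, key=lambda m: m.sq).ctrl = "EOS"   # EOS falls back if needed

def on_mst_repaired(active, member):
    member.ctrl = "NORM"                      # traffic resumes at the next boundary
    working = [m for m in active if m.ctrl != "DNU"]
    for m in working:
        m.ctrl = "NORM"
    max(working, key=lambda m: m.sq).ctrl = "EOS"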

In all cases, bandwidth changes take place on the high-order frame or low-order multi-frame following the reception of the CRC of the control packet where the CTRL fields change to or from NORM. Specifically, this is synchronized with the first payload byte after the J1 or J2 following the end of the control packet. This byte will be the first one either filled with data in the case of an added member or the first one left empty in the case of a deleted member.

LCAS Design Considerations
One of the most attractive features of LCAS is the fact that it provides a mechanism to map around VCG member failures by allowing them to be temporarily removed from a VCG without user intervention. Typically, however, paths will be protected in some fashion, whether it is 1:N span protection or, more likely, unidirectional path switched ring/subnetwork connection protection (UPSR/SNCP) within the network. If this is the case, then, on a network failure, an Sk could easily lose a member, signal that condition via the MST field, and then regain that member after the So has already initiated temporary removal from the group. Without the ability to allow existing network APS schemes to settle before acting at the LCAS level, this kind of scenario can lead to a considerable amount of thrashing in the reestablishment of data continuity for the VCG after a network failure.
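
One possible mitigation, offered here purely as an assumption rather than anything the text or the standards mandate, is a hold-off timer: the So notes the MST of FAIL but only issues DNU if the failure persists after the network's protection switching has had time to settle. A sketch:

# Hypothetical hold-off before reacting to MST = FAIL, giving network-level
# APS (UPSR/SNCP, 1:N span protection) time to settle. The member attributes
# and the hold-off value are illustrative; the value is a policy choice.

import time

APS_SETTLE_S = 0.10      # illustrative hold-off period

def note_mst_fail(member, now=time.monotonic):
    if getattr(member, "fail_seen_at", None) is None:
        member.fail_seen_at = now()

def poll(member, now=time.monotonic):
    """Called periodically; demote the member only if the failure persists."""
    if getattr(member, "fail_seen_at", None) is None:
        return
    if member.mst_ok:                              # failure cleared in time: no action
        member.fail_seen_at = None
    elif now() - member.fail_seen_at >= APS_SETTLE_S:
        member.ctrl = "DNU"                        # act only after the hold-off expires
        member.fail_seen_at = None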

Similarly, while the flexibility of SQ assignment can allow for graceful inclusion or exclusion of VCG members, it can also create significant complication in managing the members. When an So chooses to add multiple members, it must arbitrarily set the SQ values for each member that it wishes to add to something greater than the maximum existing SQ value.

Once an MST=OK is received for any of those members, the So then sets that member's SQ value to one greater than the highest active member. This means that all the 'new' SQ values of the other members waiting to be added may need to be rearranged.

The RS-Ack is defined specifically so that the Sk can evaluate the new SQ information and acknowledge it before data is placed on the new member, but any table driven alignment scheme based on received SQ values must be tolerant of these changes. Also, software must be able to manage the changes in correlation between Sonet/SDH paths and their SQ values over time.

Additionally, when members are deleted from a VCG, their new SQ values can be any value greater than that of the highest active member. There is no restriction that these values be unique, so many inactive members can share the same SQ. Again, context-switched state machines that run through the SQ values to ensure that all members are processed properly must handle this condition.

Other potential problems can arise from how unused members are handled. If unused paths are received with AIS or unequipped, then the path overhead will contain no virtual concatenation signaling of any kind. There is then no way to determine any kind of virtual concatenation multi-frame alignment of these members. So it must be possible to achieve alignment on the working members of a VCG regardless of the state of all other members.

Moving Processing to Software
As seen above, complications can arise in how different information is interpreted when using LCAS. Due to this complexity and the signaling durations involved (e.g., the 64 or 128 ms required to update all MSTs), it is attractive to move some of the processing to software, where more variables can be considered more easily. In fact, some functions, such as waiting for APS to settle, are better handled in software.

Care must be taken in establishing the hardware/software partition, however. For example, if a system needs to support hitless addition or deletion of members, the time between the reception of a control packet and the point at which the data multiplexing configuration changes is just 55.6 μs for high-order paths and 250 μs for low-order paths. Software will typically not be able to reconfigure the data multiplexing quickly enough once it has determined what changes are about to occur. It is possible, depending on how many VCGs have changes going on at the same time, that a software implementation will not even sort out the changes before they happen.

Wrap Up
Access equipment that supports both high- and low-order mapping allows the service provider to tailor the connectivity granularity and cost based on the requirements of the customers at each installation while only needing to worry about a limited product inventory. With virtual concatenation, the service provider can efficiently provide this appropriate level of connectivity without having to resort to statistical multiplexing techniques that complicate service level agreements (SLAs). With LCAS, bandwidth flexibility and fault tolerance are added.

Designing the systems and components to support virtual-concatenation-enabled Sonet and SDH infrastructures is not trivial, however. Designers have to draw on their experience with legacy equipment and the problems found in the network today to ensure a robust implementation of tomorrow’s network.

