
As defined in G.709, an ODUk container consists of an OPUk (Optical Payload Unit) plus a specific ODUk overhead (OH). OPUk OH information is added to the OPUk information payload to create an OPUk; it includes information to support the adaptation of client signals. Within the OPUk overhead there is the payload structure identifier (PSI), which includes the payload type (PT). The payload type (PT) is used to indicate the composition of the OPUk signal.

When an ODUj signal is multiplexed into an ODUk, the ODUj signal is first extended with frame alignment overhead and then mapped into an Optical channel Data Tributary Unit (ODTU). Two different types of ODTU are defined in G.709:

– ODTUjk ((j,k) = {(0,1), (1,2), (1,3), (2,3)}; ODTU01, ODTU12, ODTU13 and ODTU23), in which an ODUj signal is mapped via the asynchronous mapping procedure (AMP), defined in clause 19.5 of G.709.

– ODTUk.ts ((k,ts) = (2,1..8), (3,1..32), (4,1..80)), in which a lower order ODU (ODU0, ODU1, ODU2, ODU2e, ODU3, ODUflex) signal is mapped via the generic mapping procedure (GMP), defined in clause 19.6 of G.709.

When the PT value is 20 or 21, it is used together with the OPUk type (k = 1, 2, 3, 4) to discriminate between two different ODU multiplex structures (ODTUGx):

– Value 20: supporting ODTUjk only,

– Value 21: supporting ODTUk.ts or ODTUk.ts and ODTUjk.

The discrimination is needed for OPUk with k = 2 or 3, since OPU2 and OPU3 can support both ODU multiplex structures. For OPU4 and OPU1, only one type of ODTUG is supported: ODTUG4 with PT = 21 and ODTUG1 with PT = 20. The relationship between PT and TS granularity lies in the fact that the two different ODTUGk structures discriminated by PT and OPUk are characterized by two different tributary slot (TS) granularities of the related OPUk: the former at 2.5 Gbit/s, the latter at 1.25 Gbit/s.
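As a rough illustration of the PT/OPUk relationship described above, the sketch below (hypothetical helper name and lookup table; values taken from the text) maps an (OPUk, PT) pair to its tributary slot granularity:

    # Tributary-slot granularity implied by OPUk level and payload type (PT).
    # PT = 20 -> 2.5 Gbit/s tributary slots (ODTUjk only)
    # PT = 21 -> 1.25 Gbit/s tributary slots (ODTUk.ts, or ODTUk.ts plus ODTUjk)
    TS_GRANULARITY_GBPS = {
        (1, 20): 2.5,                   # OPU1: ODTUG1 with PT=20 only
        (2, 20): 2.5, (2, 21): 1.25,    # OPU2: supports both multiplex structures
        (3, 20): 2.5, (3, 21): 1.25,    # OPU3: supports both multiplex structures
        (4, 21): 1.25,                  # OPU4: ODTUG4 with PT=21 only
    }

    def ts_granularity(opuk_level: int, payload_type: int) -> float:
        """Return the TS granularity in Gbit/s, or raise if the (OPUk, PT)
        combination is not one of the defined ODU multiplex structures."""
        try:
            return TS_GRANULARITY_GBPS[(opuk_level, payload_type)]
        except KeyError:
            raise ValueError(f"PT={payload_type} is not defined for OPU{opuk_level}")

    print(ts_granularity(2, 21))   # 1.25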

To detect a failure that occurs at the source (e.g., laser failure) or the transmission facility (e.g., fiber cut), all incoming SONET signals are monitored for loss of physical-layer signal (optical or electrical). The detection of an LOS defect must take place within a reasonably short period of time for timely restoration of the transported payloads.

A SONET NE shall monitor all incoming SONET signals (before descrambling) for an “all-zeros patterns,” where an all-zeros pattern corresponds to no light pulses for OC-N optical interfaces and no voltage transitions for STS-1 and STS-3 electrical interfaces. An LOS defect shall be detected when an all-zeros pattern on the incoming SONET signal lasts 100 μs or longer. If an all-zeros pattern lasts 2.3 μs or less, an LOS defect shall not be detected. The treatment of all-zeros patterns lasting between 2.3 μs and 100 μs for the purpose of LOS defect detection is not specified and is therefore left to the choice of the equipment designer. For testing conformance to the LOS detection requirement, it is sufficient to apply an all-zeros pattern lasting at most 2.3 μs, and to apply an all-zeros pattern lasting at least 100 μs.
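As a minimal sketch of these thresholds (the function name and structure are invented for illustration, not taken from any standard), an LOS decision based only on the duration of an all-zeros interval could look like this:

    def los_defect_decision(all_zeros_duration_us: float) -> str:
        """Classify an all-zeros interval against the LOS thresholds described
        above: detect at 100 us or longer, never detect at 2.3 us or shorter."""
        if all_zeros_duration_us >= 100.0:
            return "LOS defect shall be detected"
        if all_zeros_duration_us <= 2.3:
            return "LOS defect shall not be detected"
        return "unspecified (left to the equipment designer)"

    for duration in (1.0, 50.0, 150.0):
        print(duration, "us ->", los_defect_decision(duration))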

Note that although an all-zeros pattern that lasts for less than 2.3 μs must not cause the detection of an LOS defect, an NE that receives a relatively long (in terms of the number of bit periods) all-zeros pattern of less than 2.3 μs is not necessarily expected to continue to operate error-free through that pattern. For example, in such cases it is possible that the NE’s clock recovery circuitry may drift off frequency due to the lack of incoming pulses, and therefore the NE may be “looking in the wrong bit positions” for the SONET framing pattern after the all-zeros pattern ends. If this occurs, it will continue for approximately 500 μs, at which point the NE will detect an SEF defect. The NE would then perform the actions associated with SEF defect detection (e.g., initiate a search for the “new” framing pattern position), rather than the actions associated with LOS defect detection (e.g., AIS and RDI insertion, possible protection switch initiation). In addition to monitoring for all-zeros patterns, a SONET NE may also detect an LOS defect if the received signal level (e.g., the incoming optical power) drops below an implementation-determined threshold.

INTRODUCTION

For more than 30 years, Ethernet has evolved to meet the growing demands of packet-switched networks. It has become the unifying technology enabling communications via the Internet and other networks using Internet Protocol (IP). Due to its proven low cost, known reliability, and simplicity, the majority of today’s internet traffic starts or ends on an Ethernet connection. This popularity has resulted in a complex ecosystem between carrier networks, enterprise networks, and consumers, creating a symbiotic relationship between its various parts.

In 2006, the IEEE 802.3 working group formed the Higher Speed Study Group (HSSG) and found that the Ethernet ecosystem needed something faster than 10 Gigabit Ethernet. The growth in bandwidth for network aggregation applications was found to be outpacing the capabilities of networks employing link aggregation with 10 Gigabit Ethernet. As the HSSG studied the issue, it was determined that computing and network aggregation applications were growing at different rates. For the first time in the history of Ethernet, a Higher Speed Study Group determined that two new rates were needed: 40 gigabit per second for server and computing applications and 100 gigabit per second for network aggregation applications.

The IEEE P802.3ba 40 Gb/s and 100 Gb/s Ethernet Task Force was formed in January 2008 to develop a 40 Gigabit Ethernet and 100 Gigabit Ethernet draft standard. Encompassed in this effort was the development of physical layer specifications for communication across backplanes, copper cabling, multi-mode fibre, and single-mode fibre. Continued efforts by the Task Force led to the approval of the IEEE Std 802.3ba-2010 40 Gb/s and 100 Gb/s Ethernet amendment to the IEEE Std 802.3-2008 Ethernet standard on June 17, 2010 by the IEEE Standards Board.

 

OBJECTIVE

The objectives that drove the development of this standard are that it:

  • Support full-duplex operation only
  • Preserve the 802.3 / Ethernet frame format utilizing the 802.3 media access controller (MAC)
  • Preserve minimum and maximum frame size of current 802.3 standard
  • Support a bit error rate (BER) better than or equal to 10^-12 at the MAC/physical layer service interface
  • Provide appropriate support for optical transport network (OTN)
  • Support a MAC data rate of 40 gigabit per second
  • Provide physical layer specifications which support 40 gigabit per second operation over:
      • at least 10 km on single-mode fibre (SMF)
      • at least 100 m on OM3 multi-mode fibre (MMF)
      • at least 7 m over a copper cable assembly
      • at least 1 m over a backplane
  • Support a MAC data rate of 100 gigabit per second
  • Provide physical layer specifications which support 100 gigabit per second operation over:
      • at least 40 km on SMF
      • at least 10 km on SMF
      • at least 100 m on OM3 MMF
      • at least 7 m over a copper cable assembly

ARCHITECTURE

The 40 Gb/s media system defines a physical layer (PHY) that is composed of a set of IEEE sublayers. Figure 1-1 shows the sublayers involved in the PHY. The standard defines an XLGMII logical interface, using the Roman numerals XL to indicate 40 Gb/s. This interface includes a 64-bit-wide path over which frame data bits are sent to the PCS. The FEC and Auto-Negotiation sublayers may or may not be used, depending on the media type involved.

The 100 Gb/s media system defines a physical layer (PHY) that is composed of a set of IEEE sublayers. Figure 1-2 shows the sublayers involved in the PHY. The standard defines a CGMII logical interface, using the Roman numeral C to indicate 100 Gb/s. This interface defines a 64-bit-wide path, over which frame data bits are sent to the PCS. The FEC and AN sublayers may or may not be used, depending on the media type involved.


Figure 1-2

PCS (Physical Coding Sublayer) LANES

To help meet the engineering challenges of providing 40 Gb/s data flows, the IEEE engineers provided a multilane distribution system for data through the PCS sublayer of the Ethernet interface.

The PCS translates between the respective media independent interface (MII) for each rate and the PMA sublayer. The PCS is responsible for the encoding of data bits into code groups for transmission via the PMA and the subsequent decoding of these code groups from the PMA. The Task Force developed a low-overhead multilane distribution scheme for the PCS for 40 Gigabit Ethernet and 100 Gigabit Ethernet.

This scheme has been designed to support all PHY types for both 40 Gigabit Ethernet and 100 Gigabit Ethernet. It is flexible and scalable, and will support any future PHY types that may be developed, based on future advances in electrical and optical transmission. The PCS layer also performs the following functions:

  • Delineation of frames
  • Transport of control signals
  • Ensuring the clock transition density needed by the physical optical and electrical technology
  • Striping and reassembling the information across multiple lanes

The PCS leverages the 64B/66B coding scheme that was used in 10 Gigabit Ethernet. It provides a number of useful properties including low overhead and sufficient code space to support necessary code words, consistent with 10 Gigabit Ethernet.

PCS lane for 10 Gb/s Ethernet

The multilane distribution scheme developed for the PCS is fundamentally based on a striping of the 66‐bit blocks across multiple lanes. The mapping of the lanes to the physical electrical and optical channels that will be used in any implementation is complicated by the fact that the two sets of interfaces are not necessarily coupled. Technology development for either a chip interface or an optical interface is not always tied together. Therefore, it was necessary to develop an architecture that would enable the decoupling between the evolution of the optical interface widths and the evolution of the electrical interface widths.

The transmit PCS, therefore, performs the initial 64B/66B encoding and scrambling on the aggregate channel (40 or 100 gigabits per second) before distributing the 66-bit blocks on a round-robin basis across the multiple lanes, referred to as “PCS lanes,” as illustrated in Figure 2.

The number of PCS lanes needed is the least common multiple of the expected widths of optical and electrical interfaces. For 100 Gigabit Ethernet, 20 PCS lanes have been chosen. The number of electrical or optical interface widths supportable in this architecture is equal to the number of factors of the total PCS lane count. Therefore, 20 PCS lanes support interface widths of 1, 2, 4, 5, 10 and 20 channels or wavelengths. For 40 Gigabit Ethernet, 4 PCS lanes support interface widths of 1, 2, and 4 channels or wavelengths.

Figure 2- Virtual lane data distribution
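The relationship between the PCS lane count and the supportable interface widths can be shown in a few lines of Python (a sketch; the function name is invented):

    def supported_widths(pcs_lanes: int) -> list[int]:
        """Interface widths (channels or wavelengths) supportable with a given
        number of PCS lanes: every factor of the lane count."""
        return [w for w in range(1, pcs_lanes + 1) if pcs_lanes % w == 0]

    print(supported_widths(20))   # [1, 2, 4, 5, 10, 20] -> 100 Gigabit Ethernet
    print(supported_widths(4))    # [1, 2, 4]            -> 40 Gigabit Ethernet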

Once the PCS lanes are created they can then be multiplexed into any of the supportable interface widths. Each PCS lane has a unique lane marker, which is inserted once every 16,384 blocks. All multiplexing is done at the bit‐level. The round‐robin bit‐level multiplexing can result in multiple PCS lanes being multiplexed into the same physical channel. The unique property of the PCS lanes is that no matter how they are multiplexed together, all bits from the same PCS lane follow the same physical path, regardless of the width of the physical interface. This enables the receiver to be able to correctly re‐assemble the aggregate channel by first de‐multiplexing the bits to re‐assemble the PCS lane and then re‐align the PCS lanes to compensate for any skew. The unique lane marker also enables the de‐skew operation in the receiver. Bandwidth for these lane markers is created by periodically deleting inter‐packet gaps (IPG). These alignment blocks are also shown in Figure 2.

The receiver PCS realigns multiple PCS lanes using the embedded lane markers and then re‐orders the lanes into their original order to reconstruct the aggregate signal.
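The following toy sketch (invented names; not the actual 802.3ba bit-level multiplexing, which also inserts a lane marker every 16,384 blocks) illustrates the round-robin distribution of blocks to PCS lanes and their reassembly at the receiver:

    def distribute(blocks, n_lanes):
        """Transmit side: round-robin distribution of 66-bit blocks to PCS lanes."""
        lanes = [[] for _ in range(n_lanes)]
        for i, block in enumerate(blocks):
            lanes[i % n_lanes].append(block)
        return lanes

    def reassemble(lanes):
        """Receive side: interleave the (already de-skewed and re-ordered) lanes
        back into the original aggregate block stream."""
        blocks = []
        depth = max(len(lane) for lane in lanes)
        for i in range(depth):
            for lane in lanes:
                if i < len(lane):
                    blocks.append(lane[i])
        return blocks

    blocks = [f"blk{i}" for i in range(8)]
    assert reassemble(distribute(blocks, 4)) == blocks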

Two key advantages of the PCS multilane distribution methodology are that all the encoding, scrambling and de‐skew functions can all be implemented in a CMOS device (which is expected to reside on the host device), and minimal processing of the data bits (other than bit muxing) happens in the high speed electronics embedded with an optical module. This will simplify the functionality and ultimately lower the costs of these high‐speed optical interfaces.

The PMA sublayer enables the interconnection between the PCS and any type of PMD sublayer. A PMA sublayer will also reside on either side of a retimed interface, referred to as “XLAUI” (40 gigabit per second attachment unit interface) for 40 Gigabit Ethernet or “CAUI” (100 gigabit per second attachment unit interface) for 100 Gigabit Ethernet.

PCS multilane for 40 Gb/s Ethernet

PCS lanes over a faster media system

 100 Gb/s multilane transmit operation

100 Gb/s multi-lane receive operation

 

Summary

Ethernet has become the unifying technology enabling communications via the Internet and other networks using IP. Its popularity has resulted in a complex ecosystem between carrier networks, data centers, enterprise networks, and consumers with a symbiotic relationship between the various parts.

100 GbE and 40 GbE technologies are rapidly approaching standardization and deployment. A key factor in their success will be the ability to utilize existing fiber and copper media in an environment of advancing technologies. The physical coding sublayer (PCS) of the 802.3 architecture is in a perfect position to facilitate this flexibility. The current baseline proposal for PCS implementation uses a unique virtual lane concept that provides the mechanism to handle differing electrical and optical paths.

 

Notes: 64b/66b is a line code that transforms 64-bit data to 66-bit line code to provide enough state changes to allow reasonable clock recovery and facilitate alignment of the data stream at the receiver.

PAUSE frames are a mechanism used in Ethernet flow control that allows an interface or switch port to send a signal requesting a short pause in frame transmission.

The PAUSE system of flow control on full-duplex link segments, originally defined in 802.3x, uses MAC control frames to carry the PAUSE commands. The MAC control opcode for a PAUSE command is 0x0001 (hex). A station that receives a MAC control frame with this opcode in the first two bytes of the data field knows that the control frame is being used to implement the PAUSE operation, for the purpose of providing flow control on a full-duplex link segment. Only stations configured for full-duplex operation may send PAUSE frames.

When a station equipped with MAC control wishes to send a PAUSE command, it sends a PAUSE frame to the 48-bit destination multicast address of 01-80-C2-00-00-01. This particular multicast address has been reserved for use in PAUSE frames. Having a well-known multicast address simplifies the flow control process by making it unnecessary for a station at one end of the link to discover and store the address of the station at the other end of the link.

Another advantage of using this multicast address arises from the use of flow control on full-duplex segments between switches. The particular multicast address used was selected from a range of addresses reserved by the IEEE 802.1D standard, which specifies basic Ethernet switch (bridge) operation. Normally, a frame with a multicast destination address that is sent to a switch will be forwarded out all other ports of the switch. However, this range of multicast addresses is special: they will not be forwarded by an 802.1D-compliant switch. Instead, frames sent to these addresses are understood by the switch to be frames meant to be acted upon within the switch.

A station sending a PAUSE frame to the special multicast address includes not only the PAUSE opcode, but also the period of pause time being requested, in the form of a two-byte integer. This number contains the length of time for which the receiving station is requested to stop transmitting data. The pause time is measured in units of pause “quanta,” where each unit is equal to 512 bit times. The range of possible pause time requests is from 0 through 65,535 units.
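A small, hypothetical helper shows how a pause_time value translates into real time for a given link speed:

    def pause_duration_seconds(pause_time: int, link_speed_bps: float) -> float:
        """Convert a PAUSE frame's pause_time (0..65,535) into seconds.
        One pause quantum equals 512 bit times at the link's bit rate."""
        return pause_time * 512 / link_speed_bps

    print(pause_duration_seconds(65535, 1e9))   # maximum pause on 1 Gb/s: ~0.0336 s
    print(pause_duration_seconds(2, 100e6))     # two quanta on 100 Mb/s: 10.24 us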


 

Figure 1 shows what a PAUSE frame looks like. The PAUSE frame is carried in the data field of the MAC control frame. The MAC control opcode of 0x0001 indicates that this is a PAUSE frame. The PAUSE frame carries a single parameter, defined as the pause_time in the standard. In this example, the content of pause_time is 2, indicating a request that the device at the other end of the link stop transmitting for a period of two pause quanta (1,024 bit times total).

By using MAC control frames to send PAUSE requests, a station at one end of a full-duplex link can request the station at the other end of the link to stop transmitting frames for a period of time. This provides real-time flow control between switches, or between a switch and a server that are equipped with the optional MAC control software and connected by a full-duplex link.
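To make the frame layout concrete, here is a minimal sketch (hypothetical function; preamble/SFD and FCS omitted) that assembles a PAUSE frame from the fields described above: the reserved multicast destination, the MAC Control EtherType 0x8808, the PAUSE opcode 0x0001, the pause_time, and zero padding up to the 46-byte minimum data field:

    def build_pause_frame(source_mac: bytes, pause_time: int) -> bytes:
        """Assemble a PAUSE frame (without preamble/SFD or FCS)."""
        dst = bytes.fromhex("0180C2000001")          # reserved PAUSE multicast address
        ethertype = (0x8808).to_bytes(2, "big")      # MAC Control EtherType
        opcode = (0x0001).to_bytes(2, "big")         # PAUSE opcode
        quanta = pause_time.to_bytes(2, "big")       # requested pause, in quanta
        payload = opcode + quanta
        payload += bytes(46 - len(payload))          # zero padding to the 46-byte minimum
        return dst + source_mac + ethertype + payload

    frame = build_pause_frame(bytes.fromhex("F02E156C779B"), pause_time=2)
    print(len(frame), frame.hex())                   # 60 bytes before the 4-byte FCS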

The Ethernet Frame

The organization of the Ethernet frame is central to the operation of the system. The Ethernet standard determines both the structure of a frame and when a station is allowed to send a frame. The frame was first defined in the original Ethernet DEC-Intel-Xerox (DIX) standard, and was later redefined and modified in the IEEE 802.3 standard. The changes between the two standards were mostly cosmetic, except for the type or length field.

The DIX standard defined a type field in the frame. The first 802.3 standard (published in 1985) specified this field as a length field, with a mechanism that allowed both versions of frames to coexist on the same Ethernet system. Most networking software kept using the type field version of the frame. A later version of the IEEE 802.3 standard was changed to define this field of the frame as being either length or type, depending on usage.

Figure 1-1 shows the DIX and IEEE versions of the Ethernet frame. There are three sizes of frame currently defined in the standard, and a given Ethernet interface must support at least one of them. The standard recommends that new implementations support the most recent frame definition, called an envelope frame, which has a maximum size of 2,000 bytes. The two other sizes are basic frames, with a maximum size of 1,518 bytes, and Q-tagged frames with a maximum of 1,522 bytes.

 

 

Because the DIX and IEEE basic frames both have a maximum size of 1,518 bytes and are identical in terms of the number and length of fields, Ethernet interfaces can send either DIX or IEEE basic frames. The only difference in these frames is in the contents of the fields and the subsequent interpretation of those contents by the network interface software.

Now, we’ll take a detailed tour of the frame fields.

Preamble

The frame begins with the 64-bit preamble field, which was originally incorporated to allow 10 Mb/s Ethernet interfaces to synchronize with the incoming data stream before the fields relevant to carrying the content arrived.

The preamble was initially provided to allow for the loss of a few bits due to signal start-up delays as the signal propagates through a cabling system. Like the heat shield of a spacecraft, which protects the spacecraft from burning up during reentry, the preamble was originally developed as a shield to protect the bits in the rest of the frame when operating at 10 Mb/s.

The original 10 Mb/s cabling systems could include long stretches of coaxial cables, joined by signal repeaters. The preamble ensures that the entire path has enough time to start up, so that signals are received reliably for the rest of the frame.

The higher-speed Ethernet systems use more complex mechanisms for encoding the signals that avoid any signal start-up losses, and these systems don’t need a preamble to protect the frame signals. However, it is maintained for backward compatibility with the original Ethernet frame and to provide some extra timing for interframe housekeeping, as demonstrated, for example, in the 40 Gb/s system.

While there are differences in how the two standards formally defined the preamble bits, there is no practical difference between the DIX and IEEE preambles. The pattern of bits being sent is identical:

DIX standard

In the DIX standard, the preamble consists of eight “octets,” or 8-bit bytes. The first seven comprise a sequence of alternating ones and zeros. The eighth byte of the preamble contains 6 bits of alternating ones and zeros, but ends with the special pattern of “1, 1.” These two bits signal to the receiving interface that the end of the preamble has been reached, and that the bits that follow are the actual fields of the frame.

IEEE standard

In the 802.3 specification, the preamble field is formally divided into two parts consisting of seven bytes of preamble and one byte called the start frame delimiter (SFD). The last two bits of the SFD are 1, 1, as with the DIX standard.

Destination Address

The destination address field follows the preamble. Each Ethernet interface is assigned a unique 48-bit address, called the interface’s physical or hardware address. The destination address field contains either the 48-bit Ethernet address that corresponds to the address of the interface in the station that is the destination of the frame, a 48-bit multicast address, or the broadcast address.

Ethernet interfaces read in every frame up through at least the destination address field. If the destination address does not match the interface’s own Ethernet address, or one of the multicast or broadcast addresses that the interface is programmed to receive, then the interface is free to ignore the rest of the frame. Here is how the two standards implement destination addresses:

DIX standard

The first bit of the destination address, as sent onto the network medium, is used to distinguish physical addresses from multicast addresses. If the first bit is zero, then the address is the physical address of an interface, which is also known as a unicast address, because a frame sent to this address only goes to one destination. If the first bit of the address is a one, then the frame is being sent to a multicast address. If all 48 bits are ones, this indicates the broadcast, or all-stations, address.

IEEE standard

The IEEE 802.3 version of the frame adds significance to the second bit of the destination address, which is used to distinguish between locally and globally administered addresses. A globally administered address is a physical address assigned to the interface by the manufacturer, which is indicated by setting the second bit to zero. (DIX Ethernet addresses are always globally administered.) If the address of the Ethernet interface is administered locally for some reason, then the second bit is supposed to be set to a value of one. In the case of a broadcast address, the second bit and all other bits are ones in both the DIX and IEEE standards.

Locally administered addresses are rarely used on Ethernet systems, because each Ethernet interface is assigned its own unique 48-bit address at the factory. Locally administered addresses, however, were used on some other local area network systems.

Understanding physical addresses

In Ethernet, the 48-bit physical address is written as 12 hexadecimal digits with the digits paired in groups of two, representing an octet (8 bits) of information. The octet order of transmission on the Ethernet is from the leftmost octet (as written or displayed) to the rightmost octet. The actual transmission order of bits within the octet, however, goes from the least significant bit of the octet through to the most significant bit.

This means that an Ethernet address that is written as the hexadecimal string F0-2E-15-6C-77-9B is equivalent to the following sequence of bits, sent over the Ethernet channel from left to right:

0000 1111 0111 0100 1010 1000 0011 0110 1110 1110 1101 1001.

Therefore, the 48-bit destination address that begins with the hexadecimal value 0xF0 is a unicast address, because the first bit sent on the channel is a zero.
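A short sketch (invented helper names) reproduces the bit-order example above and checks the individual/group bit:

    def wire_bits(mac_hex: str) -> str:
        """Bit order actually transmitted: octets left to right, each octet
        sent least significant bit first."""
        octets = bytes.fromhex(mac_hex.replace("-", "").replace(":", ""))
        return " ".join(format(b, "08b")[::-1] for b in octets)

    def is_multicast(mac_hex: str) -> bool:
        """The first bit on the wire (LSB of the first octet) is the
        individual/group bit: 0 = unicast, 1 = multicast/broadcast."""
        first_octet = bytes.fromhex(mac_hex.replace("-", ""))[0]
        return bool(first_octet & 0x01)

    print(wire_bits("F0-2E-15-6C-77-9B"))
    # 00001111 01110100 10101000 00110110 11101110 11011001
    print(is_multicast("F0-2E-15-6C-77-9B"))   # False -> unicast address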

Source Address

The next field in the frame is the source address. This is the physical address of the device that sent the frame. The source address is not interpreted in any way by the Ethernet MAC protocol, although it must always be the unicast address of the device sending the frame. It is provided for the use of high-level network protocols, and as an aid in troubleshooting. It is also used by switches to build a table associating source addresses with switch ports. An Ethernet station uses its physical address as the source address in any frame it transmits.

The DIX standard notes that a station can change the Ethernet source address, while the IEEE standard does not specifically state that an interface may have the ability to override the 48-bit physical address assigned by the manufacturer. However, all Ethernet interfaces in use these days appear to allow the physical address to be changed, which makes it possible for the network administrator or the high-level network software to modify the Ethernet interface address if necessary.

To provide the physical address used in the source address field, a vendor of Ethernet equipment acquires an organizationally unique identifier (OUI), which is a unique 24-bit identifier assigned by the IEEE. The OUI forms the first half of the physical address of any Ethernet interface that the vendor manufactures. As each interface is manufactured, the vendor also assigns a unique address to the interface using the second 24 bits of the 48-bit address space, and that, combined with the OUI, creates the 48-bit address. The OUI may make it possible to identify the vendor of the interface chip, which can sometimes be helpful when troubleshooting network problems.

Q-Tag

The Q-tag is so called because it carries an 802.1Q tag, also known as a VLAN or priority tag. The 802.1Q standard defines a virtual LAN (VLAN) as one or more switch ports that function as a separate and independent Ethernet system on a switch. Ethernet traffic within a given VLAN (e.g., VLAN 100) will be sent and received only on those ports of the switch that are defined to be members of that particular VLAN (in this case, VLAN 100). A 4-byte-long Q-tag is inserted in an Ethernet frame between the source address and the length/type field to identify the VLAN to which the frame belongs. When a Q-tag is present, the minimum data field size is reduced to 42 bytes, maintaining a minimum frame size of 64 bytes.

Switches can be connected together with an Ethernet segment that functions as a trunk connection that carries Ethernet frames with VLAN tags in them. That, in turn, makes it possible for Ethernet frames belonging to VLAN 100, for example, to be carried between multiple switches and sent or received on switch ports that are assigned to VLAN 100.

VLAN tagging, a vendor innovation, was originally accomplished using a variety of proprietary approaches. Development of the IEEE 802.1Q standard for virtual bridged LANs produced the VLAN tag as a vendor-neutral mechanism for identifying which VLAN a frame belongs to.

The addition of the 4-byte VLAN tag causes the maximum size of an Ethernet frame to be extended from the original maximum of 1,518 bytes (not including the preamble) to a new maximum of 1,522 bytes. Because VLAN tags are only added to Ethernet frames by switches and other devices that have been programmed to send and receive VLAN-tagged frames, this does not affect traditional, or “classic,” Ethernet operation.

The first two bytes of the Q-tag contain an Ethernet type identifier of 0x8100. If an Ethernet station that is not programmed to send or receive a VLAN-tagged frame happens to receive a tagged frame, it will see what looks like a type identifier for an unknown protocol type and simply discard the frame.

Envelope Prefix and Suffix

As networks grew in complexity and features, the IEEE received requests for more tags to achieve new goals. The VLAN tag provided space for a VLAN ID and Class of Service (CoS) bits, but vendors and standards groups wanted to add extra tags to support new bridging features and other schemes.

To accommodate these requests, the 802.3 standards engineers defined an “envelope frame,” which adds an extra 482 bytes to the maximum frame size. The envelope frame was specified in the 802.3as supplement to the standard, adopted in 2006. In another change, the tag data was added to the data field to produce a MAC Client Data field. Because the MAC client data field includes the tagging fields, it may seem like the frame size definition has changed, but in fact this is just a way of referring to the combination of tag data and the data field for the purpose of defining the envelope frame.

The 802.3as supplement modified the standard to state that an Ethernet implementation should support at least one of three maximum MAC client data field sizes. The data field size continues to be defined as 46 to 1,500 bytes, but to that is added the tagging information to create the MAC client data field, resulting in the following MAC client data field sizes:

  • 1,500-byte “basic frames” (no tagging information)
  • 1,982-byte “envelope frames” (1,500-byte data field plus 482 bytes for all tags)
  • 1,504-byte “Q-tagged frames” (1,500-byte data field plus 4-byte tag)

The contents of the tag space are not defined in the Ethernet standard, allowing maximum flexibility for the other standards to provide tags in Ethernet frames. Either or both prefix and suffix tags can be used in a given frame, occupying a maximum tag space of 482 bytes if either or both are present. This can result in a maximum frame size of 2,000 bytes.

The latest standard simply includes the Q-tag as one of the tags that can be carried in an envelope prefix. The standard notes, “All Q-tagged frames are envelope frames, but not all envelope frames are Q-tagged frames.” In other words, you can use the envelope space for any kind of tagging, and if you use a Q-tag, then it is carried in the envelope prefix as defined in the latest standard. An envelope frame carrying a Q-tag will have a minimum data size of 42 bytes, preserving the minimum frame size of 64 bytes.

Tagged frames are typically sent between switch ports that have been configured to add and remove tags as necessary to achieve their goals. Those goals can include VLAN operations and tagging a frame as a member of a given VLAN, or more complex tagging schemes to provide information for use by higher-level switching and routing protocols. Normal stations typically send basic Ethernet frames without tags, and will drop tagged frames that they are not configured to accept.

Type or Length Field

The old DIX standard and the IEEE standard implement the type and/or length fields differently:

DIX standard

In the DIX Ethernet standard, this 16-bit field is called a type field, and it always contains an identifier that refers to the type of high-level protocol data being carried in the data field of the Ethernet frame. For example, the hexadecimal value 0x0800 has been assigned as the identifier for the Internet Protocol (IP). A DIX frame being used to carry an IP packet is sent with the value of 0x0800 in the type field of the frame. All IP packets are carried in frames with this value in the type field.

 IEEE standard

When the IEEE 802.3 standard was first published in 1985, the type field was not included, and instead the IEEE specifications called this field a length field. Type fields were added to the IEEE 802.3 standard in 1997, so the use of a type field in the frame is officially recognized in 802.3. This change simply made the common practice of using the type field an official part of the standard. The identifiers used in the type field were originally assigned and maintained by Xerox, but with the type field now part of the IEEE standard, the responsibility for assigning type numbers was transferred to the IEEE.

In the IEEE 802.3 standard, this field is called a length/type field, and the hexadecimal value in the field indicates the manner in which the field is being used. The first octet of the field is considered the most significant octet in terms of numeric value.

If the value in this field is numerically less than or equal to 1,500 (decimal), then the field is being used as a length field. In that case, the value in the field indicates the number of logical link control (LLC) data octets that follow in the data field of the frame. If the number of LLC octets is less than the minimum required for the data field of the frame, then octets of padding data will automatically be added to make the data field large enough. The content of the padding data is unspecified by the standard. Upon reception of the frame, the length field is used to determine the length of valid data in the data field, and the padding data is discarded.

If the value in this field of the frame is numerically greater than or equal to 1,536 decimal (0x600 hex), then the field is being used as a type field. The range of 1,501 to 1,535 was intentionally left undefined in the standard.

In that case, the hexadecimal identifier in the field is used to indicate the type of protocol data being carried in the data field of the frame. The network software on the station is responsible for providing any padding data required to ensure that the data field is 46 bytes in length. With this method, there is no conflict or ambiguity about whether the field indicates length or type.
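A minimal sketch of the length/type decision described above (the function name is invented):

    def classify_length_type(value: int) -> str:
        """Interpret the 16-bit length/type field: <= 1,500 means length of the
        LLC data, >= 1,536 (0x600) means EtherType, 1,501-1,535 is undefined."""
        if value <= 1500:
            return f"length field: {value} LLC octets follow"
        if value >= 0x0600:
            return f"type field: EtherType 0x{value:04X}"
        return "undefined (1,501-1,535 intentionally left unused)"

    print(classify_length_type(0x0800))   # -> type field: EtherType 0x0800 (used by IP)
    print(classify_length_type(46))       # -> length field: 46 LLC octets follow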

Data Field

Next comes the data field of the frame, which is also treated differently in the two standards:

DIX standard

In a DIX frame, this field must contain a minimum of 46 bytes of data, and may range up to a maximum of 1,500 bytes of data. The network protocol software is expected to provide at least 46 bytes of data.

IEEE standard

The total size of the data field in an IEEE 802.3 frame is the same as in a DIX frame: a minimum of 46 bytes and a maximum of 1,500. However, a logical link control protocol defined in the IEEE 802.2 LLC standard may ride in the data field of the 802.3 frame to provide control information. The LLC protocol is also used as a way to identify the type of protocol data being carried by the frame if the type/length field is used for length information. The LLC protocol data unit (PDU) is carried in the first set of bytes in the data field of the IEEE frame. The structure of the LLC PDU is defined in the IEEE 802.2 LLC standard.

The process of figuring out which protocol software stack gets the data in an incoming frame is known as demultiplexing. An Ethernet frame may use the type field to identify the high-level protocol data being carried by the frame. In the LLC specification, the receiving station demultiplexes the frame by deciphering the contents of the logical link control protocol data unit.

FCS Field

The last field in both the DIX and IEEE frames is the frame check sequence (FCS) field, also called the cyclic redundancy check (CRC). This 32-bit field contains a value that is used to check the integrity of the various bits in the frame fields (not including the preamble/SFD). This value is computed using the CRC, a polynomial that is calculated using the contents of the destination, source, type (or length), and data fields. As the frame is generated by the transmitting station, the CRC value is simultaneously being calculated. The 32 bits of the CRC value that are the result of this calculation are placed in the FCS field as the frame is sent. The x^31 coefficient of the CRC polynomial is sent as the first bit of the field, and the x^0 coefficient as the last.

The CRC is calculated again by the interface in the receiving station as the frame is read in. The result of this second calculation is compared with the value sent in the FCS field by the originating station. If the two values are identical, then the receiving station is provided with a high level of assurance that no errors have occurred during transmission over the Ethernet channel. If the values are not identical, then the interface can discard the frame and increment the frame error counter.
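As a sketch of the FCS computation, assuming Python's zlib.crc32 (which uses the same CRC-32 polynomial as the Ethernet FCS; the exact on-the-wire bit ordering is glossed over here):

    import zlib

    def append_fcs(frame_without_fcs: bytes) -> bytes:
        """Compute the 32-bit CRC over destination, source, length/type and data
        (preamble/SFD excluded) and append it as the FCS. The value is packed
        least significant byte first, the form commonly used in software."""
        fcs = zlib.crc32(frame_without_fcs) & 0xFFFFFFFF
        return frame_without_fcs + fcs.to_bytes(4, "little")

    def fcs_ok(full_frame: bytes) -> bool:
        """Receiver side: recompute the CRC and compare it with the FCS field."""
        body, fcs = full_frame[:-4], full_frame[-4:]
        return zlib.crc32(body) & 0xFFFFFFFF == int.from_bytes(fcs, "little")

    header = bytes.fromhex("0180C2000001F02E156C779B8808")   # dst + src + type
    frame = append_fcs(header + bytes(46))                    # 46-byte zero data field
    print(len(frame), fcs_ok(frame))                          # 64 True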

End of Frame Detection

The presence of a signal on the Ethernet channel is known as carrier. The transmitting interface stops sending data after the last bit of a frame is transmitted, which causes the Ethernet channel to become idle. In the original 10 Mb/s system, the loss of carrier when the channel goes idle signals to the receiving interface that the frame has ended. When the interface detects loss of carrier, it knows that the frame transmission has come to an end. The higher-speed Ethernet systems use more complex signal encoding schemes, which have special symbols available for signaling to the interface the start and end of a frame.

A basic frame carrying a maximum data field of 1,500 bytes is actually 1,518 bytes in length (not including the preamble) when the 18 bytes needed for the addresses, length/type field, and the frame check sequence are included. The addition of a further 482 bytes for envelope frames makes the maximum frame size become 2,000 bytes. This was chosen as a useful maximum frame size that could be handled by a typical Ethernet implementation in an interface or switch port, while providing enough room for current and future prefixes and suffixes.

 

Auto-Negotiation for fiber optic media segments turned out to be sufficiently difficult to achieve that most Ethernet fiber optic segments do not support Auto-Negotiation. During the development of the Auto-Negotiation standard, attempts were made to develop a system of Auto-Negotiation signaling that would work on the 10BASE-FL and 100BASE-FX fiber optic media systems.

However, these two media systems use different wavelengths of light and different signal timing, and it was not possible to come up with an Auto-Negotiation signaling standard that would work on both. That’s why there is no IEEE standard Auto-Negotiation support for these fiber optic link segments. The same issues apply to 10 Gigabit Ethernet segments, so there is no Auto-Negotiation system for fiber optic 10 Gigabit Ethernet media segments either.

The 1000BASE-X Gigabit Ethernet standard, on the other hand, uses identical signal encoding on the three media systems defined in 1000BASE-X. This made it possible to develop an Auto-Negotiation system for the 1000BASE-X media types, as defined in Clause 37 of the IEEE 802.3 standard.

This lack of Auto-Negotiation on most fiber optic segments is not a major problem, given that Auto-Negotiation is not as useful on fiber optic segments as it is on twisted-pair desktop connections. For one thing, fiber optic segments are most often used as network backbone links, where the longer segment lengths supported by fiber optic media are most effective. Compared to the number of desktop connections, there are far fewer backbone links in most networks. Further, an installer working on the backbone of the network can be expected to know which fiber optic media type is being connected and how it should be configured.

Carrier Ethernet: A Formal Definition

The MEF (Metro Ethernet Forum) has defined Carrier Ethernet as the “ubiquitous, standardized, Carrier-class service defined by five attributes that distinguish Carrier Ethernet from the familiar LAN based Ethernet.” As depicted in the figure, these five attributes, in no particular order, are:

1. Standardized services  

•E-Line and E-LAN provide transparent private line, virtual private line and LAN services
•A ubiquitous service provided globally and locally via standardized equipment
•Requires no changes to customer LAN equipment or networks, and accommodates existing network connectivity such as time-sensitive TDM traffic and signaling
•Ideally suited to converged voice, video & data networks
•Wide choice and granularity of bandwidth and quality of service options

  2. Scalability

•The ability for millions to use a network service that is ideal for the widest variety of business, information, communications and entertainment applications with voice, video and data
•Spans Access & Metro to National & Global Services over a wide variety of physical infrastructures implemented by a wide range of Service Providers
•Scalability of bandwidth from 1Mbps to 10Gbps and beyond, in granular increments

 

 

 

 

3. Reliability

•The ability for the network to detect & recover from incidents without impacting users
•Meeting the most demanding quality and availability requirements
•Rapid recovery time when problems do occur, as low as 50ms

4. Quality of Service (QoS)

•Wide choice and granularity of bandwidth and quality of service options
•Service Level Agreements (SLAs) that deliver end-to-end performance matching the requirements for voice, video and data over converged business and residential networks
•Provisioning via SLAs  that provide end-to-end performance based on CIR, frame loss, delay and delay variation characteristics

5. Service management

•The ability to monitor, diagnose and centrally manage the network, using standards-based vendor independent implementations
•Carrier-class OAM
•Rapid service provisioning

 

What is Carrier Ethernet?

Carrier Ethernet essentially augments traditional Ethernet, optimized for LAN deployment, with Carrier-class capabilities that make it optimal for deployment in Service Provider Access/Metro Area Networks and beyond, to the Wide Area Network. Conversely, from an end-user (enterprise) standpoint, Carrier Ethernet is a service that not only provides a standard Ethernet (or, for that matter, a standardized non-Ethernet) hand-off, but also provides the robustness, deterministic performance, management, and flexibility expected of Carrier-class services.

Carrier Ethernet Architecture

 

Data moves from UNI to UNI across “the network” with a layered architecture.

When traffic moves between ETH domains, it does so at the TRAN layer. This allows Carrier Ethernet traffic to be agnostic to the networks that it traverses.


 


MEF Carrier Ethernet Terminology

•The User Network Interface (UNI)
–The UNI is always provided by the Service Provider
–The UNI in a Carrier Ethernet Network is a physical Ethernet Interface at operating speeds of 10Mbps, 100Mbps, 1Gbps or 10Gbps
•Ethernet Virtual Connection (EVC)
–Service container
–Connects two or more subscriber sites (UNIs)
–An association of two or more UNIs
–Prevents data transfer between sites that are not part of the same EVC
–Three types of EVCs
•Point-to-Point
•Multipoint-to-Multipoint
•Rooted Multipoint
–Can be bundled or multiplexed on the same UNI
–Defined in MEF 10.2 technical specification
Carrier Ethernet Terminology
•UNI Type I
–A UNI compliant with MEF 13
–Manually Configurable
•UNI Type II
–Supports E-Tree
–Support service OAM, link protection
–Automatically Configurable via E-LMI
–Manageable via OAM
•Network to Network Interface (NNI)
–Network to Network Interface between distinct MENs operated by one or more carriers
–An active project of the MEF
•Metro Ethernet Network (MEN)
–An Ethernet transport network connecting user end-points
(Expanded to Access and Global networks in addition to the original Metro Network meaning)

Carrier Ethernet Service Types

Services Using E-Line Service Type

Ethernet Private Line (EPL)

•Replaces a TDM Private line
•Port-based service with single service (EVC) across dedicated UNIs providing site-to-site connectivity
•Typically delivered over SDH (Ethernet over SDH)
•Most popular Ethernet service due to its simplicity

Ethernet Virtual Private Line (EVPL)

•Replaces Frame Relay or ATM L2 VPN services
–To deliver higher bandwidth, end-to-end services
•Enables multiple services (EVCs) to be delivered over a single physical connection (UNI) to customer premises
•Supports “hub & spoke” connectivity via Service Multiplexed UNI at hub site
–Similar to Frame Relay or Private Line hub and spoke deployments
Services Using E-LAN Service Type
•EP-LAN: Each UNI dedicated to the EP-LAN service. Example use is Transparent LAN
•EVP-LAN: Service Multiplexing allowed at each UNI. Example use is Internet access and corporate VPN via one UNI

Services Using E-Tree Service Type

EP-Tree and EVP-Tree: Both allow root-to-root and root-to-leaf communication, but not leaf-to-leaf communication.

•EP-Tree requires dedication of the UNIs to the single EP-Tree service
•EVP-Tree allows each UNI to support multiple simultaneous services, at the cost of more complex configuration than EP-Tree

APPLICATION OF CARRIER ETHERNET

 

 

The Standardization of Services: Approved MEF Specifications

•MEF 2   Requirements and Framework for Ethernet Service Protection
•MEF 3  Circuit Emulation Service Definitions, Framework and Requirements in Metro Ethernet Networks
•MEF 4   Metro Ethernet Network Architecture Framework, Part 1: Generic Framework
•MEF 6  Metro Ethernet Services Definitions Phase I
•MEF 7   EMS-NMS Information Model
•MEF 8  Implementation Agreement for the Emulation of PDH Circuits over Metro Ethernet Networks
•MEF 9   Abstract Test Suite for Ethernet Services at the UNI
•MEF 10   Ethernet Services Attributes Phase I
•MEF 11   User Network Interface (UNI) Requirements and Framework
•MEF 12  Metro Ethernet Network Architecture Framework, Part 2: Ethernet Services Layer
•MEF 13   User Network Interface (UNI) Type 1 Implementation Agreement
•MEF 14   Abstract Test Suite for Traffic Management Phase 1
•MEF 15  Requirements for Management of Metro Ethernet Phase 1 Network Elements
•MEF 16   Ethernet Local Management Interface

How the MEF Specifications Enable Carrier Ethernet

When Ethernet was developed, it was recognized that the use of repeaters to connect segments to form a larger network would result in pulse regeneration delays that could adversely affect the probability of collisions. Thus, a limit was required on the number of repeaters that could be used to connect segments together. This limit in turn limited the number of segments that could be interconnected. A further limitation involved the number of populated segments that could be joined together, because stations on populated segments generate traffic that can cause collisions, whereas non-populated segments are more suitable for extending the length of a network of interconnected segments. A result of the preceding was the ‘‘5-4-3 rule.’’ That rule specifies that a maximum of five Ethernet segments can be joined through the use of a maximum of four repeaters. In actuality, this part of the Ethernet rule really means that no two communicating Ethernet nodes can be more than four repeaters away from one another. Finally, the ‘‘three’’ in the rule denotes the maximum number of Ethernet segments that can be populated. The figure illustrates an example of the 5-4-3 rule for the original bus-based Ethernet.

The optical time-domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber and to test the integrity of fiber optic cables. An OTDR is the optical equivalent of an electronic time-domain reflectometer. It injects a series of optical pulses into the fiber under test. It also extracts, from the same end of the fiber, light that is scattered (Rayleigh backscatter) or reflected back from points along the fiber. The strength of the return pulses is measured and integrated as a function of time, and plotted as a function of fiber length.

Using an OTDR, we can:

1. Measure the distance to a fusion splice, mechanical splice, connector, or significant bend in the fiber.

2. Measure the loss across a fusion splice, mechanical splice, connector, or significant bend in the fiber.

3. Measure the intrinsic loss due to mode-field diameter variations between two pieces of single-mode optical fiber connected by a splice or connector.

4. Determine the relative amount of offset and bending loss at a splice or connector joining two single-mode fibers.

5. Determine the physical offset at a splice or connector joining two pieces of single-mode fiber, when bending loss is insignificant.

6. Measure the optical return loss of discrete components, such as mechanical splices and connectors.

7. Measure the integrated return loss of a complete fiber-optic system.

8. Measure a fiber’s linearity, monitoring for such things as local mode-field pinch-off.

9. Measure the fiber slope, or fiber attenuation (typically expressed in dB/km).

10. Measure the link loss, or end-to-end loss of the fiber network.

11. Measure the relative numerical apertures of two fibers.

12. Make rudimentary measurements of a fiber’s chromatic dispersion.

13. Measure polarization mode dispersion.

14. Estimate the impact of reflections on transmitters and receivers in a fiber-optic system.

15. Provide active monitoring on live fiber-optic systems.

16. Compare previously installed waveforms to current traces.

Chromatic dispersion affects all optical transmissions to some degree. These effects become more pronounced as the transmission rate increases and fiber length increases.

Factors contributing to increasing chromatic dispersion signal distortion include the following:

1. Laser spectral width, modulation method, and frequency chirp. Lasers with wider spectral widths and chirp have shorter dispersion limits. It is important to refer to manufacturer specifications to determine the total amount of dispersion that can be tolerated by the lightwave equipment.

2. The wavelength of the optical signal. Chromatic dispersion varies with wavelength in a fiber. In a standard non-dispersion shifted fiber (NDSF G.652), chromatic dispersion is near or at zero at 1310 nm. It increases positively with increasing wavelength and increases negatively for wavelengths less than 1310 nm.

3. The optical bit rate of the transmission laser. The higher the fiber bit rate, the greater the signal distortion effect.
4. The chromatic dispersion characteristics of fiber used in the link. Different types of fiber have different dispersion characteristics.
5. The total fiber link length, since the effect is cumulative along the length of the fiber.
6. Any other devices in the link that can change the link’s total chromatic dispersion including chromatic dispersion compensation modules.
7. Temperature changes of the fiber or fiber cable can cause small changes to chromatic dispersion. Refer to the manufacturer’s fiber cable specifications for values.

Methods to Combat Link Chromatic Dispersion

1. Change the equipment laser with a laser that has a specified longer dispersion limit. This is typically a laser with a narrower spectral width or a laser that has some form of precompensation. As laser spectral width decreases, the chromatic dispersion limit increases.
2. For new construction, deploy NZ-DSF instead of SSMF fiber. NZ-DSF has a lower chromatic dispersion specification.
3. Insert chromatic dispersion compensation modules (DCM) into the fiber link to compensate for the excessive dispersion (a rough budget estimate is sketched after this list). The optical loss of the DCM must be added to the link optical loss budget and optical amplifiers may be required to compensate.
4. Deploy a 3R optical repeater (re-amplify, reshape, and retime the signal) once a link reaches chromatic dispersion equipment limit.
5. For long haul undersea fiber deployment, splicing in alternating lengths of dispersion compensating fiber can be considered.
6. To reduce chromatic dispersion variance due to temperature, buried cable is preferred over exposed aerial cable.
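To make the dispersion-budget reasoning behind methods 3 and 4 concrete, here is a rough sketch; the fiber coefficient, DCM value and equipment limit below are illustrative only and must be taken from vendor specifications in practice:

    def accumulated_dispersion_ps_per_nm(d_ps_nm_km: float, length_km: float) -> float:
        """Accumulated chromatic dispersion of a span: coefficient times length."""
        return d_ps_nm_km * length_km

    # ~17 ps/(nm*km) is a typical value for G.652 fiber at 1550 nm.
    span = accumulated_dispersion_ps_per_nm(17.0, 80.0)   # 1360 ps/nm over 80 km
    dcm = -1100.0                                          # hypothetical DCM value
    residual = span + dcm                                  # 260 ps/nm left over

    equipment_limit_ps_per_nm = 800.0                      # hypothetical transceiver limit
    print(span, residual, abs(residual) <= equipment_limit_ps_per_nm)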

The maintenance signals defined in [ITU-T G.709] provide network connection status information in the form of payload missing indication (PMI), backward error and defect indication (BEI, BDI), open connection indication (OCI), and link and tandem connection status information in the form of locked indication (LCK) and alarm indication signal (FDI, AIS).

 

 

 

 

Interaction diagrams are collected from ITU-T G.798 and the OTN application note from IpLight.

“In the analog world the standard test message is the sine wave, followed by the two-tone signal for more rigorous tests. The property being optimized is generally the signal-to-noise ratio (SNR). Speech is interesting, but does not lend itself easily to mathematical analysis or measurement.

In the digital world a binary sequence with a known pattern of ‘1’ and ‘0’ is common. It is more common to measure bit error rate (BER) than SNR, and this is simplified by the fact that known binary sequences are easy to generate and reproduce. A common sequence is the pseudo-random binary sequence.”

**********************************************************************************************************************************************************

“A PRBS (Pseudo Random Binary Sequence) is a binary PN (Pseudo-Noise) signal. The sequence of binary 1’s and 0’s exhibits certain randomness and auto-correlation properties. Bit sequences like PRBS are used for testing transmission lines and transmission equipment because of their randomness properties. Simple bit sequences are used to test the DC compatibility of transmission lines and transmission equipment.”

**********************************************************************************************************************************************************

A pseudo-random bit sequence (PRBS) is used to simulate random data for transmission across the link. The different types of PRBS and the suggested data rates for the different PRBS types are described in ITU-T Recommendations O.150, O.151, O.152 and O.153. In order to properly simulate real traffic, the PRBS pattern length can range from 2^9 − 1 to 2^31 − 1 bits. Typically, for higher-bit-rate devices, a longer PRBS pattern is preferable so that the device under test is effectively stressed.

**********************************************************************************************************************************************************

Bit-error measurements are an important means of assessing the performance of digital transmission. It is necessary to specify reproducible test sequences that simulate real traffic as closely as possible. Reproducible test sequences are also a prerequisite to perform end-to-end measurements. Pseudo-random bit sequences (PRBS) with lengths of 2^n − 1 bits are the most common solution to this problem.

PRBS bit patterns are generated in a linear feedback shift register: the output values of specific flip-flops (the taps) are XORed together and fed back to the input of the first flip-flop. The resulting maximal-length sequence repeats every 2^X − 1 bits, where X is the PRBS shift register length.

Example: PRBS generation of the 2^9 − 1 sequence:
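Since the original figure is not reproduced here, the sketch below generates the 2^9 − 1 sequence with a Fibonacci linear feedback shift register, assuming the ITU-T O.150 PRBS9 polynomial x^9 + x^5 + 1:

    def prbs9(n_bits: int, seed: int = 0x1FF) -> list[int]:
        """Generate n_bits of the 2^9 - 1 PRBS using taps at stages 9 and 5.
        The seed is the 9-bit register contents and must be non-zero."""
        state = seed & 0x1FF
        out = []
        for _ in range(n_bits):
            fb = ((state >> 8) ^ (state >> 4)) & 1    # XOR of stages 9 and 5
            out.append(fb)                            # feedback bit used as output
            state = ((state << 1) | fb) & 0x1FF       # shift and feed back
        return out

    seq = prbs9(2 * 511)
    assert seq[:511] == seq[511:]   # the pattern repeats every 2^9 - 1 = 511 bits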

 


 

Note: PRBS of order 31 (PRBS31) is the inverted bit stream generated by the polynomial G(x) = 1 + x^28 + x^31.

The advantage of using a PRBS pattern for BER testing is that it is a deterministic signal with properties similar to those of a random signal for the link, i.e., similar to white noise.

Bit error counting

A mask of the bit errors in the stream can be created by XORing the received bytes, after coalescing them, with the locally generated PRBS31 pattern. Counting the number of bits set in this mask in order to calculate the BER then comes down to a population count, as sketched below.
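A minimal sketch of this error-mask-and-count approach (invented function names, operating on whole bytes):

    def count_bit_errors(received: bytes, expected: bytes) -> int:
        """XOR the received bytes with the locally generated reference pattern
        to form an error mask, then population-count the mask."""
        return sum(bin(r ^ e).count("1") for r, e in zip(received, expected))

    def bit_error_ratio(received: bytes, expected: bytes) -> float:
        return count_bit_errors(received, expected) / (len(received) * 8)

    rx = bytes([0b10110010, 0b00001111])
    ref = bytes([0b10110110, 0b00001111])
    print(count_bit_errors(rx, ref), bit_error_ratio(rx, ref))   # 1 error, BER = 0.0625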

 

Typical links are designed for BERs better than 10^-12.

The Bit Error Ratio (BER) is often specified as a performance parameter of a transmission system, which needs to be verified during investigation. Designing an experiment to demonstrate adequate BER performance is not, however, as straightforward as it appears since the number of errors detected over a practical measurement time is generally small. It is, therefore, not sufficient to quote the BER as simply the ratio of the number of errors divided by the number of bits transmitted during the measurement period, instead some knowledge of the statistical nature of the error distribution must first be assumed.

The bit error rate (BER) is the most significant performance parameter of any digital communications system. It is a measure of the probability that any given bit will have been received in error. For example, a standard maximum bit error rate specified for many systems is 10^-9. This means that the receiver is allowed to generate a maximum of 1 error in every 10^9 bits of information transmitted or, putting it another way, the probability that any received bit is in error is 10^-9.

 The BER depends primarily on the signal to noise ratio (SNR) of the received signal which in turn is determined by the transmitted signal power, the attenuation of the link, the link dispersion and the receiver noise. The S/N ratio is generally quoted for analog links while the bit-error-rate (BER) is used for digital links. BER is practically an inverse function of S/N. There must be a minimum power at the receiver to provide an acceptable S/N or BER. As the power increases, the BER or S/N improves until the signal becomes so high it overloads the receiver and receiver performance degrades rapidly.

The formula used to calculate residual BER assumes a Gaussian error distribution:

C = 1 – e^(–n·b)

C = degree of confidence required (e.g. 0.95 = 95% confidence)

n = number of bits examined with no error found

b = upper bound on the BER with confidence C (e.g. b = 10^–15)

To determine the length of time, that is, the number of bits that must be examined (at a given bit rate), the above equation is transposed:

n = –ln(1 – C)/b

 

So, to test for a residual BER of 10^–13 with a 95% confidence limit requires a test pattern of about 3 × 10^13 bits. This equates to only 0.72 hours using an OC-192c/STM-64c payload, rather than 55.6 hours using an STS-3c/VC-4 bulk-filled payload (149.76 Mb/s). The graph in the figure plots test time versus residual BER and shows the difference in test time for OC-192c/STM-64c payloads versus an OC-48c/STM-16c payload. The graphs are plotted for different confidence limits, and they clearly indicate that the payload capacity, not the confidence limit, is the dominant factor in improving the test time. Table 1 shows the exact test times for each BER threshold and confidence limit.

 

Collected from: OmniBER product note.
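
The numbers above are easy to reproduce; here is a small sketch of the calculation (the 149.76 Mb/s payload rate is the one quoted above, the rest is plain math):

    import math

    def bits_required(confidence, ber_bound):
        """Number of error-free bits needed to assert BER < ber_bound
        with the given confidence: n = -ln(1 - C) / b."""
        return -math.log(1.0 - confidence) / ber_bound

    def test_time_hours(confidence, ber_bound, payload_rate_bps):
        return bits_required(confidence, ber_bound) / payload_rate_bps / 3600.0

    # Example: 95% confidence that the residual BER is better than 1e-13
    n = bits_required(0.95, 1e-13)
    print(f"{n:.2e} bits")                               # ~3.0e13 bits
    # At an STS-3c/VC-4 bulk-filled payload of 149.76 Mb/s this is ~55.6 hours
    print(f"{test_time_hours(0.95, 1e-13, 149.76e6):.1f} h")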

FEC codes in optical communications are based on a class of codes known as Reed-Solomon codes.

A Reed-Solomon code is specified as RS(n, k), which means that the encoder takes k data bytes and adds n − k parity bytes to make an n-byte codeword. A Reed-Solomon decoder can correct up to t byte errors in the codeword, where 2t = n − k.

 

ITU-T Recommendation G.975 proposes a Reed-Solomon (255, 239) code. In this case 16 extra bytes are appended to 239 information-bearing bytes. The bit-rate increase is about 7% [(255 − 239)/239 ≈ 0.067], the code can correct up to 8 byte errors [(255 − 239)/2 = 8], and the coding gain can be demonstrated to be about 6 dB.

The same Reed-Solomon coding (RS(255, 239)) is recommended in ITU-T G.709. The coding overhead is again about 7% for a 6 dB coding gain. Both G.975 and G.709 improve the efficiency of the Reed-Solomon code by interleaving data from different codewords. The interleaving technique is an advantage for burst errors, because the errors can be shared across many different codewords. In the interleaving approach lies the main difference between G.709 and G.975: the G.709 interleaving approach is fully standardized, while the G.975 one is not.

The actual G.975 data overhead also includes one bit of framing overhead, therefore the bit-rate expansion is [(255 − 238)/238 ≈ 0.071]. In G.709 the frame overhead is higher than in G.975, hence an even higher bit-rate expansion. One byte error occurs whether 1 bit in a byte is wrong or all the bits in a byte are wrong. Example: RS(255, 239) can correct 8 byte errors. In the worst case, 8 bit errors may occur, each in a separate byte, so that the decoder corrects only 8 bit errors. In the best case, 8 complete byte errors occur, so that the decoder corrects 8 × 8 = 64 bit errors.
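
A quick sanity check of those figures, as a small sketch:

    # Sanity check of the RS(255, 239) figures quoted above.
    n, k = 255, 239
    t = (n - k) // 2                      # correctable byte errors per codeword
    overhead_g709 = (n - k) / k           # redundancy relative to the payload
    overhead_g975 = (255 - 238) / 238     # G.975 counts one extra framing bit
    print(t)                              # 8
    print(f"{overhead_g709:.3f}")         # 0.067 -> ~7% rate expansion
    print(f"{overhead_g975:.3f}")         # 0.071
    print(t, "to", t * 8, "correctable bit errors per codeword")  # 8 to 64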

There are other, more powerful and complex RS variants (for example, concatenating two RS codes) capable of a coding gain 2 or 3 dB higher than the ITU-T FEC codes, but at the expense of an increased bit rate (sometimes as much as 25%).

For the OTN frame, the RS(n, k) rate calculation is as follows:

OPU1 payload rate = 2.488 Gbps (OC-48/STM-16)

Add the 16 bytes per row of OPU1 and ODU1 overhead:

3808/16 = 238, (3808 + 16)/16 = 239

ODU1 rate = 2.488 × 239/238** ≈ 2.499 Gbps

Add the FEC:

OTU1 rate = ODU1 rate × 255/239 = 2.488 × 239/238 × 255/239
= 2.488 × 255/238 ≈ 2.667 Gbps

NOTE: 4080/16 = 255

**The multiplicative factor is just simple math: e.g. for ODU1/OPU1 it is 3824/3808 = (239 × 16)/(238 × 16) = 239/238. The value of the multiplicative factor gives the growth of the frame size after the header/overhead is added.

Because Reed-Solomon (255, 239) is used, each 4080-byte OTU row is divided into sixteen interleaved codewords (the forward error correction for the OTUk uses 16-byte interleaved Reed-Solomon RS(255, 239) codes; the RS(255, 239) code operates on byte symbols). Hence 4080/16 = 255.
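
The same arithmetic as a short sketch (the 2.488 Gbps client rate and the 239/238 and 255/239 factors are the ones derived above):

    # OTN rate arithmetic from the frame-size ratios above.
    opu1_payload_gbps = 2.488  # OC-48 / STM-16 client

    odu1_gbps = opu1_payload_gbps * 239 / 238   # add OPU1/ODU1 overhead columns
    otu1_gbps = odu1_gbps * 255 / 239           # add the RS(255, 239) FEC columns

    print(f"ODU1 ~ {odu1_gbps:.2f} Gbps")       # ~2.50 Gbps
    print(f"OTU1 ~ {otu1_gbps:.2f} Gbps")       # ~2.67 Gbps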

Transparency here means transmission over the network without altering the original properties of the client signal.

G.709 defines the OPUk, which can contain an entire SDH signal. This means that one can transport four STM-16 signals in one OTU2 without modifying any of the SDH overhead.

Thus the transport of such client signals in the OTN is bit-transparent (i.e. the integrity of the whole client signal is maintained).

OTN is also timing transparent. The asynchronous mapping mode transfers the input timing (asynchronous mapping client) to the far end (asynchronous de-mapping client).

OTN is also delay transparent. For example, if four STM-16 signals are mapped into ODU1s and then multiplexed into an ODU2, their timing relationship is preserved until they are de-mapped back to ODU1s.

Tandem Connection Monitoring (TCM)

A tandem system is also known as a cascaded system.

SDH monitoring is divided into section and path monitoring. A problem arises when you have a “carrier’s carrier” situation where it is required to monitor a segment of the path that passes through another carrier’s network.

 

Tandem Connection Monitoring

Here Operator A needs Operator B to carry its signal; however, it also needs a way of monitoring the signal as it passes through Operator B’s network. This is what a “tandem connection” is: a layer between line monitoring and path monitoring. SDH was modified to allow a single tandem connection; ITU-T Rec. G.709 allows six.

TCM1 is used by the user to monitor the quality of service (QoS) that they see. TCM2 is used by the first operator to monitor their end-to-end QoS. TCM3 is used by the various domains for intra-domain monitoring, and TCM4 is used by Operator B for protection monitoring.

There is no standard for which TCM level is used by whom. The operators have to come to an agreement so that they do not conflict.

TCM’s also support monitoring of ODUk connections for one or more of the following network applications (refer to ITU-T Rec. G.805 and ITU-T Rec. G.872):

–          optical UNI to UNI tandem connection monitoring ; monitoring the ODUk connection through the public transport network (from public network ingress network termination to egress network termination)

–          optical NNI to NNI tandem connection monitoring; monitoring the ODUk connection through the network of a network operator (from operator network ingress network termination to egress network termination)

–          sub-layer monitoring for linear 1+1, 1:1 and 1:n optical channel sub-network connection protection switching, to determine the signal fail and signal degrade conditions

–          sub-layer monitoring for optical channel shared protection ring (SPRING) protection switching, to determine the signal fail and signal degrade conditions

–          Monitoring an optical channel tandem connection for the purpose of detecting a signal fail or signal degrade condition in a switched optical channel connection, to initiate automatic restoration of the connection during fault and error conditions in the network

–          Monitoring an optical channel tandem connection for, e.g., fault localization or verification of delivered quality of service

A TCM field is assigned to a monitored connection. The number of monitored connections along an ODUk trail may vary between 0 and 6. Monitored connections can be nested, overlapping and/or cascaded.

 

ODUk monitored connections

Monitored connections A1-A2/B1-B2/C1-C2 and A1-A2/B3-B4 are nested, while monitored connections B1-B2/B3-B4 are cascaded.

Overlapping monitored connections are also supported.

 

Overlapping ODUk monitored connections

Channel Coding: A Walkthrough

This article is just a quick revision of channel coding concepts.

Channel coding is the process that transforms binary data bits into signal elements that can cross the transmission medium. In the simplest case, on a metallic wire, a binary 0 is represented by a lower voltage and a binary 1 by a higher voltage. However, before selecting a coding scheme it is necessary to identify some of the strengths and weaknesses of line codes:

  • High-frequency components are not desirable because they require more channel bandwidth, suffer more attenuation, and generate crosstalk in electrical links.
  • Direct current (dc) components should be avoided because they require physical coupling of transmission elements. Since the earth/ground potential usually varies between remote communication ends, dc provokes unwanted earth-return loops.
  • The use of alternating current (ac) signals permits a desirable physical isolation using capacitors and transformers.
  • Timing control permits the receiver to correctly identify each bit in the transmitted message. In synchronous transmission, the timing is referenced to the transmitter clock, which can be sent as a separate clock signal or embedded into the line code. If the second option is used, the receiver can extract its clock from the incoming data stream, thereby avoiding the installation of an additional line.

 

Figure 1.1: Line encoding technologies. AMI and HDB3 are usual in electrical signals, while CMI is often used in optical signals.

In order to meet these requirements, line coding is needed before the signal is transmitted, along with the corresponding decoding process at the receiving end. There are a number of different line codes that apply to digital transmission; the most widely used are alternate mark inversion (AMI), high-density bipolar three zeros (HDB3), and coded mark inverted (CMI).

 Nonreturn to zero 

Nonreturn to zero (NRZ) is a simple method consisting of assigning the bit “1” to the positive value of the signal amplitude (voltage), and the bit “0” to the negative value (see Figure 1.1). There are two serious disadvantages to this:

No timing information is included in the signal, which means that synchronism can easily be lost if, for instance, a long sequence of zeros is being received.

The spectrum of the signal includes a dc component.

Alternate mark inversion

Alternate mark inversion (AMI) is a transmission code, also known as pseudo-ternary, in which a “0” bit is transmitted as a null voltage and the “1” bits are represented alternately as positive and negative voltages. The digital signal coded in AMI is characterized as follows (see Figure 1.1):

            • The dc component of its spectrum is null.
            • It does not solve the problem of loss of synchronization with long sequences of zeros.

Bit eight-zero suppression

Bit eight-zero suppression (B8ZS) is a line code in which bipolar violations are deliberately inserted if the user data contains a string of eight or more consecutive zeros. The objective is to ensure a sufficient number of transitions to maintain synchronization when the user data stream contains a large number of consecutive zeros (see Figure 1.1 and Figure 1.2).

The coding has the following characteristics:

  • The timing information is preserved by embedding it in the line signal, even when long sequences of zeros are transmitted, which allows the clock to be recovered properly on reception.
  • The dc component of a signal that is coded in B8ZS is null.

 

Figure 1.2     B8ZS and HDB3 coding. Bipolar violations: V+ is a positive-level violation and V− a negative-level violation.

High-density bipolar three zeroes

High-density bipolar three zeroes (HDB3) is similar to B8ZS, but limits the maximum number of transmitted consecutive zeros to three (see Figure 1.5). The basic idea consists of replacing a series of four bits equal to “0” with the code word “000V” or “B00V,” where “V” is a pulse that violates the AMI law of alternate polarity, and “B” is a balancing pulse; a short encoder sketch follows the list below.

  • “B00V” is used when, up to the previous pulse, the coded signal presents a dc component that is not null (the number of positive pulses is not compensated by the number of negative pulses).
  • “000V” is used under the same conditions as above, when, up to the previous pulse, the dc component is null (see Figure 1.6).
  • The pulse “B” (for balancing) respects the AMI alternation rule and has positive or negative polarity, ensuring that two consecutive “V” pulses will have different polarities.
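
Here is the minimal HDB3 encoder sketch referred to above; the AMI alternation is the inner rule, and the B00V/000V substitution follows the bullets above (the initial polarity and the starting value of the mark counter are convention choices assumed here):

    def encode_hdb3(bits, first_pulse=+1):
        """Sketch of an HDB3 encoder. Assumed conventions: the first mark is
        +first_pulse and the counter of marks since the last violation starts even."""
        out = []
        last_pulse = -first_pulse          # so the first '1' becomes +first_pulse
        marks_since_violation = 0
        i = 0
        while i < len(bits):
            if bits[i] == 1:
                last_pulse = -last_pulse           # AMI rule: alternate polarity
                out.append(last_pulse)
                marks_since_violation += 1
                i += 1
            elif bits[i:i + 4] == [0, 0, 0, 0]:
                if marks_since_violation % 2 == 1:
                    # odd number of marks since the last violation -> 000V
                    out.extend([0, 0, 0, last_pulse])   # V repeats the last polarity
                else:
                    # even number -> B00V: B obeys AMI, V repeats B's polarity
                    b = -last_pulse
                    out.extend([b, 0, 0, b])
                    last_pulse = b
                marks_since_violation = 0
                i += 4
            else:
                out.append(0)
                i += 1
        return out

    # Example: eight consecutive zeros trigger two substitutions.
    print(encode_hdb3([1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]))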

Coded mark inverted

The coded mark inverted (CMI) code, also based on AMI, is used instead of HDB3 at high transmission rates, because CMI coding and decoding circuits are simpler than HDB3 circuits at those rates. A “1” is transmitted according to the AMI rule of alternate polarity, as a constant level (low or high) for the full bit period, while a “0” is transmitted with a negative level during the first half of the bit period and a positive level in the second half. The CMI code has the following characteristics (see Figure 1.1):

  • The spectrum of a CMI signal cancels out the components at very low frequencies.
  • It allows for the clock to be recovered properly, like the HDB3 code.
  • The bandwidth is greater than that of the spectrum of the same signal coded in AMI.

Rejuvenating PCM: Pulse Code Modulation

This article covers the very basics of PCM (Pulse Code Modulation), i.e. the foundation of telecom networks.

The pulse code modulation (PCM) technology (see Figure 1.1) was patented and developed in France in 1938, but could not be used because suitable technology was not available until World War II. This came about with the arrival of digital systems in the 1960s, when improving the performance of communications networks became a real possibility. However, this technology was not completely adopted until the mid-1970s, due to the large amount of analog systems already in place and the high cost of digital systems, as semiconductors were very expensive. PCM’s initial goal was that of converting an analog voice telephone channel into a digital one based on the sampling theorem.

 

The sampling theorem states that, for digitization without information loss, the sampling frequency (fs) should be at least twice the maximum frequency component (fmax) of the analog information:

fs ≥ 2 × fmax

The frequency 2·fmax is called the Nyquist sampling rate. The sampling theorem is considered to have been articulated by Nyquist in 1928, and mathematically proven by Shannon in 1949. Some books use the term Nyquist sampling theorem, and others use Shannon sampling theorem. They are in fact the same theorem.

PCM involves three phases: sampling, quantization, and encoding:

In sampling, values are taken from the analog signal every 1/fs seconds (the sampling period).

 

Quantization assigns these samples a value by approximation, in accordance with a quantization curve (e.g., the ITU-T A-law).

Encoding provides the binary value of each quantified sample.
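
A toy sketch of the three phases; it uses plain uniform quantization instead of the ITU-T A-law, and the 8 kHz / 8-bit figures are the usual telephony assumptions rather than anything mandated by the text above:

    import math

    fs = 8000          # sampling frequency (Hz), > 2 x 3.4 kHz voice band
    n_bits = 8         # bits per sample
    levels = 2 ** n_bits

    def pcm_encode(signal_fn, n_samples):
        """Toy PCM: sample an analog function, quantize uniformly, encode to binary.
        Real telephony PCM uses the A-law (or mu-law) companding curve instead."""
        codes = []
        for k in range(n_samples):
            sample = signal_fn(k / fs)                      # 1. sampling, every 1/fs s
            q = round((sample + 1.0) / 2.0 * (levels - 1))  # 2. uniform quantization
            q = max(0, min(levels - 1, q))
            codes.append(format(q, "08b"))                  # 3. encoding to 8 bits
        return codes

    # Example: a 1 kHz tone, first 4 samples
    tone = lambda t: math.sin(2 * math.pi * 1000 * t)
    print(pcm_encode(tone, 4))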

 

If SDH is based on node and signal synchronization, why do fluctuations occur? This is a very common question for an optics beginner.

The answer lies in the practical limitations of synchronization. SDH networks use high-quality clocks feeding network elements. However, we must consider the following:

  • A number of SDH islands use their own reference clocks, which may be nominally identical, but never exactly the same.
  • Cross services carried by two or more operators always generate offset and clock fluctuations whenever a common reference clock is not used.
  • Inside an SDH network, different types of breakdown may occur and cause a temporary loss of synchronization. When a node switches over to a secondary clock reference, it may be different from the original, and it could even be the internal clock of the node.
  • Jitter and wander effects

SDH/SONET: Maintenance and Performance Events

We know SDH/SONET is older technology now, but here is a glimpse of the basic fault-management process for revision:

SDH SONET MAINTENANCE

SDH and SONET transmission systems are robust and reliable; however, they are vulnerable to several effects that may cause malfunction. These effects can be classified as follows:

  • Natural causes: These include thermal noise, always present in regeneration systems; solar radiation; humidity and Rayleigh fading in radio systems; hardware aging; degraded lasers; degradation of electrical connections; and electrostatic discharge.
  • Network design pitfalls: bit errors due to bad synchronization in SDH; timing loops may collapse a transmission network partially, or even completely.
  • Human intervention: This includes fiber cuts, electrostatic discharges, power failure, and topology modifications.

 

Anomalies and defects management. (In regular characters for SDH; in italic for SONET.)

All these may produce changes in performance, and eventually collapse transmission services.

SDH/SONET Events

SDH/SONET events are classified as anomalies, defects, damage (faults), failures, and alarms, depending on how they affect the service:

  • Anomaly: This is the smallest disagreement that can be observed between measured and expected characteristics. It could, for instance, be a bit error. If a single anomaly occurs, the service will not be interrupted. Anomalies are used to monitor performance and detect defects.

  • Defect: A defect level is reached when the density of anomalies is high enough to interrupt a function. Defects are used as input for performance monitoring, to control consequent actions, and to determine fault causes.

  • Damage or fault: This is produced when a function cannot finish a requested action. This situation does not comprise incapacities caused by preventive maintenance.
  • Failure: Here, the fault cause has persisted long enough so that the ability of an item to perform a required function may be terminated. Protection mechanisms can now be activated.
  • Alarm: This is a human-observable indication that draws attention to a failure (detected fault), usually giving an indication of the depth of the damage. For example, a light emitting diode (LED), a siren, or an e-mail.
  • Indication: Here events are notified upstream to the peer layer for performance monitoring and eventually to request an action or a human intervention that can fix the situation.

Errors reflect anomalies, and alarms show defects. Terminology here is often used in a confusing way, in the sense that people may talk about errors but actually refer to anomalies, or use the word, “alarm” to refer to a defect.

 

OAM management. Signals are sent downstream and upstream when events are detected at the LP edge (1, 2); HP edge (3, 4); MS edge (5, 6); and RS edge (7, 8).

In order to support single-ended operation, the defect status and the number of detected bit errors are sent back to the far-end termination by means of indications such as RDI, REI, or RFI.

 Monitoring Events

SDH frames contain a lot of overhead information to monitor and manage events. When events are detected, overhead channels are used to notify peer layers to run network protection procedures or evaluate performance. Messages are also sent to higher layers to indicate the local detection of a service-affecting fault to the far-end terminations.

Defects trigger a sequence of upstream messages using the G1 and V5 bytes. Downstream AIS signals are sent to indicate service unavailability. When defects are detected, upstream indications are sent to register and troubleshoot causes.

 Event Tables

 

PERFORMANCE MONITORING

SDH has performance monitoring capabilities based on bit error monitoring. A bit parity is calculated over all bits of the previous frame, and the result is sent as overhead. The far-end element repeats the calculation and compares it with the received overhead. If the results are equal, there is considered to be no bit error; otherwise, a bit error indication is sent to the peer end.

A defect is understood as any serious or persistent event that holds up the transmission service. SDH defect processing reports and locates failures in either the complete end-to-end circuit (HP-RDI, LP-RDI) or on a specific multiplex section between adjacent SDH nodes (MS-RDI).

Alarm indication signal

An alarm indication signal (AIS) is activated under standardized criteria and sent downstream in a path in the client layer to the next NE to inform it about the event. The AIS will finally arrive at the NE at which that path terminates, where the client layer interfaces with the SDH network.

As an answer to a received AIS, a remote defect indication is sent backwards. An RDI is indicated in a specific byte, while an AIS is a sequence of “1s” in the payload space. The permanent sequence of “1s” tells the receiver that a defect affects the service, and no information can be provided.

 

Depending on which service is affected, the AIS signal adopts several forms:

  • MS-AIS: All bits except for the RSOH are set to the binary value “1.”
  • AU-AIS: All bits of the administrative unit are set to “1” but the RSOH and MSOH maintain their codification.
  • TU-AIS: All bits in the tributary unit are set to “1,” but the unaffected tributaries and the RSOH and MSOH maintain their codification.
  • PDH-AIS: All the bits in the tributary are “1.”

Enhanced remote defect indication 

Enhanced remote defect indication (E-RDI) provides the SDH network with additional information about the defect cause by means of differentiating:

  • Server defects: like AIS and LOP;
  • Connectivity defects: like TIM and UNEQ;
  • Payload defects: like PLM.

Enhanced RDI information is coded in G1 (bits 5-7) or in K4 (bits 5-7), depending on the path.

We often hear that we should implement unidirectional or bidirectional APS in the network. Some of the advantages of each are listed below.

Unidirectional and bidirectional protection switching

Possible advantages of unidirectional protection switching include:

  • Unidirectional protection switching is a simple scheme to implement and does not require a protocol.
  • Unidirectional protection switching can be faster than bidirectional protection switching because it does not require a protocol.
  • Under multiple failure conditions there is a greater chance of restoring traffic by protection switching if unidirectional protection switching is used, than if bidirectional protection switching is used.

Possible advantages of bidirectional protection switching when uniform routing is used include:

  • With bidirectional protection switching operation, the same equipment is used for both directions of transmission after a failure. The number of breaks due to single failures will be less than if each direction of the path were delivered over different equipment.
  • With bidirectional protection switching, if there is a fault in one path of the network, transmission of both paths between the affected nodes is switched to the alternative direction around the network. No traffic is then transmitted over the faulty section of the network, so it can be repaired without further protection switching.
  • Bidirectional protection switching is easier to manage because both directions of transmission use the same equipment along the full length of the trail.
  • Bidirectional protection switching maintains equal delays for both directions of transmission. This may be important where there is a significant imbalance in the length of the trails, e.g. transoceanic links where one trail is via a satellite link and the other via a cable link.
  • Bidirectional protection switching also has the ability to carry extra traffic on the protection path.

The above is extracted from ITU-T G.841.

CDC (colorless, directionless, contentionless) allows operators to future-proof their network so they are able to optimize, scale and flexibly meet any future bandwidth demands.

  • Directionless: for the ability to route a wavelength across any viable path in the network
  • Colorless: for the ability to receive any wavelength on any port.
  • Contentionless: eliminates wavelength blocking, allowing the add/drop of a duplicate wavelength onto a single mux/demux
  • Flexible grid: for the ability to future-proof the network for any higher capacity channel that needs >50GHz spectrum

The CDC solution allows the operator to handle unpredictable A-Z services or temporary bandwidth demands over the full life of the network. Reconfigurations such as wavelength defragmentation and route optimization are also made possible to scale the network for support of more services. CDC also supports the transport of SuperChannels when these become available.

CDC can operate with photonic control plane for increased automation of operations as well as to support automated photonic restoration and other future capabilities.

Gridless networks are the evolution of photonic line systems to improve spectral efficiency and flexibility, i.e. channel grids are no longer required to be centered on ITU wavelengths/frequencies.
Why Do We Need Gridless?
  • Improved spectral efficiency with existing 40G/100G technology.
  • Define a super channel that has multiple sub-channels within it, in order to fit the same channels in a smaller region of spectrum.
  • Support higher line-rate transponders.
  • In order to get the same reach/performance from 400 Gb/s and 1 Tb/s transponders we have no choice but to increase the spectral width of these signals well beyond 50GHz or even 100GHz spacing.

1. Overview

Availability is a probabilistic measure of the length of time a system or network is functioning.

  • Generally calculated as a percentage, e.g. 99.999% (referred to as “five nines” uptime) is carrier-grade availability.
  • A network has high availability when downtime / repair times are minimal.
  • For example, high-availability networks are down for minutes, whereas low-availability networks are down for hours.
  • Unavailability is the percentage of time a system is not functioning (downtime) and is generally expressed in minutes per year.
  • Unavailability = (1 – Availability) × 365 × 24 × 60
  • Unavailability (U) = MTTR/MTBF
  • The unavailability of a 99.999% available system is 5.3 minutes per year.
  • Availability is generally derived from either failure rates or mean time between failures (MTBF).
  • Availability calculations always assume a bi-directional system.
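
The conversions listed above are easy to reproduce; a small sketch (the MTBF/MTTR figures are purely illustrative):

    def downtime_minutes_per_year(availability):
        """Unavailability expressed as minutes of downtime per year."""
        return (1.0 - availability) * 365 * 24 * 60

    def availability_from_mtbf(mtbf_hours, mttr_hours):
        """Availability from MTBF/MTTR; for MTTR << MTBF this matches U ~ MTTR/MTBF."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    print(f"{downtime_minutes_per_year(0.99999):.1f} min/yr")   # ~5.3 (five nines)

    # Illustrative component: 200,000 h MTBF, 4 h MTTR
    a = availability_from_mtbf(200_000, 4)
    print(f"{a:.6f}", f"{downtime_minutes_per_year(a):.1f} min/yr")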

2. Circuit vs. Nodal Availability

Circuit and nodal availability measure different quantities. To help explain this clearly, un-availability (Unavailability = 1 – Availability) will be used in this section.

  • Circuit un-availability is a measure of the average down time of a traffic demand / service.
    • A circuit is un-available only if traffic affecting components that help transport the demand / service have failed.
    • Circuit unavailability is calculated by considering the unavailabilities of components which are traffic affecting and by taking into consideration those components that are hardware protected.
    • For example, the failure of both 10G line cards on an NE can cause a traffic outage.
  • Nodal un-availability is a measure of the average down time of a node.
    • Each time there is a failure in a node, regardless of whether it is traffic affecting or not, an engineer is required to visit the node to fix the failure.
    • Although nodal un-availability is based on calculated failure rates, it is still a direct measure of an operational expenditure.
    • Nodal unavailability is calculated by adding all components of a network element regardless of hardware protection, i.e. in series.
    • For example, failure of a protected switch card is non-traffic affecting but still requires a site visit to be replaced.

3. Terms & Definitions

Failure rate

  • Failure rate is usually measured as Failures in Time (FIT), where one FIT equals a single failure in one billion (10^9) hours of operation.
  •  FITs are calculated according to industry standard (Telcordia SR 332).

MTBF (Mean Time Between Failures)

  •  Average time between failures for a given component.
  •  Measured either in hours or years.
  • MTBF is inversely proportional to FITs.

MTTR (Mean Time To Repair)

  •  Average time to repair a given failure.
  •  Measured in hours.
  •  Availability is always quoted in terms of number of nines
  •  For example, carrier grade is 5 9’s, which is 99.999%
  • Availability is better understood in terms of unavailability in minutes per year
  • Therefore for an availability of 99.999%, the unavailability or downtime is 5.3 minutes per year
Data planes are sets of network elements which receive, send, and switch the network data.
 As per the Generalized Multi-Protocol Label Switching (GMPLS) standards, the following nomenclature is used for the various technology data planes:
  • Packet Switching Capable (PSC) layer
  • Layer-2 Switching Capable (L2SC) layer
  • Time Division Multiplexing (TDM) layer
  • Lambda Switching Capable (LSC) layer
  • Fiber-Switch Capable (FSC)

And as per layered-architecture concepts, the above technologies are correlated as follows:

  • Layer 3 for PSC (IP Routing)
  • Layer 2.5 for PSC (MPLS)
  • Layer 2 for L2SC (often Ethernet)
  • Layer 1.5 for TDM (often SONET/SDH)
  • Layer 1 for LSC (often WDM switch elements)
  • Layer 0 for FSC (often port switching devices based on optical or mechanical technologies)

**********************************************************************************************

In a “N” Layered Network Architecture, the services are grouped in a hierarchy of layers

– Layer N uses services of layer N-1

– Layer N provides services to layer N+1

A communication layer is completely defined by:

(a) a peer protocol, which specifies how entities at layer N communicate, and

(b) the service interface, which specifies how adjacent layers on the same system communicate.

 When talking about two adjacent layers,

(a) the higher layer is a service user, and

(b) the lower layer is a service provider

– The communication between entities at the same layer is logical

– The physical flow of data is vertical.

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0→1 and 1→0 transitions for the receiver to derive a clock with which to receive the digital information.

 

 

 

 

 

 

 

Actually, every add/drop multiplexer samples incoming bits according to a particular clock frequency. This clock frequency is recovered by using transitions between 1s and 0s in the incoming OC-N signal. If the incoming bit stream contained long strings of all 1s or all 0s, clock recovery would be difficult. So, to enable clock recovery at the receiver, such long strings of all 1s or 0s are avoided. This is achieved by a process called scrambling.

Scrambler is designed as shown in the figure given below:-

It is a frame-synchronous scrambler of sequence length 127. The generating polynomial is 1 + x^6 + x^7. The scrambler shall be reset to ‘1111111’ on the most significant bit of the byte following the Z0 byte of the Nth STS-1. That bit and all subsequent bits to be scrambled shall be added, modulo 2, to the output from the x^7 position of the scrambler, as shown in the figure above. Example:

The first 127 bits are:

111111100000010000011000010100 011110010001011001110101001111 010000011100010010011011010110 110111101100011010010111011100 0101010
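
A small sketch that reproduces this sequence from the 1 + x^6 + x^7 description above (variable names are illustrative):

    def sonet_scrambler_sequence(n_bits):
        """Frame-synchronous SONET/SDH scrambler, generating polynomial 1 + x^6 + x^7.
        The 7-bit register is reset to all ones at the start of each frame; the
        output of the x^7 stage is the scrambling bit, and the feedback is the
        XOR of the x^6 and x^7 stage outputs."""
        state = [1] * 7                      # stages x^1 .. x^7, reset to '1111111'
        out = []
        for _ in range(n_bits):
            out.append(state[6])             # x^7 output, XORed with the data
            feedback = state[5] ^ state[6]   # x^6 XOR x^7
            state = [feedback] + state[:6]   # shift the register
        return out

    seq = sonet_scrambler_sequence(127)
    print("".join(map(str, seq[:21])))       # 111111100000010000011 ...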

The same operation is used for descrambling. For example, the input data is 00000000001111111111.

        00000000001111111111  <-- input data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (scramble operation)
        11111110001110111110  <-- scrambled data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (descramble operation)
        00000000001111111111  <-- original data

The framing bytes A1 and A2, the Section Trace byte J0, and the Section Growth byte Z0 are not scrambled, to avoid the possibility that scrambled bytes in the frame might duplicate A1/A2 and cause a framing error. The receiver searches for the A1/A2 bit pattern in multiple consecutive frames, allowing it to gain bit and byte synchronization. Once bit synchronization is gained, everything is done, from there on, on byte boundaries – SONET/SDH is byte synchronous, not bit synchronous.

An identical operation called descrambling is done at the receiver to retrieve the bits.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial indicated above.  The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo-random bit sequence. Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled. That is why the framing bytes (A1, A2) are not scrambled.

References:http://www.electrosofts.com/sonet/scrambling.html

Frequency justification and pointers: positive/negative stuffing mechanism in SONET/SDH

When the input data has a rate lower than the output data rate of a multiplexer, positive stuffing occurs. The input is stored in a buffer at a rate controlled by the WRITE clock. Since the output (READ) clock rate is higher than the WRITE clock rate, the buffer content would be depleted or emptied. To avoid this condition, the buffer fill is constantly monitored and compared to a threshold. If the fill is below the threshold, the READ clock is inhibited and a stuffed bit is inserted into the output stream. Meanwhile, the input data stream continues to fill the buffer. The stuffed-bit location information must be transmitted to the receiver so that the receiver can remove the stuffed bit.

When the input data has a rate higher than the output data rate of a multiplexer, negative stuffing occurs. If negative stuffing occurs, the extra data can be transmitted through another channel. The receiver must know how to retrieve that data.

Positive Stuffing

If the frame rate of the STS SPE is too slow with respect to the transport overhead frame rate, then the alignment of the envelope should periodically slip back, i.e. the pointer should be incremented by one. This operation is indicated by inverting the I-bits of the 10-bit pointer. The byte right after the H3 byte is a stuff byte and should be ignored. The following frames contain the new pointer. For example, the 10-bit value of the H1/H2 pointer bytes is ‘0010010011’ for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       1000111001  <-- the I bits are inverted, positive stuffing
                                    is required.
          N+2       0010010100  <-- the pointer is increased by 1
Negative Stuffing

If the frame rate of the STS SPE is too fast with respect to the transport overhead frame rate, then the alignment of the envelope should periodically advance, i.e. the pointer should be decremented by one. This operation is indicated by inverting the D-bits of the 10-bit pointer. The H3 byte then contains actual data. The following frames contain the new pointer. For example, the 10-bit value of the H1/H2 pointer bytes is ‘0010010011’ for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       0111000110  <-- the D bits are inverted, negative stuffing
                                    is required.
          N+2       0010010010  <-- the pointer is decreased by 1
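
The I-bit/D-bit manipulation in the two examples above can be sketched as follows (a toy illustration of the flag mechanism only, not of a full pointer interpreter):

    def invert_bits(pointer10, which):
        """Invert the I-bits (positions 0,2,4,6,8 of 'IDIDIDIDID') or the D-bits
        (positions 1,3,5,7,9) of a 10-character pointer string."""
        start = 0 if which == "I" else 1
        bits = list(pointer10)
        for i in range(start, 10, 2):
            bits[i] = "1" if bits[i] == "0" else "0"
        return "".join(bits)

    ptr = "0010010011"                           # frame N
    print(invert_bits(ptr, "I"))                 # 1000111001  (positive stuff flag)
    print(format(int(ptr, 2) + 1, "010b"))       # 0010010100  (frame N+2, pointer + 1)
    print(invert_bits(ptr, "D"))                 # 0111000110  (negative stuff flag)
    print(format(int(ptr, 2) - 1, "010b"))       # 0010010010  (frame N+2, pointer - 1)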

Network Operation Center

A network operations center (NOC, pronounced like the word knock), also known as a “network management center”, is one or more locations from which network monitoring and control, or network management, is exercised over a computer, telecommunication or satellite network.

NOCs are implemented by business organizations, public utilities, universities, and government agencies that oversee complex networking environments that require high availability. NOC personnel are responsible for monitoring one or many networks for certain conditions that may require special attention to avoid degraded service. Organizations may operate more than one NOC, either to manage different networks or to provide geographic redundancy in the event of one site becoming unavailable.

In addition to monitoring internal and external networks of related infrastructure, NOCs can monitor social networks to get a head-start on disruptive events.

NOCs analyze problems, perform troubleshooting, communicate with site technicians and other NOCs, and track problems through resolution. When necessary, NOCs escalate problems to the appropriate stakeholders. For severe conditions that are impossible to anticipate, such as a power failure or a cut optical fiber cable, NOCs have procedures in place to immediately contact technicians to remedy the problem.

Primary responsibilities of NOC personnel may include:

  • Network monitoring
  • Incident response
  • Communications management
  • Reporting

NOCs often escalate issues in a hierarchic manner, so if an issue is not resolved in a specific time frame, the next level is informed to speed up problem remediation. NOCs sometimes have multiple tiers of personnel, which define how experienced and/or skilled a NOC technician is. A newly hired NOC technician might be considered a “tier 1”, whereas a technician that has several years of experience may be considered “tier 3” or “tier 4”. As such, some problems are escalated within a NOC before a site technician or other network engineer is contacted.

NOC personnel may perform extra duties; a network with equipment in public areas (such as a mobile network Base Transceiver Station) may be required to have a telephone number attached to the equipment for emergencies; as the NOC may be the only continuously staffed part of the business, these calls will often be answered there.

A Network Operations Center rests at the heart of every telecom network or major data center, a place to keep an eye on everything.

Some of these NOCs are really “dressed to impress”, while others have taken a more mundane approach.

So, for inspiration, here is a set of pictures of different NOCs from telecom companies and data centers (and one content delivery network) that we here at Pingdom have collected from around the internet.

Dressed to impress

These NOCs are obviously designed to impress visitors on top of being useful. A NOC also brings together some of the network’s best technical experts, as it acts as the heart that keeps network operations running.

Here is a glimpse of some of the world’s best NOCs!

Airtel Network Experience Center, Gurgaon, India

 

Reliance Communications’ NOC in India

 

AT&T’s Global NOC in Bedminster, New Jersey

 

Lucent’s Network Reliability Center in Aurora, Colorado (1998-99)

 

Conexim’s NOC in Australia

 

Akamai’s NOC in Cambridge, Massachusetts

 

Slightly more discreet

While still impressive on a smaller scale, these NOCs have taken a slightly more conventional approach. We noticed a divide here. Data centers tend to have more scaled-back NOCs while telecom companies often fall in the “dressed to impress” category, perhaps partly due to having more infrastructure to monitor than the average data center (and shareholders).

Easy CGI’s NOC in Pearl River, New York

 

Ensynch’s NOC in Tempe, Arizona

 

TWAREN’s NOC (Taiwan Advanced Research & Education Network)

 

The Planet’s NOC in Houston, Texas

 

KDL’s NOC in Evansville, Indiana

 

And the not-flashy-in-the-least award goes to…

Some of the small NOCs could be seen as 

 

Image sources:

AT&T NOC from AT&T, Reliance NOC from Suraj, Lucent NOC from Evans Consoles, Conexim NOC from Conexim, Akamai NOC from Akamai via Bert Boerland’s blog, Easy CGI NOC from Easy CGI, Ensynch NOC from Ensynch, TWAREN NOC from TWAREN, The Planet NOC from The Planet’s blog, Rackspace NOC from Naveenium, KDL NOC from Kentucky Data Link.

http://royal.pingdom.com/2008/05/21/gallery-of-network-operations-centers/