
FEC codes in optical communications are based on a class of codes known as Reed-Solomon codes.

A Reed-Solomon code is specified as RS(n, k), which means that the encoder takes k data bytes and adds (n − k) parity bytes to make an n-byte codeword. A Reed-Solomon decoder can correct up to t byte errors in the codeword, where 2t = n − k.

 

ITU-T Recommendation G.975 proposes a Reed-Solomon (255, 239) code. In this case 16 parity bytes are appended to 239 information-bearing bytes. The bit-rate increase is about 7% [(255 − 239)/239 ≈ 0.067], the code can correct up to 8 byte errors [(255 − 239)/2 = 8], and the coding gain can be shown to be about 6 dB.
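
The arithmetic above can be checked in a few lines of Python (rs_parameters is a hypothetical helper written for this article, not part of any standard library):

```python
# Hypothetical helper: derive the parity bytes, correction capability,
# and overhead of an RS(n, k) code from its two parameters.
def rs_parameters(n: int, k: int):
    parity = n - k            # parity bytes appended per codeword
    t = parity // 2           # correctable byte errors: 2t = n - k
    overhead = parity / k     # rate expansion relative to the k data bytes
    return parity, t, overhead

parity, t, overhead = rs_parameters(255, 239)
print(parity, t, round(overhead, 3))  # 16 8 0.067
```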

The same Reed-Solomon code (RS(255, 239)) is recommended in ITU-T G.709. The coding overhead is again about 7% for a 6 dB coding gain. Both G.975 and G.709 improve the efficiency of the Reed-Solomon code by interleaving data from different codewords. Interleaving carries an advantage for burst errors, because the errors can be spread across many different codewords. In the interleaving approach lies the main difference between G.709 and G.975: the G.709 interleaving is fully standardized, while the G.975 interleaving is not.

The actual G.975 data overhead also includes one bit of framing overhead, therefore the bit-rate expansion is [(255 − 238)/238 ≈ 0.071]. In G.709 the frame overhead is higher than in G.975, hence an even higher bit-rate expansion. A byte error occurs when 1 bit in a byte is wrong or when all the bits in a byte are wrong. Example: RS(255, 239) can correct 8 byte errors. In the worst case, 8 bit errors occur, each in a separate byte, so the decoder corrects only 8 bit errors. In the best case, 8 complete bytes are in error, so the decoder corrects 8 × 8 = 64 bit errors.

There are other, more powerful and complex RS variants (for example, concatenating two RS codes) capable of a coding gain 2 or 3 dB higher than the ITU-T FEC codes, but at the expense of an increased bit rate (sometimes as much as 25%).

FOR THE OTN FRAME, the RS(n, k) calculation is as follows:

*OPU1 payload rate = 2.488 Gbps (OC-48/STM-16)

 

*Add the 16 bytes of OPU1 and ODU1 overhead per row:

 

3808/16 = 238, (3808+16)/16 = 239

ODU1 rate: 2.488 × 239/238** ≈ 2.499 Gbps

*Add FEC:

OTU1 rate: ODU1 × 255/239 = 2.488 × 239/238 × 255/239

= 2.488 × 255/238 ≈ 2.666 Gbps

 

NOTE: 4080/16 = 255

**The multiplicative factor is simple math: e.g., for ODU1/OPU1 it is 3824/3808 = (239 × 16)/(238 × 16) = 239/238.

The value of the multiplicative factor gives the factor by which the frame size grows after the header/overhead is added.

Since we are using Reed-Solomon (255, 239), we are dividing the 4080 bytes into sixteen interleaved blocks. (The forward error correction for the OTUk uses 16-byte interleaved codecs using a Reed-Solomon RS(255, 239) code. The RS(255, 239) code operates on byte symbols.)

Hence 4080/16 = 255.
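
The rate derivation above can be reproduced numerically (a sketch assuming the nominal STM-16 rate of 2.48832 Gbps; the variable names are illustrative):

```python
# OTN rate build-up for OTU1, following the 239/238 and 255/239 factors above.
STM16 = 2.48832               # Gbps, nominal OPU1 payload rate

odu1 = STM16 * 239 / 238      # add the OPU1/ODU1 overhead columns (3824/3808)
otu1 = odu1 * 255 / 239       # add the RS(255,239) FEC columns (4080/3824)

print(round(odu1, 3), round(otu1, 3))  # 2.499 2.666
```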

Transparency here means transmission over the network without altering the original properties of the client signal.

G.709 defines the OPUk, which can contain an entire SDH signal. This means that one can transport 4 STM-16 signals in one OTU2 without modifying any of the SDH overhead.

Thus the transport of such client signals in the OTN is bit-transparent (i.e. the integrity of the whole client signal is maintained).

OTN is also timing transparent. The asynchronous mapping mode transfers the input timing (asynchronous mapping client) to the far end (asynchronous de-mapping client).

OTN is also delay transparent. For example if 4 STM-16 signals are mapped into ODU1’s and then multiplexed into an ODU2, their timing relationship is preserved until they are de-mapped back to ODU1’s.

Tandem Connection Monitoring (TCM)

A tandem system is also known as a cascaded system.

SDH monitoring is divided into section and path monitoring. A problem arises in the “Carrier’s Carrier” situation, where it is required to monitor a segment of a path that passes through another carrier’s network.

 

Tandem Connection Monitoring

Here Operator A needs Operator B to carry his signal, but he also needs a way of monitoring the signal as it passes through Operator B’s network. This is what a “tandem connection” is: a layer between line monitoring and path monitoring. SDH was modified to allow a single tandem connection; ITU-T Rec. G.709 allows six.

TCM1 is used by the user to monitor the quality of service (QoS) that they see. TCM2 is used by the first operator to monitor their end-to-end QoS. TCM3 is used by the various domains for intra-domain monitoring. TCM4 is used by Operator B for protection monitoring.

There is no standard for which TCM is used by whom; the operators must have an agreement so that they do not conflict.

TCMs also support monitoring of ODUk connections for one or more of the following network applications (refer to ITU-T Rec. G.805 and ITU-T Rec. G.872):

–          optical UNI-to-UNI tandem connection monitoring: monitoring the ODUk connection through the public transport network (from public-network ingress network termination to egress network termination)

–          optical NNI-to-NNI tandem connection monitoring: monitoring the ODUk connection through the network of a network operator (from operator-network ingress network termination to egress network termination)

–          sub-layer monitoring for linear 1+1, 1:1 and 1:n optical channel sub-network connection protection switching, to determine the signal fail and signal degrade conditions

–          sub-layer monitoring for optical channel shared protection ring (SPRING) protection switching, to determine the signal fail and signal degrade conditions

–          monitoring an optical channel tandem connection for the purpose of detecting a signal fail or signal degrade condition in a switched optical channel connection, to initiate automatic restoration of the connection during fault and error conditions in the network

–          monitoring an optical channel tandem connection for, e.g., fault localization or verification of delivered quality of service

A TCM field is assigned to a monitored connection. The number of monitored connections along an ODUk trail may vary between 0 and 6. Monitored connections can be nested, overlapping and/or cascaded.

 

ODUk monitored connections

Monitored connections A1-A2/B1-B2/C1-C2 and A1-A2/B3-B4 are nested, while monitored connections B1-B2/B3-B4 are cascaded.

Overlapping monitored connections are also supported.

 

Overlapping ODUk monitored connections

Channel Coding: A Walkthrough

This article is just for revising Channel Coding concepts.

Channel coding is the process that transforms binary data bits into signal elements that can cross the transmission medium. In the simplest case, on a metallic wire a binary 0 is represented by a lower voltage, and a binary 1 by a higher voltage. However, before selecting a coding scheme it is necessary to identify some of the strengths and weaknesses of line codes:

  • High-frequency components are not desirable because they require more channel bandwidth, suffer more attenuation, and generate crosstalk in electrical links.
  • Direct current (dc) components should be avoided because they require physical coupling of transmission elements. Since the earth/ground potential usually varies between remote communication ends, dc provokes unwanted earth-return loops.
  • The use of alternating current (ac) signals permits a desirable physical isolation using condensers and transformers.
  • Timing control permits the receiver to correctly identify each bit in the transmitted message. In synchronous transmission, the timing is referenced to the transmitter clock, which can be sent as a separate clock signal, or embedded into the line code. If the second option is used, then the receiver can extract its clock from the incoming data stream, thereby avoiding the installation of an additional line.

 

Figure 1.1: Line encoding technologies. AMI and HDB3 are usual in electrical signals, while CMI is often used in optical signals.

In order to meet these requirements, line coding is needed before the signal is transmitted, along with the corresponding decoding process at the receiving end. There are a number of different line codes that apply to digital transmission; the most widely used are alternate mark inversion (AMI), high-density bipolar three zeros (HDB3), and coded mark inverted (CMI).

 Nonreturn to zero 

Nonreturn to zero (NRZ) is a simple method consisting of assigning the bit “1” to the positive value of the signal amplitude (voltage), and the bit “0” to the negative value (see Figure 1.1). There are two serious disadvantages to this:

No timing information is included in the signal, which means that synchronism can easily be lost if, for instance, a long sequence of zeros is being received.

The spectrum of the signal includes a dc component.

Alternate mark inversion

Alternate mark inversion (AMI) is a transmission code, also known as pseudoternary, in which a “0” bit is transmitted as a null voltage and the “1” bits are represented alternately as positive and negative voltage. The digital signal coded in AMI is characterized as follows (see Figure 1.1):

            • The dc component of its spectrum is null.
            • It does not solve the problem of loss of synchronization with long sequences of zeros.
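
The AMI rule described above can be sketched as a toy encoder (illustrative only; a “0” maps to the null level and each mark alternates polarity):

```python
# Toy AMI encoder: "0" -> 0 (null level), "1" -> alternating +1/-1 marks.
def ami_encode(bits):
    out, last = [], -1
    for b in bits:
        if b == 0:
            out.append(0)          # spaces carry no pulse
        else:
            last = -last           # marks alternate polarity (no dc build-up)
            out.append(last)
    return out

print(ami_encode([1, 0, 1, 1, 0, 1]))  # [1, 0, -1, 1, 0, -1]
```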

Bit eight-zero suppression

Bit eight-zero suppression (B8ZS) is a line code in which bipolar violations are deliberately inserted if the user data contains a string of eight or more consecutive zeros. The objective is to ensure a sufficient number of transitions to maintain the synchronization when the user data stream contains a large number of consecutive zeros (see Figure 1.1 and Figure 1.2).

The coding has the following characteristics:

  • The timing information is preserved by embedding it in the line signal, even when long sequences of zeros are transmitted, which allows the clock to be recovered properly on reception.
  • The dc component of a signal that is coded in B8ZS is null.

 

Figure 1.2     B8ZS and HDB3 coding. Bipolar violations: V+ is a positive-level violation and V− a negative-level violation.

High-density bipolar three zeroes

High-density bipolar three zeroes (HDB3) is similar to B8ZS, but limits the maximum number of transmitted consecutive zeros to three (see Figure 1.5). The basic idea consists of replacing a series of four consecutive “0” bits with the code word “000V” or “B00V,” where “V” is a pulse that violates the AMI law of alternate polarity, and “B” is a pulse for balancing the polarity.

  • “B00V” is used when, up to the previous pulse, the coded signal presents a dc component that is not null (the number of positive pulses is not compensated by the number of negative pulses).
  • “000V” is used when, up to the previous pulse, the dc component is null (see Figure 1.6).
  • The pulse “B” (for balancing) respects the AMI alternation rule and has positive or negative polarity, ensuring that two consecutive “V” pulses have different polarity.
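
The substitution rules can be sketched in code using the common parity-of-marks formulation of the same idea (an odd number of marks since the last violation gives “000V”, an even number gives “B00V”); this is an illustrative encoder, not a reference implementation:

```python
# Toy HDB3 encoder: runs of four zeros become 000V or B00V so that
# consecutive V pulses alternate polarity and the dc component stays null.
def hdb3_encode(bits):
    out = []
    last = -1       # polarity of the most recent pulse (first mark will be +1)
    pulses = 0      # marks sent since the last violation
    i = 0
    while i < len(bits):
        if bits[i:i + 4] == [0, 0, 0, 0]:
            if pulses % 2:                    # odd: 000V, V repeats last polarity
                out += [0, 0, 0, last]
            else:                             # even: B00V, B rebalances the dc
                last = -last                  # B obeys the AMI alternation rule
                out += [last, 0, 0, last]     # V has the same polarity as B
            pulses = 0
            i += 4
        elif bits[i] == 0:
            out.append(0)
            i += 1
        else:
            last = -last                      # ordinary AMI mark
            out.append(last)
            pulses += 1
            i += 1
    return out

print(hdb3_encode([1, 0, 0, 0, 0, 1, 1, 0]))  # [1, 0, 0, 0, 1, -1, 1, 0]
```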

Coded mark inverted

The coded mark inverted (CMI) code, also based on AMI, is used instead of HDB3 at high transmission rates, because CMI coding and decoding circuits are simpler than their HDB3 counterparts at these rates. In this code a “0” is transmitted with a negative voltage level during the first half of the bit period and a positive level during the second half, while a “1” is transmitted at a constant level for the whole bit period, alternating polarity according to the AMI rule. The CMI code has the following characteristics (see Figure 1.1):

  • The spectrum of a CMI signal cancels out the components at very low frequencies.
  • It allows for the clock to be recovered properly, like the HDB3 code.
  • The bandwidth is greater than that of the spectrum of the same signal coded in AMI.

Rejuvenating PCM: Pulse Code Modulation

This article covers the very basics of PCM (pulse code modulation), the foundation of telecom networks.

The pulse code modulation (PCM) technology (see Figure 1.1) was patented and developed in France in 1938, but could not be used because suitable technology was not available until World War II. This came about with the arrival of digital systems in the 1960s, when improving the performance of communications networks became a real possibility. However, this technology was not completely adopted until the mid-1970s, due to the large number of analog systems already in place and the high cost of digital systems, as semiconductors were very expensive. PCM’s initial goal was to convert an analog voice telephone channel into a digital one based on the sampling theorem.

 

The sampling theorem states that, for digitization without information loss, the sampling frequency (fs) should be at least twice the maximum frequency component (fmax) of the analog information:

fs ≥ 2 × fmax

The frequency 2·fmax is called the Nyquist sampling rate. The sampling theorem is considered to have been articulated by Nyquist in 1928, and mathematically proven by Shannon in 1949. Some books use the term Nyquist sampling theorem, and others use Shannon sampling theorem. They are in fact the same theorem.

PCM involves three phases: sampling, quantization, and encoding:

In sampling, values are taken from the analog signal every 1/fs seconds (the sampling period).

 

Quantization assigns these samples a value by approximation, in accordance with a quantization curve (e.g., the ITU-T A-law).

Encoding provides the binary value of each quantized sample.
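
The three phases can be illustrated with a toy chain (a uniform 8-bit quantizer is used here for simplicity; real telephony applies the A-law or µ-law curves mentioned above):

```python
import math

fs = 8000         # Hz: sampling rate for a 4 kHz voice channel (2 x fmax)
levels = 256      # 8-bit encoding gives 256 quantization levels

def pcm_encode(signal_fn, n_samples):
    codes = []
    for n in range(n_samples):
        sample = signal_fn(n / fs)                   # sampling every 1/fs seconds
        q = round((sample + 1) / 2 * (levels - 1))   # quantization to 0..255
        codes.append(q)                              # encoding as an 8-bit value
    return codes

tone = lambda t: math.sin(2 * math.pi * 1000 * t)    # 1 kHz test tone
print(pcm_encode(tone, 4))  # [128, 218, 255, 218]
```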

 

If SDH is based on node and signal synchronization, why do fluctuations occur? A very common question from an optics beginner.

The answer lies in the practical limitations of synchronization. SDH networks use high-quality clocks feeding network elements. However, we must consider the following:

  • A number of SDH islands use their own reference clocks, which may be nominally identical, but never exactly the same.
  • Cross services carried by two or more operators always generate offset and clock fluctuations whenever a common reference clock is not used.
  • Inside an SDH network, different types of breakdown may occur and cause a temporary loss of synchronization. When a node switches over to a secondary clock reference, it may be different from the original, and it could even be the internal clock of the node.
  • Jitter and wander effects also cause fluctuations.

SDH/SONET: Maintenance and Performance Events

We know SDH/SONET is an older technology now, but let us have a glimpse of it to revise the basic fault-management process:

SDH SONET MAINTENANCE

SDH and SONET transmission systems are robust and reliable; however, they are vulnerable to several effects that may cause malfunction. These effects can be classified as follows:

  • Natural causes: These include thermal noise, always present in regeneration systems; solar radiation; humidity and Rayleigh fading in radio systems; hardware aging; degraded lasers; degradation of electrical connections; and electrostatic discharge.
  • Network design pitfalls: Bit errors due to bad synchronization in SDH. Timing loops may collapse a transmission network partially, or even completely.
  • Human intervention: This includes fiber cuts, electrostatic discharges, power failures, and topology modifications.

 

Anomalies and defects management. (In regular characters for SDH; in italic for SONET.)

All these may produce changes in performance, and eventually collapse transmission services.

SDH/SONET Events

SDH/SONET events are classified as anomalies, defects, damage, failures, and alarms depending on how they affect the service:

  • Anomaly: This is the smallest disagreement that can be observed between measured and expected characteristics. It could, for instance, be a bit error. If a single anomaly occurs, the service will not be interrupted. Anomalies are used to monitor performance and detect defects.

  • Defect: A defect level is reached when the density of anomalies is high enough to interrupt a function. Defects are used as input for performance monitoring, to control consequent actions, and to determine fault causes.

  • Damage or fault: This is produced when a function cannot finish a requested action. This situation does not comprise incapacities caused by preventive maintenance.
  • Failure: Here, the fault cause has persisted long enough so that the ability of an item to perform a required function may be terminated. Protection mechanisms can now be activated.
  • Alarm: This is a human-observable indication that draws attention to a failure (detected fault), usually giving an indication of the depth of the damage. For example, a light emitting diode (LED), a siren, or an e-mail.
  • Indication: Here events are notified upstream to the peer layer for performance monitoring, and eventually to request an action or a human intervention that can fix the situation.

Errors reflect anomalies, and alarms show defects. Terminology here is often used in a confusing way, in the sense that people may talk about errors but actually refer to anomalies, or use the word “alarm” to refer to a defect.

 

OAM management. Signals are sent downstream and upstream when events are detected at the LP edge (1, 2); HP edge (3, 4); MS edge (5, 6); and RS edge (7, 8).

In order to support single-ended operation, the defect status and the number of detected bit errors are sent back to the far-end termination by means of indications such as RDI, REI, or RFI.

 Monitoring Events

SDH frames contain a lot of overhead information to monitor and manage events. When events are detected, overhead channels are used to notify peer layers to run network protection procedures or evaluate performance. Messages are also sent to higher layers to indicate the local detection of a service-affecting fault to the far-end terminations.

Defects trigger a sequence of upstream messages using the G1 and V5 bytes. Downstream AIS signals are sent to indicate service unavailability. When defects are detected, upstream indications are sent to register and troubleshoot causes.

 Event Tables

 

PERFORMANCE MONITORING

SDH has performance-monitoring capabilities based on bit-error monitoring. A bit parity is calculated for all bits of the previous frame, and the result is sent as overhead. The far-end element repeats the calculation and compares it with the received overhead. If the results are equal, there is considered to be no bit error; otherwise, a bit-error indication is sent to the peer end.

A defect is understood as any serious or persistent event that holds up the transmission service. SDH defect processing reports and locates failures in either the complete end-to-end circuit (HP-RDI, LP-RDI) or a specific multiplex section between adjacent SDH nodes (MS-RDI).

Alarm indication signal

An alarm indication signal (AIS) is activated under standardized criteria and sent downstream in a path in the client layer to the next NE to inform it about the event. The AIS finally arrives at the NE at which that path terminates, where the client layer interfaces with the SDH network.

In answer to a received AIS, a remote defect indication is sent backwards. An RDI is indicated in a specific byte, while an AIS is a sequence of “1s” in the payload space. The permanent sequence of “1s” tells the receiver that a defect affects the service, and no information can be provided.

 

Depending on which service is affected, the AIS signal takes several forms:

  • MS-AIS: All bits except for the RSOH are set to the binary value “1.”
  • AU-AIS: All bits of the administrative unit are set to “1” but the RSOH and MSOH maintain their codification.
  • TU-AIS: All bits in the tributary unit are set to “1,” but the unaffected tributaries and the RSOH and MSOH maintain their codification.
  • PDH-AIS: All the bits in the tributary are “1.”

Enhanced remote defect indication 

Enhanced remote defect indication (E-RDI) provides the SDH network with additional information about the defect cause by means of differentiating:

  • Server defects: like AIS and LOP;
  • Connectivity defects: like TIM and UNEQ;
  • Payload defects: like PLM.

Enhanced RDI information is coded in G1 (bits 5-7) or in K4 (bits 5-7), depending on the path.

Many times we hear that we should implement unidirectional or bidirectional APS in the network. Some of the advantages of each are:

Unidirectional and bidirectional protection switching

Possible advantages of unidirectional protection switching include:

  • Unidirectional protection switching is a simple scheme to implement and does not require a protocol.
  • Unidirectional protection switching can be faster than bidirectional protection switching because it does not require a protocol.
  • Under multiple failure conditions there is a greater chance of restoring traffic by protection switching if unidirectional protection switching is used, than if bidirectional protection switching is used.

Possible advantages of bidirectional protection switching when uniform routing is used include:

  • With bidirectional protection switching operation, the same equipment is used for both directions of transmission after a failure. The number of breaks due to single failures will be smaller than if the path were delivered using different equipment.
  • With bidirectional protection switching, if there is a fault in one path of the network, transmission of both paths between the affected nodes is switched to the alternative direction around the network. No traffic is then transmitted over the faulty section of the network, so it can be repaired without further protection switching.
  • Bidirectional protection switching is easier to manage because both directions of transmission use the same equipment along the full length of the trail.
  • Bidirectional protection switching maintains equal delays for both directions of transmission. This may be important where there is a significant imbalance in the length of the trails, e.g., transoceanic links where one trail is via a satellite link and the other via a cable link.
  • Bidirectional protection switching also has the ability to carry extra traffic on the protection path.

The above is extracted from ITU-T G.841.

1.Overview

Availability is a probabilistic measure of the length of time a system or network is functioning.

  • Generally calculated as a percentage; e.g., 99.999% (referred to as “five nines” uptime) is carrier-grade availability.
  • A network has high availability when downtime/repair times are minimal.
  • For example, high-availability networks are down for minutes, whereas low-availability networks are down for hours.
  • Unavailability is the percentage of time a system is not functioning (downtime) and is generally expressed in minutes.
  • Unavailability = (1 − Availability) × 365 × 24 × 60 minutes per year
  • Unavailability (U) = MTTR/MTBF
  • The unavailability of a 99.999% available system is 5.3 minutes per year.
  • Availability is generally derived from either failure rates or mean time between failures (MTBF).
  • Availability calculations always assume a bidirectional system.
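
The downtime formula above in code form (a trivial sketch):

```python
# Annual downtime in minutes implied by an availability figure.
def downtime_minutes_per_year(availability):
    return (1 - availability) * 365 * 24 * 60

print(round(downtime_minutes_per_year(0.99999), 1))  # 5.3
```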

2.Circuit vs. Nodal Availability

Circuit and nodal availability measure different quantities. To explain this clearly, unavailability (Unavailability = 1 − Availability) will be used in this section.

  • Circuit unavailability is a measure of the average downtime of a traffic demand/service.
    • A circuit is unavailable only if traffic-affecting components that help transport the demand/service have failed.
    • Circuit unavailability is calculated by considering the unavailabilities of the traffic-affecting components and by taking into account those components that are hardware protected.
    • For example, the failure of both 10G line cards on an NE can cause a traffic outage.
  • Nodal unavailability is a measure of the average downtime of a node.
    • Each time there is a failure in a node, whether it is traffic affecting or not, an engineer is required to visit the node to fix the failure.
    • Therefore, although nodal unavailability is based on calculated failure rates, it is still a direct measure of operational expenditure.
    • Nodal unavailability is calculated by adding all components of a network element regardless of hardware protection, i.e., in series.
    • For example, failure of a protected switch card is not traffic affecting but still requires a site visit for replacement.

3.Terms & Definitions

Failure rate

  •  Failure rate is usually measured in Failures in Time (FIT), where one FIT equals a single failure in one billion (10^9) hours of operation.
  •  FITs are calculated according to industry standard (Telcordia SR 332).

MTBF (Mean time between failures)

  •  Average time between failures for a given component.
  •  Measured either in hours or years.
  • MTBF is inversely proportional to FITs.
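
The FIT/MTBF relationship can be sketched as a plain unit conversion (no library assumptions):

```python
# 1 FIT = 1 failure per 1e9 device-hours, so the two measures are reciprocal.
def fit_to_mtbf_hours(fits):
    return 1e9 / fits

def mtbf_to_fits(mtbf_hours):
    return 1e9 / mtbf_hours

print(fit_to_mtbf_hours(1000))  # 1000000.0 hours (about 114 years)
```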

MTTR (Mean time to repair)

  •  Average time to repair a given failure.
  •  Measured in hours.
  •  Availability is always quoted in terms of number of nines
  •  For example, carrier grade is 5 9’s, which is 99.999%
  • Availability is better understood in terms of unavailability in minutes per year
  • Therefore for an availability of 99.999%, the unavailability or downtime is 5.3 minutes per year
The data plane is the set of network elements that receive, send, and switch the network data.
As per the Generalized Multi-Protocol Label Switching (GMPLS) standards, the following nomenclature is used for the various technology data planes:
  • Packet Switching Capable (PSC) layer
  • Layer-2 Switching Capable (L2SC) layer
  • Time Division Multiplexing (TDM) layer
  • Lambda Switching Capable (LSC) layer
  • Fiber-Switch Capable (FSC) layer

And as per layered-architecture concepts, the above technologies are correlated as:

  • Layer 3 for PSC (IP Routing)
  • Layer 2.5 for PSC (MPLS)
  • Layer 2 for L2SC (often Ethernet)
  • Layer 1.5 for TDM (often SONET/SDH)
  • Layer 1 for LSC (often WDM switch elements)
  • Layer 0 for FSC (often port switching devices based on optical or mechanical technologies)

**********************************************************************************************

In a “N” Layered Network Architecture, the services are grouped in a hierarchy of layers

– Layer N uses services of layer N-1

– Layer N provides services to layer N+1

A communication layer is completely defined by

(a) A peer protocol which specifies how entities at layer-N communicate.

(b) The service interface, which specifies how adjacent layers on the same system communicate

 When talking about two adjacent layers,

(a) the higher layer is a service user, and

(b) the lower layer is a service provider

– The communication between entities at the same layer is logical

– The physical flow of data is vertical.

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0→1 and 1→0 transitions for the receiver to derive a clock with which to receive the digital information.


Actually, every add/drop multiplexer samples incoming bits according to a particular clock frequency. This clock frequency is recovered using transitions between 1s and 0s in the incoming OC-N signal. Suppose the incoming bit stream contains long strings of all 1s or all 0s; then clock recovery would be difficult. So, to enable clock recovery at the receiver, such long strings of all 1s or 0s are avoided. This is achieved by a process called scrambling.

The scrambler is designed as shown in the figure below:

It is a frame-synchronous scrambler of sequence length 127. The generating polynomial is 1 + x^6 + x^7. The scrambler is reset to ‘1111111’ on the most significant bit of the byte following the Z0 byte in the Nth STS-1. That bit and all subsequent bits to be scrambled are added, modulo 2, to the output from the x^7 position of the scrambler, as shown in the figure above. Example:

The first 127 bits are:

111111100000010000011000010100 011110010001011001110101001111 010000011100010010011011010110 110111101100011010010111011100 0101010

The same operation is used for descrambling. For example, the input data is 00000000001111111111.

        00000000001111111111  <-- input data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (scramble operation)
        11111110001110111110  <-- scrambled data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (descramble operation)
        00000000001111111111  <-- original data

The framing bytes A1 and A2, the Section Trace byte J0, and the Section Growth byte Z0 are not scrambled, to avoid the possibility that bytes in the frame might duplicate A1/A2 and cause a framing error. The receiver searches for the A1/A2 bit pattern in multiple consecutive frames, allowing it to gain bit and byte synchronization. Once byte synchronization is gained, everything from there on is done on byte boundaries: SONET/SDH is byte-synchronous, not bit-synchronous.

An identical operation called descrambling is done at the receiver to retrieve the bits.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial indicated above.  The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo-random bit sequence. Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled. That is why the framing bytes (A1, A2) are not scrambled.
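
The scrambler described above (generator polynomial 1 + x^6 + x^7, state reset to all ones at the start of each frame) can be modeled bit by bit; the sketch below reproduces the start of the 127-bit sequence quoted earlier:

```python
# Frame-synchronous scrambler model: output taken from the x^7 stage,
# feedback = x^6 XOR x^7 shifted back into x^1.
def scramble_sequence(n):
    state = [1] * 7                  # x1..x7, reset to '1111111' at frame start
    out = []
    for _ in range(n):
        out.append(state[6])         # output bit from the x^7 position
        fb = state[5] ^ state[6]     # feedback = x^6 XOR x^7
        state = [fb] + state[:6]     # shift, feedback enters at x^1
    return out

print(''.join(map(str, scramble_sequence(30))))  # 111111100000010000011000010100

# Scrambling and descrambling are the same XOR with this sequence:
data = [0] * 10 + [1] * 10
scrambled = [d ^ s for d, s in zip(data, scramble_sequence(20))]
assert [c ^ s for c, s in zip(scrambled, scramble_sequence(20))] == data
```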

Reference: http://www.electrosofts.com/sonet/scrambling.html

Frequency justification and pointers: the positive/negative stuffing mechanism in SONET/SDH

When the input data rate is lower than the output data rate of a multiplexer, positive stuffing occurs. The input is stored in a buffer at a rate controlled by the WRITE clock. Since the output (READ) clock rate is higher than the WRITE clock rate, the buffer content will be depleted or emptied. To avoid this condition, the buffer fill is constantly monitored and compared to a threshold. If the fill is below the threshold, the READ clock is inhibited and a stuffed bit is inserted into the output stream. Meanwhile, the input data stream continues filling the buffer. The stuffed-bit location information must be transmitted to the receiver so that the receiver can remove the stuffed bit.

When the input data rate is higher than the output data rate of a multiplexer, negative stuffing occurs. In that case the extra data can be transmitted through another channel, and the receiver must know how to retrieve it.

Positive Stuffing

If the frame rate of the STS SPE is too slow with respect to the transport frame rate, the alignment of the envelope must periodically slip back, i.e., the pointer must be incremented by one. This operation is indicated by inverting the I bits of the 10-bit pointer. The byte right after the H3 byte is a stuff byte and should be ignored. The following frames contain the new pointer value. For example, the 10 pointer bits of the H1 and H2 bytes have the value ‘0010010011’ in STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       1000111001  <-- the I bits are inverted, positive stuffing
                                    is required.
          N+2       0010010100  <-- the pointer is increased by 1
Negative Stuffing

If the frame rate of the STS SPE is too fast with respect to the transport frame rate, the alignment of the envelope must periodically advance, i.e., the pointer must be decremented by one. This operation is indicated by inverting the D bits of the 10-bit pointer. The H3 byte then contains actual data. The following frames contain the new pointer value. For example, the 10 pointer bits of the H1 and H2 bytes have the value ‘0010010011’ in STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       0111000110  <-- the D bits are inverted, negative stuffing
                                    is required.
          N+2       0010010010  <-- the pointer is decreased by 1
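The justification signalling shown in the two tables above can be checked programmatically. This is a sketch: `classify_pointer` is a hypothetical helper name, and accepting a majority (3 or more of the 5 bits inverted) is the usual receiver practice for tolerating bit errors.

```python
# Sketch of how a receiver could classify a new 10-bit H1/H2 pointer word
# against the previous one (bits interleaved as IDIDIDIDID).

def classify_pointer(prev: str, curr: str) -> str:
    i_inverted = sum(prev[i] != curr[i] for i in range(0, 10, 2))  # I bits
    d_inverted = sum(prev[i] != curr[i] for i in range(1, 10, 2))  # D bits
    if i_inverted >= 3 and d_inverted < 3:
        return "positive justification"   # stuff byte follows H3
    if d_inverted >= 3 and i_inverted < 3:
        return "negative justification"   # H3 carries live data
    return "no justification" if prev == curr else "new pointer value"

print(classify_pointer("0010010011", "1000111001"))  # positive justification
print(classify_pointer("0010010011", "0111000110"))  # negative justification
```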

Network Operations Center

A network operations center (NOC, pronounced like the word “knock”), also known as a “network management center”, is one or more locations from which network monitoring and control, or network management, is exercised over a computer, telecommunication or satellite network.

NOCs are implemented by business organizations, public utilities, universities, and government agencies that oversee complex networking environments that require high availability. NOC personnel are responsible for monitoring one or many networks for certain conditions that may require special attention to avoid degraded service. Organizations may operate more than one NOC, either to manage different networks or to provide geographic redundancy in the event of one site becoming unavailable.

In addition to monitoring internal and external networks of related infrastructure, NOCs can monitor social networks to get a head-start on disruptive events.

NOCs analyze problems, perform troubleshooting, communicate with site technicians and other NOCs, and track problems through resolution. When necessary, NOCs escalate problems to the appropriate stakeholders. For severe conditions that are impossible to anticipate, such as a power failure or a cut optical fiber cable, NOCs have procedures in place to immediately contact technicians to remedy the problem.

Primary responsibilities of NOC personnel may include:

  • Network monitoring
  • Incident response
  • Communications management
  • Reporting

NOCs often escalate issues in a hierarchic manner, so if an issue is not resolved in a specific time frame, the next level is informed to speed up problem remediation. NOCs sometimes have multiple tiers of personnel, which define how experienced and/or skilled a NOC technician is. A newly hired NOC technician might be considered a “tier 1”, whereas a technician that has several years of experience may be considered “tier 3” or “tier 4”. As such, some problems are escalated within a NOC before a site technician or other network engineer is contacted.

NOC personnel may perform extra duties; a network with equipment in public areas (such as a mobile network Base Transceiver Station) may be required to have a telephone number attached to the equipment for emergencies; as the NOC may be the only continuously staffed part of the business, these calls will often be answered there.

A Network Operations Center rests at the heart of every telecom network or major data center, a place to keep an eye on everything.

Some of these NOCs are really “dressed to impress”, while others have taken a more mundane approach.

So, for inspiration, here is a set of pictures of different NOCs from telecom companies and data centers (and one content delivery network) that we here at Pingdom have collected from around the internet.

Dressed to impress

These NOCs are obviously designed to impress visitors on top of being useful. A NOC also concentrates a network’s best technical experts, since it acts as the heart of network operations.

Here is a glimpse of some of the world’s best NOCs!

Airtel Network Experience Center, Gurgaon, India

 

Reliance Communications’ NOC in India

 

AT&T’s Global NOC in Bedminster, New Jersey

 

Lucent’s Network Reliability Center in Aurora, Colorado (1998-99)

 

Conexim’s NOC in Australia

 

Akamai’s NOC in Cambridge, Massachusetts

 

Slightly more discreet

While still impressive on a smaller scale, these NOCs have taken a slightly more conventional approach. We noticed a divide here. Data centers tend to have more scaled-back NOCs while telecom companies often fall in the “dressed to impress” category, perhaps partly due to having more infrastructure to monitor than the average data center (and shareholders).

Easy CGI’s NOC in Pearl River, New York

 

Ensynch’s NOC in Tempe, Arizona

 

TWAREN’s NOC (Taiwan Advanced Research & Education Network)

 

The Planet’s NOC in Houston, Texas

 

KDL’s NOC in Evansville, Indiana

 

And the not-flashy-in-the-least award goes to…

Some of the small NOCs could be seen as 

 

Image sources:

AT&T NOC from AT&T, Reliance NOC from Suraj, Lucent NOC from Evans Consoles, Conexim NOC from Conexim, Akamai NOC from Akamai via Bert Boerland’s blog, Easy CGI NOC from Easy CGI, Ensynch NOC from Ensynch, TWAREN NOC from TWAREN, The Planet NOC from The Planet’s blog, Rackspace NOC from Naveenium, KDL NOC from Kentucky Data Link.

http://royal.pingdom.com/2008/05/21/gallery-of-network-operations-centers/

Here we will discuss the advantages of OTN (Optical Transport Network) over SDH/SONET.

The OTN architecture concept was initially developed by the ITU-T a decade ago, to build upon the Synchronous Digital Hierarchy (SDH) and Dense Wavelength-Division Multiplexing (DWDM) experience and to provide bit-rate efficiency, resiliency and management at high capacity. OTN therefore looks a lot like Synchronous Optical Networking (SONET)/SDH in structure, but with less overhead and more management features.

It is a common misconception that OTN is just SDH with a few insignificant changes. Although the multiplexing structure and terminology look the same, the changes in OTN have a great impact on its use in, for example, a multi-vendor, multi-domain environment. OTN was created to be a carrier technology, which is why emphasis was put on enhancing transparency, reach, scalability and monitoring of signals carried over large distances and through several administrative and vendor domains.

The advantages of OTN compared to SDH are mainly related to the introduction of the following changes:

Transparent Client Signals:

In OTN the Optical Channel Payload Unit-k (OPUk) container is defined to include the entire SONET/SDH and Ethernet signal, including associated overhead bytes, which is why no modification of the overhead is required when transporting through OTN. This allows the end user to view exactly what was transmitted at the far end and decreases the complexity of troubleshooting as transport and client protocols aren’t the same technology.

OTN uses asynchronous mapping and demapping of client signals, which is another reason why OTN is timing transparent.

Better Forward Error Correction:

OTN has increased the number of bytes reserved for Forward Error Correction (FEC), allowing a theoretical improvement of the Signal-to-Noise Ratio (SNR) by 6.2 dB. This improvement can be used to enhance the optical systems in the following areas:

  • Increase the reach of optical systems by increasing span length or increasing the number of spans.
  • Increase the number of channels in the optical systems, as the theoretically required power has been lowered by 6.2 dB, thus also reducing the non-linear effects, which depend on the total power in the system.
  • The increased power budget can ease the introduction of transparent optical network elements, which can’t be introduced without a penalty. These elements include Optical Add-Drop Multiplexers (OADMs), Photonic Cross-Connects (PXCs), splitters, etc., which are fundamental for the evolution from point-to-point optical networks to meshed ones.
  • The FEC part of OTN has been utilised on the line side of DWDM transponders for at least the last 5 years, allowing a significant increase in reach/capacity.

Better scalability:

The old transport technologies like SONET/SDH were created to carry voice circuits, which is why the granularity was very fine – down to 1.5 Mb/s. OTN is designed to carry a payload of greater bulk, which is why the granularity is coarser and the multiplexing structure less complicated.

Tandem Connection Monitoring:

The introduction of additional (six) Tandem Connection Monitoring (TCM) levels, combined with the decoupling of transport and payload protocols, allows a significant improvement in monitoring signals that are transported through several administrative domains, e.g. a meshed network topology where the signals are transported through several other operators before reaching the end users.

In a multi-domain scenario – “a classic carrier’s carrier scenario”, where the originating domain can’t ensure performance or even monitor the signal when it passes to another domain – TCM introduces a performance monitoring layer between line and path monitoring allowing each involved network to be monitored, thus reducing the complexity of troubleshooting as performance data is accessible for each individual part of the route.

A major drawback of SDH is that a lot of capacity during packet transport is wasted in overhead and stuffing, which can also create delays in transmission, leading to problems for the end application, especially if it is designed for asynchronous, bursty communication behavior. This over-complexity is probably one of the reasons why the evolution of SDH has stopped at STM-256 (40 Gbps).

References: OTN and NG-OTN: Overview by GEANT

What is Q-factor ?

Q-factor measurement occupies an intermediate position between the classical optical parameters (power, OSNR, and wavelength) and the digital end-to-end performance parameters based on BER. A Q-factor is measured in the time domain by analyzing the statistics of the pulse shape of the optical signal. It is a comprehensive measure of the signal quality of an optical channel, taking into account the effects of noise, filtering, and linear/non-linear distortions on the pulse shape, which is not possible with simple optical parameters alone.

Definition 1:

The Q-factor, a function of the OSNR, provides a qualitative description of the receiver performance. The Q-factor suggests the minimum signal-to-noise ratio (SNR) required to obtain a specific BER for a given signal. OSNR is measured in decibels. The higher the bit rate, the higher the OSNR ratio required. For OC-192 transmissions, the OSNR should be at least 27 to 31 dB compared to 18 to 21 dB for OC-48.

 Definition 2:

The Quality factor is a measure of how noisy a pulse is for diagnostic purposes. The eye pattern oscilloscope will typically generate a report that shows what the Q factor number is. The Q factor is defined as shown in the figure: the difference of the mean values of the two signal levels (level for a “1” bit and level for a “0” bit) divided by the sum of the noise standard deviations at the two signal levels. A larger number in the result means that the pulse is relatively free from noise.
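Definition 2 can be applied directly to sampled eye levels. The following is a sketch with made-up sample values in arbitrary units; `q_factor` is a hypothetical helper name.

```python
# Applying Definition 2: (difference of the mean '1' and '0' levels)
# divided by (sum of the noise standard deviations at the two levels).
from statistics import mean, pstdev

def q_factor(ones, zeros):
    return (mean(ones) - mean(zeros)) / (pstdev(ones) + pstdev(zeros))

ones  = [0.98, 1.02, 1.00, 1.04, 0.96]    # samples taken at the '1' rail
zeros = [0.02, -0.02, 0.00, 0.04, -0.04]  # samples taken at the '0' rail
print(round(q_factor(ones, zeros), 1))    # a large Q means a clean pulse
```

A larger result means the pulse is relatively free from noise, exactly as the definition states.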

 Definition 3:

Q is defined as follows: The ratio between the sums of the distance from the decision point within the eye (D) to each edge of the eye, and the sum of the RMS noise on each edge of the eye.

This definition can be derived from the following expression, which in turn comes from ITU-T G.976 (ref. 3):

    Q = (µ1 − µ0) / (σ1 + σ0)

where µ1,0 are the mean positions of each rail of the eye, and σ1,0 are the standard deviations (RMS noise) present on each of these rails.

For an illustration of where these values lie within the eye see the following figure:

 

As Q is a ratio, it is reported as a unit-less positive value greater than 1 (Q > 1). A Q of 1 represents complete closure of the received optical eye. To give some idea of the associated raw BER, a Q of 6 corresponds to a raw BER of 10^-9.
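The quoted correspondence (Q = 6 giving a raw BER near 10^-9) follows from the Gaussian-noise formula BER = ½·erfc(Q/√2) at the optimum decision threshold, and can be checked directly:

```python
# Raw BER from Q under the Gaussian-noise assumption, optimum threshold.
import math

def ber_from_q(q):
    return 0.5 * math.erfc(q / math.sqrt(2))

print(f"{ber_from_q(6):.1e}")  # ~1e-9, as stated above
```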

Q factor as defined in ITU-T G.976

The Q factor is the signal-to-noise ratio at the decision circuit in voltage or current units, and is typically expressed by:

    Q = (µ1 − µ0) / (σ1 + σ0)                                        (A-1)

where µ1,0 is the mean value of the marks/spaces voltages or currents, and σ1,0 is the corresponding standard deviation.

The mathematical relations to BER, when the threshold is set to the optimum value, are:

    BER_opt = (1/2) · erfc(Q / √2)                                   (A-2)

with:

    erfc(x) = (2/√π) ∫_x^∞ exp(−β²) dβ                               (A-3)

 

The Q factor can be written in terms of decibels rather than in linear values:

    Q(dB) = 20 · log10(Q)                                            (A-4)

 

Calculation of Q-Factor from OSNR

The OSNR is the most important parameter that is associated with a given optical signal. It is a measurable (practical) quantity for a given network, and it can be calculated from the given system parameters. The following sections show you how to calculate OSNR. This section discusses the relationship of OSNR to the Q-factor.

The logarithmic value of Q (in dB) is related to the OSNR by the following equation:

    Q² = OSNR · (B0 / Bc)

In the equation, B0 is the optical bandwidth of the end device (photodetector) and Bc is the electrical bandwidth of the receiver filter.

Therefore, Q(dB) is given by:

    Q(dB) = OSNR(dB) + 10 · log10(B0 / Bc)

In other words, Q is somewhat proportional to the OSNR. Generally, noise calculations are performed by optical spectrum analyzers (OSAs) or sampling oscilloscopes, and these measurements are carried out over a particular measuring bandwidth Bm. Typically, Bm is approximately 0.1 nm or 12.5 GHz for a given OSA. From the equation for Q in dB in terms of OSNR, it can be seen that if B0 < Bc, then OSNR(dB) > Q(dB). For practical designs OSNR(dB) > Q(dB), by at least 1–2 dB. Typically, while designing a high-bit-rate system, the margin at the receiver is approximately 2 dB, such that Q is about 2 dB smaller than OSNR(dB).
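The relation between OSNR and Q can be sketched numerically, assuming the approximation Q² ≈ OSNR·(B0/Bc); the function name and default bandwidths are illustrative choices, not values from the text.

```python
# OSNR(dB) to Q(dB), assuming Q^2 = OSNR_linear * (Bo / Bc).
import math

def q_db_from_osnr(osnr_db, b_o=12.5e9, b_c=10e9):
    osnr_lin = 10 ** (osnr_db / 10)
    q_lin = math.sqrt(osnr_lin * b_o / b_c)
    return 20 * math.log10(q_lin)

# Equivalently in dB: Q(dB) = OSNR(dB) + 10*log10(Bo/Bc),
# so with Bo < Bc the Q(dB) falls below the OSNR(dB).
print(round(q_db_from_osnr(20.0), 2))
```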

The Q-factor is, in fact, a metric to identify the attenuation in the received signal and determine a potential LOS, and it is an estimate of the Optical Signal-to-Noise Ratio (OSNR) at the optical receiver. As attenuation in the received signal increases, the dBQ value drops, and vice-versa. Hence a drop in the dBQ value can mean that there is an increase in the pre-FEC BER, and a possible LOS could occur if the problem is not corrected in time.

Reference:

ITU-T G.976

What is an eye diagram?

An eye diagram is an overlay of all possible received bit sequences, e.g. the sum of…

eyediagram

Note: this should really be an overlay of “infinitely long” bit sequences to get a true eye, since that captures all potential inter-symbol interference.

results in…

(the above is considered to be an “open” eye)

 Eye diagrams can be used to evaluate distortion in the received signal, e.g. a “closed” eye

Note: the wider the eye opening, the more error-free the network.

The hex code F628 is transmitted in every frame of every STS-1.

This allows a receiver to locate the alignment of the 125 µs frame within the received serial bit stream. Initially, the receiver scans the serial stream for the code F628. Once it is detected, the receiver watches to verify that the pattern repeats after exactly 810 STS-1 bytes, after which it can declare the frame found. Once the frame alignment is found, the remaining signal can be descrambled and the various overhead bytes extracted and processed.

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0-1 and 1-0 transitions for the receiver to derive a clock with which to receive the digital information.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial 1 + x^6 + x^7.  The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo random bit sequence.  Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled.  That is why the frame bytes are not scrambled.
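The frame-synchronous scrambler can be sketched as follows. The generator polynomial 1 + x^6 + x^7 and the all-ones reset state are the standard SONET/SDH choices; the function names are illustrative.

```python
# Sketch of the SONET/SDH frame-synchronous scrambler: generator polynomial
# 1 + x^6 + x^7, 7-bit register reset to all ones at the start of every frame.

def scrambler_keystream(nbits, state=0b1111111):
    out = []
    for _ in range(nbits):
        ks = (state >> 6) & 1                   # output = oldest bit (x^7 tap)
        fb = ((state >> 6) ^ (state >> 5)) & 1  # feedback = x^7 XOR x^6
        out.append(ks)
        state = ((state << 1) | fb) & 0x7F
    return out

def scramble(bits):
    """XOR with the keystream; applying the same call again descrambles."""
    return [b ^ k for b, k in zip(bits, scrambler_keystream(len(bits)))]

print(scrambler_keystream(8))  # [1, 1, 1, 1, 1, 1, 1, 0] -- first byte 0xFE
```

Because scrambling is a plain XOR with a frame-synchronous keystream, descrambling is the identical operation, which is why the receiver must first find the (unscrambled) framing bytes.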

 

One more interesting answer, from ITU-T standardisation expert Huub van Helvoort:

The initial “F” provides four consecutive “1” bits for the clock recovery circuit to lock the clock. The number of “0”s and “1”s in F628 is equal (= 8 each), ensuring DC balance in the signal (after line coding).

It has nothing to do with the scrambling; the pattern may even occur in the scrambled signal. However, it is very unlikely that this occurs exactly every 125 µs, so there will be no false lock to this pattern.

An explanation of Huub’s DC balancing, with reference to NEXT GENERATION TRANSPORT NETWORKS: Data, Management, and Control Planes by Manohar Naidu Ellanti:

A factor that was not initially anticipated when the A1 and A2 bit patterns were chosen was the potential effect on the laser for higher rate signals. For an STS-N signal, the frame begins with N adjacent A1 bytes followed by N adjacent A2 bytes. Note that in A1 there are more 1s than 0s, which means that the laser is on for a higher percentage of the time during the A1 bytes than during the A2 bytes, which have more 0s than 1s. As a result, if the laser is directly modulated, for large values of N the lack of balance between 0s and 1s causes the transmitting laser to become hotter during the string of A1 bytes and cooler during the string of A2 bytes. The thermal drift affects the laser performance such that the signal level changes, making it difficult for the receiver threshold detector to track. Most high-speed systems have addressed this problem by keeping the laser continuously on and modulating the signal with a shutter after the laser.

We know that in SDH the frame rate is fixed, i.e. one frame every 125 µs.

In OTN, by contrast, the frame period is variable while the frame size is fixed.

So, the frame period for OTN can be calculated by the following method:

Frame Period (µs) = ODUk frame size (bits) / ODUk bit rate (bits/s) ……(1)

Also, we know that

STM-16 = OPU1 payload rate = 16 × 9 × 270 × 8 × 8000 = 2,488,320,000 bits/s

Now assume a multiplicative factor (Mk)** for the calculation of the various rates:

Mk: OPUk = 238/(239−k);  ODUk = 239/(239−k);  OTUk = 255/(239−k)

Now, the master formula to calculate the bit rate for the different O(P/D/T)Uk signals is

Bit Rate for O(P/D/T)Uk (b/s) = Mk × X × STM-16 = Mk × X × 2,488,320,000 b/s ………..(2)

where X is the granularity in multiples of STM-16 (X = 1, 4, 16 for k = 1, 2, 3).

Putting the values from equation (2) into equation (1) gives the OTN frame periods.

E.g. the ODU1 frame period = 122,368 bits / 2,498,775,126 b/s ≈ 48.97 µs (the ODUk frame is 4 × 3824 × 8 = 122,368 bits).
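The formulas above can be sketched in a few lines of code. The names are illustrative; the ODUk frame of 4 rows × 3824 columns × 8 bits is taken from G.709, and the Mk pattern is applied for k = 1, 2, 3 with X = 1, 4, 16.

```python
# Sketch: OTUk/ODUk/OPUk bit rates from the STM-16 base rate, and the
# ODUk frame period from the fixed frame size.

STM16 = 2_488_320_000                 # bits/s
X = {1: 1, 2: 4, 3: 16}               # STM-16 multiples per level k

def rates(k):
    base = X[k] * STM16
    return {
        "OPUk": 238 / (239 - k) * base,
        "ODUk": 239 / (239 - k) * base,
        "OTUk": 255 / (239 - k) * base,
    }

def odu_frame_period_us(k):
    frame_bits = 4 * 3824 * 8         # ODUk frame size, independent of k
    return frame_bits / rates(k)["ODUk"] * 1e6

print(round(rates(1)["OTUk"] / 1e9, 6))   # 2.666057 Gbit/s
print(round(odu_frame_period_us(1), 2))   # 48.97 us
```

Note that the frame period shrinks as k grows (the frame size stays fixed while the bit rate rises), which is exactly the “fixed size, variable rate” property described above.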

For further queries revert:)

**The multiplicative factor is just simple math: e.g. for ODU1/OPU1 = 3824/3808 = (239 × 16)/(238 × 16) = 239/238.

The value of the multiplicative factor gives the relative rate increase after adding the header/overhead.

Since we are using Reed-Solomon (255,239), the 4080 bytes of an OTUk row are divided into sixteen codewords (the forward error correction for the OTUk uses 16-byte interleaved codecs using a Reed-Solomon RS(255,239) code; the RS(255,239) code operates on byte symbols).

Hence 4080/16 = 255 bytes per codeword.

>> SPECIFICATION COMPARISON

>CHARACTERISTICS COMPARISON 

>>WDM BAND

References

EDFA-Erbium Doped Fiber Amplifier


RAMAN AMPLIFIER

SOA-Semiconductor Optical Amplifier

Latency in Fiber Optic Networks

As we are all very much aware, Internet traffic is growing very fast. The more information we transmit, the more we need to think about parameters like available bandwidth and latency. Bandwidth is usually understood by end-users as the important indicator and measure of network performance. It is surely a reliable figure of merit, but it mainly depends on the characteristics of the equipment. Unlike bandwidth, latency and jitter depend on the specific context of transmission network topology and traffic conditions.

By latency we understand the delay from the time of packet transmission at the sender to the end of packet reception at the receiver. If latency is too high, it spreads data packets over time and can create the impression that an optical metro network is not operating at the expected data transmission speed. Data packets are still transported at the same bit rate, but the delay affects the overall transmission system performance.

It should be pointed out that low latency optical networks are needed in almost all industries where data transmission takes place. Low latency is becoming a critical requirement for a wide set of applications like financial transactions, videoconferencing, gaming, telemedicine and cloud services, which require a transmission line with almost no delay. These industries are summarized in Table 1 below.

Table 1. Industries where low latency services are very important .

In fiber optical networks, latency consists of three main components which add extra time delay:

  •  the optical fiber itself,
  •  optical components
  •  opto-electrical components.

Therefore, for the service provider it is extremely important to choose best network components and think on efficient low latency transport strategy.

Latency is a critical requirement for the wide set of applications mentioned above. Even a latency of 250 ns can make the difference between winning and losing a trade. Latency reduction is very important in the financial sector, for example in the stock exchange market, where 10 ms of latency could potentially result in a 10% drop in revenues for a company. No matter how fast you can execute a trade command, if your market data is delayed relative to competing traders, you will not achieve the expected fill rates and your revenue will drop. Low latency trading has moved from executing a transaction within several seconds to milliseconds, microseconds, and now even to nanoseconds.

LATENCY SOURCES IN OPTICAL NETWORKS

Latency is the time delay experienced in a system, and it describes how long it takes for data to get from the transmission side to the receiver side. In a fiber optical communication system it is essentially the length of optical fiber divided by the speed of light in the fiber core, supplemented by the delay induced by optical and electro-optical elements plus any extra processing time required by the system, also called overhead. Signal processing delay can be reduced by using parallel processing based on large scale integration CMOS technologies.

Added to the latency due to propagation in the fiber, there are other path building blocks that affect the total data transport time. These elements include

  •   opto-electrical conversion,
  •   switching and routing,
  •   signal regeneration,
  •   amplification,
  •   chromatic dispersion (CD) compensation,
  •   polarization mode dispersion (PMD) compensation,
  •   data packing, digital signal processing (DSP),
  •   protocols and addition forward error correction (FEC)

Data transmission speed over optical metro network must be carefully chosen. If we upgrade 2.5 Gbit/s link to 10 Gbit/s link then CD compensation or amplification may be necessary, but it also will increase overall latency. For optical lines with transmission speed more than 10 Gbit/s (e.g. 40 Gbit/s) a need for coherent detection arises. In coherent detection systems CD can be electrically compensated using DSP which also adds latency. Therefore, some companies avoid using coherent detection for their low-latency network solutions.

From the standpoint of personal communications, effective dialogue requires latency < 200 ms, an echo needs > 80 ms to be distinguished from its source, remote music lessons require latency < 20 ms, and remote performance < 5 ms. It has been reported that in virtual environments, human beings can detect latencies as low as 10 to 20 ms. In trading industry or in telehealth every microsecond matters. But in all cases, the lower latency we can get the better system performance will be.

Single mode optical fiber

In standard single-mode fiber, a major part of the light signal travels in the core while a small amount travels in the cladding. Optical fiber with a lower group index of refraction provides an advantage in low latency applications. It is useful to use the parameter “effective group index of refraction” (neff) instead of the “index of refraction” (n), which only defines the refractive index of the core or cladding of a single mode fiber. The neff parameter is a weighted average of all the indices of refraction encountered by light as it travels within the fiber, and therefore it represents the actual behavior of light within a given fiber. The impact of profile shape on neff, comparing its values for several Corning single mode fiber (SMF) products with different refractive index profiles, is illustrated in Fig. 2.

 

Figure 2. Effective group index of refraction for various commercially available Corning single mode fiber types.

It is known that the speed of light in vacuum is 299,792.458 km/s. Assuming ideal propagation at the speed of light in vacuum, the unavoidable latency can be calculated as in Equation (1):

    t(vacuum) = 1 km / 299792.458 km/s ≈ 3.34 µs/km        (1)

However, due to the fiber’s refractive index, light travels more slowly in optical fiber than in vacuum. In standard single mode fiber, defined by the ITU-T G.652 recommendation, the effective group index of refraction (neff) can be, for example, 1.4676 for transmission at 1310 nm and 1.4682 at 1550 nm. Knowing neff, we can express the speed of light in the selected optical fiber at 1310 and 1550 nm, see Equations (2) and (3):

    v(1310 nm) = c / neff = 299792.458 / 1.4676 ≈ 204,274 km/s        (2)
    v(1550 nm) = c / neff = 299792.458 / 1.4682 ≈ 204,191 km/s        (3)

Knowing the speed of light in the optical fiber at different wavelengths (see Equations (2) and (3)), the delay introduced by 1 km of optical fiber can be calculated as follows:

    t(1310 nm) = 1 / v(1310 nm) ≈ 4.8954 µs/km        (4)
    t(1550 nm) = 1 / v(1550 nm) ≈ 4.8974 µs/km        (5)

As one can see from Equations (4) and (5), the propagation delay of an optical signal is affected not only by the fiber type (with its certain neff) but also by the wavelength used for data transmission over the fiber optical network. The optical signal delay in single mode optical fiber is about 4.9 μs per km. This value is practically the lower limit of latency achievable for 1 km of fiber, if it were possible to remove all other sources of latency caused by other elements and data processing overhead.
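The per-kilometre delays in Equations (4) and (5) can be reproduced directly from neff; the function name is illustrative.

```python
# One-way propagation delay per km of fiber with effective group index n_eff.

C_VACUUM_KM_S = 299_792.458

def fiber_delay_us_per_km(n_eff):
    return n_eff / C_VACUUM_KM_S * 1e6

print(round(fiber_delay_us_per_km(1.0), 3))     # vacuum: 3.336 us/km
print(round(fiber_delay_us_per_km(1.4676), 4))  # G.652 at 1310 nm: 4.8954
print(round(fiber_delay_us_per_km(1.4682), 4))  # G.652 at 1550 nm: 4.8974
```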

Photonic crystal fibers (PCFs) can have a very low effective refractive index and can propagate light much faster than SMFs. For example, hollow core fiber (HCF) may provide up to 31% lower latency relative to traditional fiber optics. But attenuation in HCF is much higher than in already deployed standard single mode fibers (for SMF α = 0.2 dB/km, but for HCF α = 3.3 dB/km at 1550 nm). However, attenuation as low as 1.2 dB/km has been reported in hollow-core photonic crystal fiber.

Chromatic Dispersion Compensation

Chromatic dispersion (CD) occurs because different wavelengths of light travel at different speeds in optical fiber. CD can be compensated by dispersion compensation module (DCM) where dispersion compensating fiber (DCF) or fiber Bragg grating (FBG) is employed.

A typical long reach metro access fiber optical network will require DCF of approximately 15 to 25% of the overall fiber length. This means that the use of DCF adds about 15 to 25% to the latency of the fiber. For example, a 100 km long optical metro network using standard single mode fiber (SMF) can accumulate chromatic dispersion of about 1800 ps/nm at 1550 nm wavelength. For full CD compensation, a DCF fiber spool of about 22.5 km with a large negative dispersion value (typically −80 ps/nm/km) is needed. If we assume that the propagation speed in DCF is close to that in SMF, then the total latency of a 100 km long optical metro network with CD compensation using a DCF DCM is about 0.6 ms.
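The 0.6 ms figure can be reproduced with a back-of-the-envelope calculation, assuming roughly 4.9 µs/km in both SMF and DCF and the 22.5 km of DCF per 100 km of SMF from the example above; names are illustrative.

```python
# Total link latency including the DCF spool used for CD compensation.

DELAY_US_PER_KM = 4.9

def link_latency_us(smf_km, dcf_ratio=0.225):
    return smf_km * (1 + dcf_ratio) * DELAY_US_PER_KM

print(round(link_latency_us(100)))  # ~600 us, i.e. about 0.6 ms
```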

A solution to avoid the need for chromatic dispersion compensation, or to reduce the length of the necessary DCF fiber, is to use optical fiber with a lower CD coefficient. For example, non-zero dispersion shifted fibers (NZ-DSFs) were developed to simplify CD compensation while making a wide band of channels available. NZ-DSF fiber parameters are defined in the ITU-T G.655 recommendation. Today NZ-DSF fibers are optimized for regional and metropolitan high speed optical networks operating in the C- and L-bands. The C-band is defined as the wavelength range from 1530 to 1565 nm, and the L-band as 1565 to 1625 nm.

For commercially available NZ-DSF fiber, the chromatic dispersion coefficient ranges from 2.6 to 6.0 ps/nm/km in the C-band and from 4.0 to 8.9 ps/nm/km in the L-band. In the 1550 nm region, the typical CD coefficient for this fiber type is about 4 ps/nm/km, roughly four times lower than for standard G.652 SMF. Since these fibers have lower dispersion than conventional single mode fiber, simpler compensation modules are used that add only up to 5% to the transmission time for NZ-DSF. This enables lower latency than transmission over SMF. Another solution to minimize the need for extra CD compensation, or reduce it to the necessary minimum, is dispersion shifted fiber (DSF), specified in the ITU-T G.653 recommendation. This fiber is optimized for use in the 1550 nm region and has no chromatic dispersion at 1550 nm wavelength. However, it is limited to single-wavelength operation due to non-linear four wave mixing (FWM), which causes optical signal distortions.

If CD is unavoidable, another technology for compensating accumulated CD is the deployment of fiber Bragg gratings (FBG). A DCM with FBG can compensate several hundred kilometers of CD without any significant latency penalty, effectively removing all the additional latency that DCF-based networks add. In other words, a lot of valuable microseconds can be gained by migrating from DCF DCM to FBG DCM technology in an optical metro network. The typical fiber length in an FBG used for dispersion compensation is about 10 cm; therefore an FBG-based DCM normally introduces only 5 to 50 ns of delay in the fiber optical transmission line.

One solution that avoids the additional delay of a DCF DCM is coherent detection, where complex transmission formats such as quadrature phase-shift keying (QPSK) can be used. However, it can be a poor choice from a latency perspective because of the added digital signal processing (DSP) time it requires. This additional delay can be up to 1 μs.

Optical amplifiers

Another key optical component which adds time delay to an optical transmission line is the optical amplifier. Erbium doped fiber amplifiers (EDFA) are widely used in fiber optical access and long haul networks. An EDFA can amplify signals over a band of almost 30 to 35 nm extending from 1530 to 1565 nm, known as the C-band amplifier, and from 1565 to 1605 nm, known as the L-band EDFA. The great advantage of EDFAs is that they are capable of amplifying many WDM channels simultaneously, with no need to amplify each individual channel separately. EDFAs also remove the requirement for optical-electrical-optical (OEO) conversion, which is highly beneficial from a low-latency perspective. However, it must be taken into account that an EDFA contains a few tens of metres of erbium-doped optical fiber (Er3+), which adds extra latency, although this amount is small compared with other latency contributors. A typical EDFA contains up to 30 m of erbium doped fiber; these 30 m of additional fiber add about 147 ns (about 0.15 μs) of delay.
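The 147 ns figure for the doped fiber inside an EDFA is easy to verify, assuming a group index of about 1.468 (similar to standard fiber); the function name is illustrative.

```python
# Delay contributed by a short length of fiber, e.g. the erbium-doped
# fiber inside an EDFA.

C_VACUUM_M_S = 299_792_458

def fiber_delay_ns(length_m, n_eff=1.468):
    return length_m * n_eff / C_VACUUM_M_S * 1e9

print(round(fiber_delay_ns(30)))  # ~147 ns
```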

A solution to avoid or reduce extra latency, if amplification is necessary, is to use a Raman amplifier instead of an EDFA or together (in tandem) with an EDFA. This combination provides maximal signal amplification with minimal latency. Raman amplifiers use a different optical characteristic to amplify the optical signal: Raman amplification is realized by stimulated Raman scattering. The Raman gain spectrum is rather broad, and the peak of the gain is centered about 13 THz (100 nm in wavelength) below the frequency of the pump signal used. By pumping a fiber with a high-power pump laser, we can provide gain to other signals, with peak gain obtained 13 THz below the pump frequency. For example, pumps around 1460–1480 nm provide Raman gain in the 1550–1600 nm window, which partly covers the C and L bands. Accordingly, we can use the Raman effect to provide gain at any wavelength we want to amplify. The main benefit with regard to latency is that a Raman amplifier pumps the optical signal without adding fiber to the signal path, so we can assume that a Raman amplifier adds no latency.

Transponders and opto-electrical conversion

Any transmission-line component that performs opto-electrical conversion increases total latency. Key elements performing opto-electrical conversion are transponders and muxponders. A transponder converts the incoming signal from the client to a signal suitable for transmission over the WDM link, and an incoming signal from the WDM link to a suitable signal toward the client. A muxponder does basically the same, except that it can additionally multiplex lower-rate signals into a higher-rate carrier (e.g. 10 Gbit/s services onto a 40 Gbit/s transport), thereby saving valuable wavelengths in the optical metro network.

The latency of both transponders and muxponders varies depending on design, functionality, and other parameters. Muxponders typically operate in the 5 to 10 μs range per unit. More complex transponders include additional functionality such as in-band management channels; this complexity forces the unit design, and hence latency, to be very similar to a muxponder's, in the 5 to 10 μs range. If additional FEC is used in these elements, the latency can be higher. Several telecommunications equipment vendors offer simpler, lower-cost transponders that do not have FEC or in-band management channels, or in which these options have been reworked to lower the device delay. Such modules can operate at much lower latencies, from 4 ns to 30 ns. Some vendors even claim transponders with 2 ns latency, which is equivalent to adding about half a meter of SMF to the fiber optical path.

Optical signal regeneration

For low-latency optical metro networks it is very important to avoid any regeneration and to focus on keeping the signal in the optical domain once it has entered the fiber. An optical-electrical-optical (OEO) conversion takes about 100 μs, depending on how much processing is required in the electrical domain. Ideally a carrier would like to avoid the use of FEC or full 3R (reamplification, reshaping, retiming) regeneration; 3R regeneration needs OEO conversion, which adds unnecessary time delay. The need for optical signal regeneration is determined by the transmission data rate involved, whether dispersion compensation or amplification is required, and how many nodes the signal must pass through along the fiber optical path.

Forward error correction and digital signal processing

It is necessary to minimize the amount of electrical processing at both ends of the fiber optical connection. FEC, if used (for example, in transponders), will increase the latency due to the extra processing time. This latency can range from about 15 to 150 μs, depending on the algorithm used, the amount of overhead, the coding gain, the processing time and other parameters.
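The overhead side of the FEC trade-off is easy to quantify for the Reed-Solomon RS(n, k) codes used in G.975/G.709, where n − k parity bytes are appended and up to (n − k)/2 byte errors per codeword can be corrected:

```python
# Overhead and correction capability of a Reed-Solomon RS(n, k) code.
def rs_stats(n: int, k: int):
    overhead = (n - k) / k   # bit-rate expansion relative to the payload
    t = (n - k) // 2         # correctable byte errors per codeword
    return overhead, t

overhead, t = rs_stats(255, 239)  # the RS(255, 239) of G.975/G.709
print(f"overhead ≈ {overhead:.3f}, corrects up to {t} byte errors")
# → overhead ≈ 0.067, corrects up to 8 byte errors
```

The roughly 7% overhead buys about 6 dB of coding gain, at the cost of the encoding/decoding latency discussed above.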

Digital signal processing (DSP) can be used to deal with chromatic dispersion (CD) and polarization mode dispersion (PMD) and to remove critical optical impairments. But it must be taken into account that DSP adds extra latency to the path; as mentioned before, this additional delay can be up to 1 μs.

Latency regarding OSI Levels

Latency is added not only by the physical medium but also by the data processing implemented in the electronic part of the fiber optical metro network (basically the transmitter and receiver). All modern networks are based upon the Open Systems Interconnection (OSI) reference model, which consists of a 7-layer protocol stack; see Fig. 3.

 

Figure 3. OSI reference model illustrating (a) the total latency increase over each layer and (b) the data path passing through all protocol layers in transmitter and receiver.

SUMMARY

Latency sources in optical metro network and typical induced time delay values.

 

 

Introduction

Automatic Protection Switching (APS) is one of the most valuable features of SONET and SDH networks. Networks with APS quickly react to failures, minimizing lost traffic, which minimizes lost revenue to service providers.  The network is said to be “self healing.”  This application note covers how to use Sunrise Telecom SONET and SDH analyzers to measure the amount of time it takes for a network to complete an automatic protection switchover.  This is important since ANSI T1.105.1 and ITU-T G.841 require that a protection switchover occur within 50 ms.  To understand the APS measurement, a brief review is first given.  This is followed by an explanation of the basis behind the APS measurement.  The final section covers how to operate your Sunrise Telecom SONET and SDH equipment to make an APS time measurement.

What Is APS?

Automatic Protection Switching keeps the network working even if a network element or link fails.  The Network Elements (NEs) in a SONET/SDH network constantly monitor the health of the network.  When a failure is detected by one or more network elements, the network proceeds through a coordinated predefined sequence of steps to transfer (or switchover) live traffic to the backup facility (also called “protection” facility).  This is done very quickly to minimize lost traffic.  Traffic remains on the protection facility until the primary facility (working facility) fault is cleared, at which time the traffic may revert to the working facility.

In a SONET or SDH network, the transmission is protected on optical sections from the Headend (the point at which the Line/Multiplexer Section Overhead is inserted) to the Tailend (the point where the Line/Multiplexer Section Overhead is terminated).

The K1 and K2 Line/Multiplexer Section Overhead bytes carry an Automatic Protection Switching protocol used to coordinate protection switching between the headend and the tailend.

The protocol for the APS channel is summarized in Figure 1.  The 16 bits within the APS channel contain information on  the APS configuration, detection of network failure, APS commands, and revert commands.  When a network failure is detected, the Line/Multiplexer Section Terminating Equipment communicates and coordinates the protection switchover by changing certain bits within the K1 & K2 bytes.

During the protection switchover, the network elements signal an APS by sending AIS throughout the network.  AIS is  also present at the ADM drop points.  The AIS condition may come and go as the network elements progress through their algorithm to switch traffic to the protection circuit.

AIS signals an APS event. But what causes the network to initiate an automatic protection switchover? The three most common triggers are:

  • Detection of AIS (AIS is used both to initiate and to signal an APS event)
  • Detection of excessive B2 errors
  • Initiation through a network management terminal

According to GR-253 and G.841, a network element is required to detect AIS and initiate an APS within 10 ms.  B2 errors should be detected according to a defined algorithm, and more than 10 ms is allowed.  This means that the entire time for both failure detection and traffic restoration may be 60 ms or more (10 ms or more detect time plus 50 ms switch time).

Protection Architectures

There are two types of protection for networks with APS:

  • Linear Protection, based on ANSI T1.105.1 and ITU-T G.783, for point-to-point (end-to-end) connections.
  • Ring Protection, based on ANSI T1.105.1 and ITU-T G.841, for ring structures (ring structures can also be found with two types of protection mechanisms – Unidirectional and Bidirectional rings).

Refer to Figures 2-4 for APS architectures and unidirectional vs. bidirectional rings.

Protection Switching Schemes

The two most common schemes are 1+1 protection switching and 1:n protection switching.  In both structures, the K1 byte contains both the switching preemption priorities (in bits 1 to 4), and the channel number of the channel requesting action (in bits 5 to 8).  The K2 byte contains the channel number of the channel that is bridged onto protection (in bits 1 to 4), and the mode type (in bit 5); bits 6 to 8 contain various conditions such as AIS-L, RDI-L, indication of unidirectional or bidirectional switching.
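The K1/K2 field layout described above can be sketched as a pair of decoders. This is an illustrative sketch only, assuming the SONET/SDH convention that bit 1 is the most significant bit; the actual request and status code tables are defined in G.783/G.841 (Tables 1-5):

```python
# Hypothetical K1/K2 byte decoders for the layout described in the text.
def decode_k1(k1: int):
    request = (k1 >> 4) & 0x0F  # bits 1-4: switching/preemption priority
    channel = k1 & 0x0F         # bits 5-8: channel requesting action
    return request, channel

def decode_k2(k2: int):
    bridged = (k2 >> 4) & 0x0F  # bits 1-4: channel bridged onto protection
    mode = (k2 >> 3) & 0x01     # bit 5: architecture/mode type
    status = k2 & 0x07          # bits 6-8: AIS-L, RDI-L, uni/bidirectional
    return bridged, mode, status

print(decode_k1(0xB1))  # → (11, 1): request code 0xB from channel 1
print(decode_k2(0x15))  # → (1, 0, 5): channel 1 bridged, mode 0, status 5
```

Decoding captured K1/K2 bytes this way is essentially what a SONET/SDH analyzer does when it displays APS protocol activity.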

 

1+1

In 1+1 protection switching, there is a protection facility (backup line) for each working facility.  At the headend, the optical signal is bridged permanently (split into two signals) and sent over both the working and the protection facilities simultaneously, producing a working signal and a protection signal that are identical.  At the tailend, both signals are monitored independently for failures.  The receiving equipment selects either the working or the protection signal.  This selection is based on the switch initiation criteria which are either a signal fail (hard failure such as the loss of frame (LOF) within an optical signal), or a signal degrade (soft failure caused by a bit error ratio exceeding some predefined value).

Refer to Figure 5.

Normally, 1+1 protection switching is unidirectional, although if the line terminating equipment at both ends supports bidirectional switching, the unidirectional default can be overridden.  Switching can be either revertive (the flow reverts to the working facility as soon as the failure has been corrected) or nonrevertive (the protection facility is treated as the working facility).

In 1+1 protection architecture, all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes. In 1+1 bidirectional switching, the K2 byte signaling indicates to the headend that a facility has been switched so that it can start to receive on the now active facility.

1:n

In 1:n protection switching, there is one protection facility for several working facilities (the range is from 1 to 14) and all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes.  All switching is revertive, that is, the traffic reverts back to the working facility as soon as the failure has been corrected. Refer to Figure 6.

 

Optical signals are normally sent only over the working facilities, with the protection facility being kept free until a working facility fails.  Let us look at a failure in a bidirectional architecture.  Suppose the tailend detects a failure on working facility 2.  The tailend sends a message in bits 5-8 of the K1 byte to the headend over the protection facility requesting switch action.  The headend can then act directly, or if there is more than one problem, the headend decides which is top priority.  On a decision to act on the problem on working facility 2, the headend carries out the following steps:

  1. Bridges working facility 2 at the headend to the protection facility.
  2. Returns a message on the K2 byte, indicating the channel number of the traffic on the protection channel, to the tailend.
  3. Sends a Reverse Request to the tailend via the K1 byte to initiate a bidirectional switch.

On receipt of this message, the tailend carries out the following steps:

    1. Switches to the protection facility to receive
    2. Bridges working facility 2 to the protection facility to transmit back

Now transmission is carried out over the new working facility.

Switch Action Comments

In unidirectional architecture, the tailend makes the decision on priorities. In bidirectional architecture, the headend makes the decision on priorities. If there is more than one failure at a time, a priority hierarchy determines which working facility will be backed up by the protection facility. The priorities that are indicated in bits 1-4 of the K1 byte are as follows:

  1. Lockout
  2. Forced switch (to protection channel regardless of its state) for span or ring; applies only to 1:n switching
  3. Signal fails (high then low priority) for span or ring
  4. Signal degrades (high then low priority) for span or ring; applies only to 1:n switching
  5. Manual switch (to an unoccupied fault-free protection channel) for span or ring; applies only to 1+1 LTE
  6. Wait-to-restore
  7. Exerciser for span or ring (may not apply to some linear APS systems)
  8. Reverse request (only for bidirectional)
  9. Do not revert (only 1+1 LTE provisioned for nonrevertive switching transmits Do Not Revert)
  10. No request

Depending on the protection architecture, K1 and K2 bytes can be decoded as shown in Tables 1-5.

Linear Protection (ITU-T G.783)

Ring Protection (ITU-T G.841)

Collected from the Sunrise Telecom APS application note.

  GENERIC FRAMING PROCEDURE

The Generic Framing Procedure (GFP), defined in ITU-T G.7041, is a mechanism for mapping constant and variable bit rate data into synchronous SDH/SONET envelopes. GFP supports many types of protocols, including those used in local area networks (LAN) and storage area networks (SAN). In all cases GFP adds very low overhead, increasing the efficiency of the optical layer.

Currently, two modes of client signal adaptation are defined for GFP:

  • Frame-Mapped GFP (GFP-F), a layer 2, PDU-oriented adaptation mode. It is optimized for data packet protocols (e.g. Ethernet, PPP, DVB) that are encapsulated into variable-size frames.
  • Transparent GFP (GFP-T), a layer 1, block-code-oriented adaptation mode. It is optimized for protocols using an 8B/10B physical layer (e.g. Fibre Channel, ESCON, 1000BASE-X) that are encapsulated into constant-size frames.

GFP can be seen as a method to deploy metropolitan networks while simultaneously supporting mainframe and server storage protocols.

  Figure: Data packet aggregation using GFP. Packets wait in queues to be mapped onto a TDM channel; at the far end, packets are dropped into a queue again and delivered. GFP frame multiplexing and sub-multiplexing: the figure shows the encapsulation mechanism and the transport of GFP frames in VC containers embedded in the STM frames.

Figure: GFP frame formats and protocols

 

Frame-mapped GFP

In Frame-mapped GFP (GFP-F), one complete client packet is mapped entirely into one GFP frame. Idle packets are not transmitted, resulting in more efficient transport. However, specific mechanisms are required to transport each type of protocol.

Figure: GFP client mapping formats

GFP-F can be used for Ethernet, PPP/IP and HDLC-like protocols where efficiency and flexibility are important. To perform the encapsulation, the complete client packet must first be received; this increases the latency, making GFP-F inappropriate for time-sensitive protocols.

Transparent GFP (GFP-T)

Transparent GFP (GFP-T) is a protocol-independent encapsulation method in which all client code words are decoded and mapped into fixed-length GFP frames. The frames are transmitted immediately, without waiting for the entire client data packet to be received. It is therefore also a layer 1 transport mechanism, because all client characters are moved to the far end regardless of whether they are information, headers, control or any other kind of overhead.

GFP-T can adapt multiple protocols using the same hardware, as long as they are based on 8B/10B line coding. These line codes are transcoded to 64B/65B and then encapsulated into fixed-size GFP-T frames. Everything is transported, including inter-frame gaps, which can carry flow control characters or other additional information.
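The point of the 8B/10B → 64B/65B transcoding step is overhead reduction, which a one-line calculation makes concrete:

```python
# Line-coding overhead: 8B/10B carries 8 data bits in 10 line bits,
# 64B/65B carries 64 data bits in 65 line bits.
def coding_overhead(data_bits: int, line_bits: int) -> float:
    return (line_bits - data_bits) / data_bits

print(f"8B/10B overhead:  {coding_overhead(8, 10):.1%}")   # → 25.0%
print(f"64B/65B overhead: {coding_overhead(64, 65):.1%}")  # → 1.6%
```

Transcoding thus strips most of the client's line-coding expansion before the stream is wrapped in GFP-T frames.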

GFP-T is very good for isochronous or delay-sensitive protocols, and also for Storage Area Network (SAN) protocols such as ESCON or FICON. This is because it is not necessary to process client frames or to wait for the arrival of a complete frame. This advantage is counteracted by lost efficiency, because the source MSPP node still generates traffic when no data is being received from the client.

GFP enables MSPP nodes to offer both TDM and packet-oriented services, managing transmission priorities and discard eligibility. GFP replaces legacy mappings, most of them proprietary. In principle GFP is just an encapsulation procedure, but a robust and standardised one for the transport of packetised data over SDH and OTN alike.

GFP uses an HEC-based delineation technique similar to ATM's; it therefore needs no bit or byte stuffing, and the frame size can easily be set to a constant length.
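The idea behind HEC-based delineation is that a receiver can find frame boundaries by sliding over the byte stream until a candidate header's CRC checks out. The sketch below illustrates the principle only: the CRC-16 parameters and 4-byte header layout are simplified stand-ins, not the exact cHEC procedure of G.7041 (which also scrambles the core header):

```python
# Illustrative HEC-based frame delineation (ATM/GFP-style hunting).
def crc16(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
    """Bitwise CRC-16 with an illustrative CCITT polynomial."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def find_frame(stream: bytes) -> int:
    """Offset of the first 4-byte header whose HEC validates, else -1."""
    for i in range(len(stream) - 3):
        pli = stream[i:i + 2]                      # payload length field
        hec = stream[i + 2] << 8 | stream[i + 3]   # received check bytes
        if crc16(pli) == hec:
            return i
    return -1

# Three garbage bytes followed by a self-consistent header.
pli = bytes([0x00, 0x10])
header = pli + crc16(pli).to_bytes(2, "big")
stream = b"\xff\xff\xff" + header
print(find_frame(stream))  # → 3
```

Because delineation relies on the header checking against its own HEC, no flag bytes are needed and therefore no bit/byte stuffing, which is the property the text highlights.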

When using GFP-F mode there is an optional GFP extension header (protected by its own eHEC) to be used by each specific protocol for information such as source/destination address, port numbers, class of service, etc. Among the EXI types, 'linear' supports sub-multiplexing onto a single path, while 'Channel ID' (CID) enables sub-multiplexing over one VC channel in GFP-F mode.

 

CONCATENATION

Concatenation is the process of summing the bandwidth of X containers (C-i) into a larger container, providing a bandwidth X times bigger than that of a single C-i. It is well suited to the transport of big payloads requiring a container greater than VC-4, but it is also possible to concatenate low-capacity containers, such as VC-11, VC-12, or VC-2.

Figure An example of contiguous concatenation and virtual concatenation. Contiguous concatenation requires support by all the nodes. Virtual concatenation allocates bandwidth more efficiently, and can be supported by legacy installations.

There are two concatenation methods:

  1. Contiguous concatenation: creates big containers that cannot be split into smaller pieces during transmission. For this, each NE must have a concatenation functionality.
  2. Virtual concatenation: transports the individual VCs and aggregates them at the end point of the transmission path. For this, concatenation functionality is only needed at the path termination equipment.

Contiguous Concatenation of VC-4

A VC-4-Xc provides a payload area of X containers of C-4 type. It uses the same HO-POH used in VC-4, with identical functionality. This structure can be transported in an STM-n frame (where n = X), although other combinations are also possible; for instance, a VC-4-4c can be transported in STM-16 and STM-64 frames. Concatenation guarantees the integrity of the bit sequence, because the whole container is transported as a unit across the entire network.

An AU-4-Xc pointer, just like any other AU pointer, indicates the position of J1, the first byte of the VC-4-Xc container. The first pointer takes the same value as an ordinary AU-4 pointer, while the remaining pointer bytes take the fixed concatenation-indication value 1001SS11. Pointer justification is carried out the same way for all the concatenated AU-4s, using X × 3 stuffing bytes.

However, contiguous concatenation is today more theory than practice, since more bandwidth-efficient alternatives, such as virtual concatenation, are gaining importance.

 

Virtual Concatenation

Connectionless and packet-oriented technologies, such as IP or Ethernet, do not match well the bandwidth granularity provided by contiguous concatenation. For example, to transport 1 Gbit/s it would be necessary to allocate a VC-4-16c container, which has a capacity of about 2.4 Gbit/s – more than double the bandwidth needed.

 

Virtual concatenation (VCAT) is a solution that allows granular increments of bandwidth in single VC units. At the MSPP source node, VCAT creates a continuous payload equivalent to X times the VC-n unit. The set of X containers is known as a virtual container group (VCG), and each individual VC is a member of the VCG. All the VC members are sent to the MSPP destination node independently, using any available path if necessary. At the destination, all the VC-n are reordered, according to the indications provided by the H4 or K4 byte, and finally delivered to the client.
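The efficiency gain for the 1 Gbit/s Ethernet example can be quantified directly. Using the C-4 payload rate of 149.76 Mbit/s, a VC-4-7v group is the natural VCAT fit (the comparison is a sketch; framing overheads are ignored):

```python
# Bandwidth efficiency of carrying 1 Gbit/s: contiguous VC-4-16c
# versus virtually concatenated VC-4-7v. C-4 payload ≈ 149.76 Mbit/s.
VC4_MBPS = 149.76

def efficiency(service_mbps: float, members: int) -> float:
    """Fraction of the allocated payload actually used by the service."""
    return service_mbps / (members * VC4_MBPS)

print(f"VC-4-16c: {efficiency(1000, 16):.1%}")  # → 41.7%
print(f"VC-4-7v:  {efficiency(1000, 7):.1%}")   # → 95.4%
```

This is the "granular increments" advantage in numbers: seven members instead of sixteen, with more than double the payload efficiency.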

Differential delays between VCG members are likely, because they are transported individually and may have travelled different paths with different latencies. The destination MSPP must therefore compensate for the differential delays before reassembling the payload and delivering the service.

Virtual concatenation is required only at the edge nodes, and is compatible with legacy SDH networks even though those networks do not themselves support concatenation. To get the full benefit, individual containers should be transported by different routes across the network, so that if a link or a node fails the connection is only partially affected. This is also a way of providing a resilient service.

 Higher Order Virtual Concatenation

Higher Order Virtual Concatenation (HO-VCAT) uses X times VC-3 or VC-4 containers (VC-3/4-Xv, X = 1 … 256), providing a payload capacity of X times 48384 or 149760 kbit/s respectively.

 

The virtually concatenated container VC-3/4-Xv is mapped into independent VC-3 or VC-4 envelopes that are transported individually through the network. Differential delays can occur between the individual VCs; these obviously have to be compensated for when the original payload is reassembled.

A multiframe mechanism has been implemented in H4 to compensate for differential delays of up to 256 ms:

  • Every individual VC has an H4 multiframe indicator (MFI) that denotes the virtual container group it belongs to
  • Each VC also traces its position X within the virtual container group using the SEQ number, which is also carried in H4; the SEQ is repeated every 16 frames

The H4 POH byte is thus used for the virtual-concatenation-specific sequence and multiframe indication.

 Lower Order Virtual Concatenation

Lower Order Virtual Concatenation (LO-VCAT) uses X times VC-11, VC-12, or VC-2 containers (VC-11/12/2-Xv, X = 1 … 64).

A VCG built with VC-11, VC-12 or VC-2 members provides a payload of X containers C-11, C-12 or C-2, that is, a capacity of X times 1600, 2176 or 6784 kbit/s respectively. VCG members are transported individually through the network, so differential delays can occur between the individual components of a VCG; these are compensated for at the destination node before the original continuous payload is reassembled.

A multiframe mechanism has been implemented in bit 2 of K4. It includes a sequence number (SQ) and a multiframe indicator (MFI), which together enable the reordering of the VCG members. The MSPP destination node waits until the last member arrives and can then compensate for delays of up to 256 ms. It is important to note that K4 is itself received once per 500 μs multiframe, and the whole multiframe sequence repeats every 512 ms.

Testing VCAT

When installing or maintaining VCAT it is important to carry out a number of tests to verify not only the performance of the whole virtual concatenation, but also every single member of the VCG. To reassemble the original client data, all the members of the VCG must arrive at the far end, with the delay between the first and the last member of a VCG kept below 256 ms. A missing member prevents the reconstruction of the payload and, if the problem persists, causes a fault that would call for the reconfiguration of the VCAT pipe. Additionally, jitter and wander on individual paths can cause anomalies (errors) in the transport service.

BER, latency, and event tests should verify the capacity of the network to perform the service. The VCAT granularity has to be checked as well, by adding/removing members. To verify the reassembly operation it is necessary to use a tester with the capability to insert differential delays into individual members of a VCG.

Single-mode fibre selection for Optical Communication System

 

Collected from an article written by Mr. Joe Botha.

Looking for a single-mode (SM) fibre to light up your multi-terabit-per-second system? Probably not, but let's say you were – the smart money is on your well-intended fibre sales rep instinctively flogging you ITU-T G.652D fibre. Commonly referred to as standard SM fibre and also known as Non-Dispersion-Shifted Fibre (NDSF), it is the oldest and most widely deployed fibre. Not a great choice, right? You bet. So for now, let's resist the notion that you can do whatever you want using standard SM fibre. A variety of SM optical fibres with carefully optimised characteristics are available commercially: ITU-T G.652, 653, 654, 655, 656 or 657 compliant.

Designs of SM fibre have evolved over the decades, and present-day options would have us deploy G.652D, G.655 or G.656 compliant fibres. Note that G.657A is essentially a more expensive version of G.652D with superior bending-loss performance, and should you start feeling a little benevolent towards deploying it on a longish haul – I can immediately confirm that this allows for a glimpse into the workings of silliness. Dispersion-Shifted Fibre (DSF) in accordance with G.653 has no chromatic dispersion at 1550 nm; however, it is limited to single-wavelength operation due to non-linear four-wave mixing. G.654 compliant fibres were developed specifically for undersea un-regenerated systems, and since our focus is directed toward terrestrial applications – let's leave it at that.

In the above context, the plan is to briefly weigh up G.652D, G.655 and G.656 compliant fibres against three parameters we calculate (before installation) and measure (after installation). I must just point out that the fibre coefficients used are what one would expect from the not-too-shabby brands available today.

Attenuation

         G.652D compliant   G.655 compliant   G.656 compliant
λ (nm)   ATTN (dB/km)       ATTN (dB/km)      ATTN (dB/km)
1310     0.30               –                 –
1550     0.20               0.18              0.20
1625     0.23               0.20              0.22

Attenuation is the reduction or loss of optical power as light travels through an optical fibre, and is measured in decibels per kilometer (dB/km). G.652D offers respectable attenuation coefficients when compared with G.655 and G.656. It should be remembered, however, that even a meagre 0.01 dB/km attenuation improvement would reduce a 100 km loss budget by a full dB – but let's not quibble. No attenuation coefficients for G.655 and G.656 at 1310 nm? It was not, as you may immediately assume, an oversight. Both G.655 and G.656 are optimized to support long-haul systems and therefore could not care less about running at 1310 nm. A cut-off wavelength is the minimum wavelength at which a particular fibre will support SM transmission. At ≤ 1260 nm, G.652D has the lowest cut-off wavelength – with the cut-off wavelengths for G.655 and G.656 sitting at ≤ 1480 nm and ≤ 1450 nm respectively – which explains why we have no attenuation coefficient for them at 1310 nm.
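The loss-budget arithmetic behind the "0.01 dB/km over 100 km" remark is just coefficient times distance:

```python
# Span loss from an attenuation coefficient and a span length.
def span_loss_db(length_km: float, attn_db_per_km: float) -> float:
    return length_km * attn_db_per_km

print(f"{span_loss_db(100, 0.20):.1f} dB")  # G.652D at 1550 nm → 20.0 dB
print(f"{span_loss_db(100, 0.18):.1f} dB")  # G.655 at 1550 nm  → 18.0 dB
```

A 0.02 dB/km difference is thus a full 2 dB over a 100 km span, which can decide whether an amplifier site is needed.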

PMD

G.652D compliant   G.655 compliant   G.656 compliant
PMD (ps/√km)       PMD (ps/√km)      PMD (ps/√km)
≤ 0.06             ≤ 0.04            ≤ 0.04

Polarization-mode dispersion (PMD) is an unwanted effect caused by asymmetrical properties in an optical fibre that spread the optical pulse of a signal. Slight asymmetry in an optical fibre causes the polarized modes of the light pulse to travel at marginally different speeds, distorting the signal; it is reported in ps/√km, or "ps per root km". Oddly enough, G.652 and co all possess decent-looking PMD coefficients. Now then, popping a 40-Gbps laser onto my fibre up against an ultra-low 0.04 ps/√km, my calculator reveals that the PMD-admissible fibre length is about 3,900 km, and even at 0.1 ps/√km a distance of 625 km is achievable.
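Those reach figures can be reproduced with the common 10%-of-bit-period rule of thumb for tolerable differential group delay (an assumption; actual transceiver tolerances vary), remembering that accumulated PMD grows with the square root of length:

```python
# PMD-limited reach: max tolerable DGD ≈ 10% of the bit period,
# and DGD = coefficient * sqrt(length), so length = (DGD/coeff)^2.
def pmd_reach_km(bitrate_gbps: float, pmd_ps_sqrt_km: float) -> float:
    max_dgd_ps = 0.1 * (1000.0 / bitrate_gbps)  # 10% of bit period, in ps
    return (max_dgd_ps / pmd_ps_sqrt_km) ** 2

print(round(pmd_reach_km(40, 0.04)))  # → 3906 (km)
print(round(pmd_reach_km(40, 0.1)))   # → 625 (km)
```

The square-root scaling is why halving the PMD coefficient quadruples the admissible distance.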

So far so good? But wait, there's more. PMD is particularly troublesome for both high data-rate-per-channel and high wavelength-channel-count systems, largely because of its random nature. Fibre manufacturers' PMD specifications are accurate for the fibre itself, but do not incorporate PMD incurred as a result of installation, which in many cases can be orders of magnitude larger. It is hardly surprising that questionable installation practices are likely to cause imperfect fibre symmetry – the obvious implications being incomprehensible data streams and mental anguish. Moreover, PMD, unlike chromatic dispersion (to be discussed next), is also affected by environmental conditions, making it unpredictable and extremely difficult to undo or offset.

 

CD

         G.652D compliant   G.655 compliant   G.656 compliant
λ (nm)   CD (ps/(nm·km))    CD (ps/(nm·km))   CD (ps/(nm·km))
1550     ≤ 18               2~6               6.0~10
1625     ≤ 22               8~11              8.0~13

CD (called chromatic dispersion to emphasise its wavelength-dependent nature) has zip-zero to do with the loss of light. It occurs because different wavelengths of light travel at different speeds. Thus, when the allowable CD is exceeded, light pulses representing a bit-stream will be rendered illegible. It is expressed in ps/(nm·km). At 2.5 Gbps CD is not an issue – however, lower data rates are seldom desirable. At 10 Gbps it is a big issue, and the issue gets even bigger at 40 Gbps.
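Accumulated dispersion over a span is simply coefficient times length; whether the result is "allowable" depends on the transceiver's CD tolerance, which is not specified here and varies by modulation format:

```python
# Accumulated chromatic dispersion over a span, in ps/nm.
def accumulated_cd(length_km: float, cd_ps_nm_km: float) -> float:
    return length_km * cd_ps_nm_km

print(accumulated_cd(100, 18))  # G.652D worst case at 1550 nm → 1800 ps/nm
print(accumulated_cd(100, 4))   # mid-range G.655 at 1550 nm   → 400 ps/nm
```

Over the same 100 km, G.652D accumulates several times the dispersion of an NZ-DSF, which is the gap the next paragraph complains about.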

What's troubling is G.652D's high CD coefficient – which, one glumly has to concede, is very poor next to the competition. G.655 and G.656, variants of non-zero dispersion-shifted fibre (NZ-DSF), comprehensively address G.652D's shortcomings. It should be noted that nowadays some optical fibre manufacturers don't bother distinguishing between G.655 and G.656, referring to their offerings as G.655/6 compliant.

On the face of it, one might suggest that the answer to our CD problem is to send light along an optical fibre at a wavelength where the CD is zero (i.e. G.653). The result? It turns out that this approach creates more problems than it is likely to solve – by unacceptably amplifying non-linear four-wave mixing and limiting the fibre to single-wavelength operation – in other words, no DWDM. That, in fact, is why CD should not be completely lampooned. Research revealed that the fibre-friendly CD value lies in the range of 6-11 ps/(nm·km). Therefore, and particularly for high-capacity transport, the best-suited fibre is one in which dispersion is kept within a tight range, being neither too high nor too low.

NZ-DSFs are available in both positive (+D) and negative (-D) varieties. Using NZ-DSF (-D), a reverse behaviour of the velocity per wavelength is created, and therefore the effect of +CD can be cancelled out. I almost forgot to mention, by the way, that short wavelengths travel faster than long ones with +CD, and longer wavelengths travel faster than short ones with -CD. New sophisticated modulation techniques such as dual-polarized quadrature phase-shift keying (DP-QPSK) with coherent detection yield high-quality CD compensation. However, because of the added signal-processing time they require (versus simple on-off keying), this can be a poor choice from a latency perspective.

WDM multiplies capacity

The use of Dense Wavelength Division Multiplexing (DWDM) technology and 40-Gbps (and higher) transmission rates can push the information-carrying capacity of a single fibre to well over a terabit per second. One example is EASSy's (a 4-fibre submarine cable serving sub-Saharan Africa) 4.72-Tbps capacity. Now then, should my maffs prove to be correct, 118 x 40-Gbps lasers (popped onto only 4 fibres!) should give us an aggregate capacity of 4.72 Tbps.
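The maffs check out:

```python
# Checking the EASSy arithmetic: 118 wavelengths at 40 Gbit/s each.
lasers, rate_gbps = 118, 40
total_tbps = lasers * rate_gbps / 1000
print(total_tbps)  # → 4.72 (Tbit/s)
```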

Coarse Wavelength Division Multiplexing (CWDM) is a WDM technology that uses 4, 8, 16 or 18 wavelengths for transmission. CWDM is an economically sensible option, often used for short-haul applications on G.652D where signal amplification is not necessary. CWDM's large 20 nm channel spacing allows the use of cheaper, less powerful lasers that do not require cooling.

One of the most important considerations in the fibre selection process is the fact that optical signals may need to be amplified along a route. Thanks in no small part (get the picture?) to CWDM's large channel spacing – typically spanning several spectral bands (1270 nm to 1610 nm) – its signals cannot be amplified using Erbium-Doped Fibre Amplifiers (EDFAs). You see, EDFAs run only in the C and L bands (1520 nm to 1625 nm). Whereas CWDM breaks the optical spectrum up into large chunks, DWDM slices it up finely, cramming 4, 8, 16, 40, 80, or 160 wavelengths (on 2 fibres) into only the C and L bands (1520 nm to 1625 nm) – perfect for the use of EDFAs. Each wavelength can without any obvious effort support a 40-Gbps laser, and on top of this, 100-Gbps lasers are champing at the bit to go mainstream.
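Using the grid figures quoted above (20 nm spacing over 1270-1610 nm; the official ITU-T G.694.2 grid is offset slightly, at 1271-1611 nm), a quick enumeration shows how few CWDM channels even fall inside the EDFA window:

```python
# Which CWDM channels land inside the EDFA operating window (1520-1625 nm)?
cwdm = range(1270, 1611, 20)  # 18-channel grid as described in the text
in_edfa_band = [wl for wl in cwdm if 1520 <= wl <= 1625]
print(in_edfa_band)  # → [1530, 1550, 1570, 1590, 1610]
```

Only 5 of the 18 channels overlap the EDFA band, which is why amplified CWDM systems are a non-starter and why DWDM confines itself to the C and L bands.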

Making the right choice

On the whole, it is hard not to conclude that the only thing that genuinely separates fibre types for high-bit-rate systems is CD. The three things – the only ones that I can think of – that are good about G.652D are that it is affordable, cool for CWDM and perfect for short-haul environments. Top of the to-do list of infrastructure providers pushing the boundaries of DWDM-enabled ultra-high-capacity transport over short, long or ultra-long-haul networks will, needless to say, be to source G.655/6 compliant fibres. The cross-tab below indicates OK and, oddly enough, NOK for Not-OK.

ITU-T Compliant   10-Gbps CWDM   40-Gbps CWDM   10-Gbps DWDM   40-Gbps DWDM   100-Gbps DWDM
G.652             OK             NOK            NOK            NOK            NOK
G.655/6           OK             OK             OK             OK             OK