“In the analog world the standard test message is the sine wave, followed by the two-tone signal for more rigorous tests. The property being optimized is generally signal-to-noise ratio (SNR). Speech is interesting, but does not lend itself easily to mathematical analysis or measurement.

In the digital world a binary sequence with a known pattern of ‘1’ and ‘0’ is common. It is more common to measure bit error rate (BER) than SNR, and this is simplified by the fact that known binary sequences are easy to generate and reproduce. A common sequence is the pseudo-random binary sequence.”

**********************************************************************************************************************************************************

“A PRBS (Pseudo Random Binary Sequence) is a binary PN (Pseudo-Noise) signal. The sequence of binary 1’s and 0’s exhibits certain randomness and auto-correlation properties. Bit-sequences like PRBS are used for testing transmission lines and transmission equipment because of their randomness properties. Simple bit-sequences are used to test the DC compatibility of transmission lines and transmission equipment.”

**********************************************************************************************************************************************************

A Pseudo-Random Bit Sequence (PRBS) is used to simulate random data for transmission across the link. The different types of PRBS and the suggested data rates for the different PRBS types are described in the ITU-T standards O.150, O.151, O.152 and O.153. In order to properly simulate real traffic, the length of the PRBS can range between 2^9 − 1 and 2^31 − 1 bits. Typically, for higher-bit-rate devices, a longer PRBS pattern is preferable so that the device under test is effectively stressed.

**********************************************************************************************************************************************************

Bit-error measurements are an important means of assessing the performance of digital transmission. It is necessary to specify reproducible test sequences that simulate real traffic as closely as possible. Reproducible test sequences are also a prerequisite for end-to-end measurements. Pseudo-random bit sequences (PRBS) with lengths of 2^n − 1 bits are the most common solution to this problem.

PRBS bit patterns are generated in a linear feedback shift register: a shift register in which the outputs of specific flip-flops are XORed together and fed back to the input of the first flip-flop. A register of length X produces a maximal-length sequence of 2^X − 1 bits.

Example: PRBS generation of the 2^9 − 1 sequence:

 

[Table: PRBS types]

[Table: error types]

 

Note: a common pattern is the pseudo-random binary sequence of order 31 (PRBS31), which is transmitted as an inverted bit stream and uses the generating polynomial:

G(x) = 1 + x^28 + x^31

The advantage of using a PRBS pattern for BER testing is that it is a deterministic signal whose properties on the link resemble those of a random signal, i.e. white noise.

Bit error counting

A mask of the bit errors in the stream can be created by XORing the received bytes, after coalescing them, with the locally generated PRBS31 pattern; calculating the BER is then just a matter of counting the number of bits set in this mask, as sketched below.

 

Typical links are designed for BERs better than 10^−12.

The Bit Error Ratio (BER) is often specified as a performance parameter of a transmission system, which needs to be verified during investigation. Designing an experiment to demonstrate adequate BER performance is not, however, as straightforward as it appears since the number of errors detected over a practical measurement time is generally small. It is, therefore, not sufficient to quote the BER as simply the ratio of the number of errors divided by the number of bits transmitted during the measurement period, instead some knowledge of the statistical nature of the error distribution must first be assumed.

The bit error rate (BER) is the most significant performance parameter of any digital communications system. It is a measure of the probability that any given bit will have been received in error. For example, a standard maximum bit error rate specified for many systems is 10^−9. This means that the receiver is allowed to generate a maximum of 1 error in every 10^9 bits of information transmitted or, putting it another way, the probability that any received bit is in error is 10^−9.

 The BER depends primarily on the signal to noise ratio (SNR) of the received signal which in turn is determined by the transmitted signal power, the attenuation of the link, the link dispersion and the receiver noise. The S/N ratio is generally quoted for analog links while the bit-error-rate (BER) is used for digital links. BER is practically an inverse function of S/N. There must be a minimum power at the receiver to provide an acceptable S/N or BER. As the power increases, the BER or S/N improves until the signal becomes so high it overloads the receiver and receiver performance degrades rapidly.

The formula used to calculate residual BER assumes that errors follow a Poisson distribution:

C = 1 – e^(–nb)

where:

C = degree of confidence required (0.95 = 95% confidence)

n = number of bits examined with no error found

b = upper bound on BER with confidence C (e.g. b = 10^–15)

To determine the number of bits that must be examined error-free (and hence the test time at a given bit rate), the above equation is transposed:

n = –ln(1 – C)/b

 

So, to test for a residual BER of 10^−13 with a 95% confidence limit requires a test pattern of about 3 × 10^13 bits. This equates to only 0.72 hours using an OC-192c/STM-64c payload rather than 55.6 hours using an STS-3c/VC-4 bulk-filled payload (149.76 Mb/s). The graph in the figure plots test time versus residual BER and shows the difference in test time for OC-192c/STM-64c payloads versus an OC-48c/STM-16c payload. The graphs are plotted for different confidence limits and they clearly indicate that the payload capacity is the dominant factor in improving the test time, not the confidence limit. Table 1 shows the exact test times for each BER threshold and confidence limit.

 

Source: OmniBER product note.

FEC codes in optical communications are based on a class of codes known as Reed-Solomon codes.

A Reed-Solomon code is specified as RS(n, k), which means that the encoder takes k data bytes and adds n − k parity bytes to make an n-byte codeword. A Reed-Solomon decoder can correct up to t byte errors in the codeword, where 2t = n − k.

 

ITU-T Recommendation G.975 proposes a Reed-Solomon (255, 239) code. In this case 16 extra bytes are appended to 239 information-bearing bytes. The bit rate increase is about 7% [(255 − 239)/239 = 0.067], the code can correct up to 8 byte errors [(255 − 239)/2 = 8], and the coding gain can be demonstrated to be about 6 dB.

The same Reed-Solomon coding (RS(255,239)) is recommended in ITU-T G.709. The coding overhead is again about 7% for a 6 dB coding gain. Both G.975 and G.709 improve the efficiency of the Reed-Solomon code by interleaving data from different codewords. The interleaving technique carries an advantage for burst errors, because the errors can be shared across many different codewords. In the interleaving approach lies the main difference between G.709 and G.975: the G.709 interleave approach is fully standardized, while the G.975 one is not.

The actual G.975 data overhead also includes one bit of framing overhead, therefore the bit rate expansion is [(255 − 238)/238 = 0.071]. In G.709 the frame overhead is higher than in G.975, hence an even higher bit rate expansion. One byte error occurs when 1 bit in a byte is wrong or when all the bits in a byte are wrong. Example: RS(255,239) can correct 8 byte errors. In the worst case, 8 bit errors may occur, each in a separate byte, so that the decoder corrects 8 bit errors. In the best case, 8 complete byte errors occur so that the decoder corrects 8 × 8 bit errors.

There are other, more powerful and complex RS variants (for example, concatenating two RS codes) capable of a coding gain 2 or 3 dB higher than the ITU-T FEC codes, but at the expense of an increased bit rate (sometimes as much as 25%).

For the OTN frame, the calculation of RS(n, k) is as follows:

*OPU1 payload rate = 2.488 Gbps (OC-48/STM-16)

 

*Add the OPU1 and ODU1 overhead (16 bytes per row):

 

3808/16 = 238, (3808+16)/16 = 239

ODU1 rate: 2.488 × 239/238** ≈ 2.499 Gbps

*Add FEC

OTU1 rate: ODU1 × 255/239 = 2.488 × 239/238 × 255/239

= 2.488 × 255/238 ≈ 2.667 Gbps

 

NOTE: 4080/16 = 255

**The multiplicative factor is just simple math: e.g. for ODU1/OPU1 = 3824/3808 = (239 × 16)/(238 × 16).

The value of the multiplicative factor gives the ratio by which the frame size (and hence the bit rate) rises after adding the header/overhead.

As we are using Reed-Solomon (255,239), we are dividing the 4080-byte row into sixteen interleaved codewords (the forward error correction for the OTUk uses 16-byte interleaved codecs based on a Reed-Solomon RS(255,239) code; the RS(255,239) code operates on byte symbols).

Hence 4080/16 = 255. Once you see the interleaving, the arithmetic is straightforward; a small sketch follows.

Transparency here is transmission over network without altering original property of the client signal.

G.709 defines the OPUk which can contain the entire SDH signal. This means that one can transport 4 STM-16 signals in one OTU2 and not modify any of the SDH overhead.

Thus the transport of such client signals in the OTN is bit-transparent (i.e. the integrity of the whole client signal is maintained).

OTN is also timing transparent. The asynchronous mapping mode transfers the input timing (asynchronous mapping client) to the far end (asynchronous de-mapping client).

OTN is also delay transparent. For example if 4 STM-16 signals are mapped into ODU1’s and then multiplexed into an ODU2, their timing relationship is preserved until they are de-mapped back to ODU1’s.

Tandem Connection Monitoring (TCM)

A tandem system is also known as a cascaded system.

SDH monitoring is divided into section and path monitoring. A problem arises in a “Carrier’s Carrier” situation, where it is required to monitor a segment of a path that passes through another carrier’s network.

 

Tandem Connection Monitoring

Here Operator A needs Operator B to carry his signal, but he also needs a way of monitoring the signal as it passes through Operator B’s network. This is what a “Tandem Connection” is. It is a layer between line monitoring and path monitoring. SDH was modified to allow a single tandem connection; ITU-T Rec. G.709 allows 6.

TCM1 is used by the User to monitor the Quality of Service (QoS) that they see. TCM2 is used by the first operator to monitor their end-to-end QoS. TCM3 is used by the various domains for Intra domain monitoring. Then TCM4 is used for protection monitoring by Operator B.

There is no standard on which TCM is used by whom. The operators have to have an agreement, so that they do not conflict.

TCM’s also support monitoring of ODUk connections for one or more of the following network applications (refer to ITU-T Rec. G.805 and ITU-T Rec. G.872):

–          optical UNI to UNI tandem connection monitoring ; monitoring the ODUk connection through the public transport network (from public network ingress network termination to egress network termination)

–          optical NNI to NNI tandem connection monitoring; monitoring the ODUk connection through the network of a network operator (from operator network ingress network termination to egress network termination)

–          sub-layer monitoring for linear 1+1, 1:1 and 1:n optical channel sub-network connection protection switching, to determine the signal fail and signal degrade conditions

–          sub-layer monitoring for optical channel shared protection ring (SPRING) protection switching, to determine the signal fail and signal degrade conditions

–          Monitoring an optical channel tandem connection for the purpose of detecting a signal fail or signal degrade condition in a switched optical channel connection, to initiate automatic restoration of the connection during fault and error conditions in the network

–          Monitoring an optical channel tandem connection for, e.g., fault localization or verification of delivered quality of service

A TCM field is assigned to a monitored connection. The number of monitored connections along an ODUk trail may vary between 0 and 6. Monitored connections can be nested, overlapping and/or cascaded.

 

ODUk monitored connections

Monitored connections A1-A2/B1-B2/C1-C2 and A1-A2/B3-B4 are nested, while monitored connections B1-B2/B3-B4 are cascaded.

Overlapping monitored connections are also supported.

 

Overlapping ODUk monitored connections

Channel Coding-A walkthrough

This article is just for revising Channel Coding concepts.

Channel coding is the process that transforms binary data bits into signal elements that can cross the transmission medium. In the simplest case, in a metallic wire a binary 0 is represented by a lower voltage, and a binary 1 by a higher voltage. However, before selecting a coding scheme it is necessary to identify some of the strengths and weaknesses of line codes:

  • High-frequency components are not desirable because they require more channel bandwidth, suffer more attenuation, and generate crosstalk in electrical links.
  • Direct current (dc) components should be avoided because they require physical coupling of transmission elements. Since the earth/ground potential usually varies between remote communication ends, dc provokes unwanted earth-return loops.
  • The use of alternating current (ac) signals permits a desirable physical isolation using condensers and transformers.
  • Timing control permits the receiver to correctly identify each bit in the transmitted message. In synchronous transmission, the timing is referenced to the transmitter clock, which can be sent as a separate clock signal, or embedded into the line code. If the second option is used, then the receiver can extract its clock from the incoming data stream, thereby avoiding the installation of an additional line.

 

Figure 1.1: Line encoding technologies. AMI and HDB3 are usual in electrical signals, while CMI is often used in optical signals.

In order to meet these requirements, line coding is needed before the signal is transmitted, along with the corresponding decoding process at the receiving end. There are a number of different line codes that apply to digital transmission; the most widely used ones are alternate mark inversion (AMI), high-density bipolar three zeros (HDB3), and coded mark inverted (CMI).

 Nonreturn to zero 

Nonreturn to zero (NRZ) is a simple method consisting of assigning the bit “1” to the positive value of the signal amplitude (voltage), and the bit “0” to the negative value (see Figure 1.1). There are two serious disadvantages to this:

No timing information is included in the signal, which means that synchronism can easily be lost if, for instance, a long sequence of zeros is being received.

The spectrum of the signal includes a dc component.

Alternate mark inversion

Alternate mark inversion (AMI) is a transmission code, also known as pseudo-ternary, in which a “0” bit is transmitted as a null voltage and the “1” bits are represented alternately as positive and negative voltage. The digital signal coded in AMI is characterized as follows (see Figure 1.1):

            • The dc component of its spectrum is null.
            • It does not solve the problem of loss of synchronization with long sequences of zeros.

Bit eight-zero suppression

Bit eight-zero suppression (B8ZS) is a line code in which bipolar violations are deliberately inserted if the user data contains a string of eight or more consecutive zeros. The objective is to ensure a sufficient number of transitions to maintain synchronization when the user data stream contains a large number of consecutive zeros (see Figure 1.1 and Figure 1.2).

The coding has the following characteristics:

  • The timing information is preserved by embedding it in the line signal, even when long sequences of zeros are transmitted, which allows the clock to be recovered properly on reception.
  • The dc component of a signal that is coded in B8ZS is null.

 

Figure 1.2     B8ZS and HDB3 coding. Bipolar violations are V+ (a positive level) and V− (a negative level).

High-density bipolar three zeroes

High-density bipolar three zeroes (HDB3) is similar to B8ZS, but limits the maximum number of transmitted consecutive zeros to three (see Figure 1.5). The basic idea consists of replacing a series of four bits equal to “0” with the code word “000V” or “B00V,” where “V” is a pulse that violates the AMI law of alternate polarity, and “B” is a balancing pulse.

  • “B00V” is used when, up to the previous pulse, the coded signal presents a dc component that is not null (the number of positive pulses is not compensated by the number of negative pulses).
  • “000V” is used under the same conditions as above, when, up to the previous pulse, the dc component is null (see Figure 1.6).
  • The pulse “B” (for balancing), which respects the AMI alternation rule and has positive or negative polarity, ensures that two consecutive “V” pulses will have different polarity.

Coded mark inverted

The coded mark inverted (CMI) code, also based on AMI, is used instead of HDB3 at high transmission rates, because of the greater simplicity of CMI coding and decoding circuits compared to HDB3 at these rates. In this case, a “1” is transmitted according to the AMI rule of alternate polarity, and a “0” with a negative level of voltage during the first half of the pulse period and a positive level in the second half. The CMI code has the following characteristics (see Figure 1.1):

  • The spectrum of a CMI signal cancels out the components at very low frequencies.
  • It allows for the clock to be recovered properly, like the HDB3 code.
  • The bandwidth is greater than that of the spectrum of the same signal coded in AMI.

Rejuvenating PCM: Pulse Code Modulation

This article covers the very basics of PCM (Pulse Code Modulation), i.e. the foundation of telecom networks.

The pulse code modulation (PCM) technology (see Figure 1.1) was patented and developed in France in 1938, but could not be used because suitable technology was not available until World War II. This came about with the arrival of digital systems in the 1960s, when improving the performance of communications networks became a real possibility. However, this technology was not completely adopted until the mid-1970s, due to the large amount of analog systems already in place and the high cost of digital systems, as semiconductors were very expensive. PCM’s initial goal was that of converting an analog voice telephone channel into a digital one based on the sampling theorem.

 

The sampling theorem states that for digitalization without information loss, the sampling frequency (fs) should be at least twice the maximum frequency component (fmax) of the analog information:

fs ≥ 2 · fmax

The frequency 2·fmax is called the Nyquist sampling rate. The sampling theorem is considered to have been articulated by Nyquist in 1928, and mathematically proven by Shannon in 1949. Some books use the term Nyquist sampling theorem, and others use Shannon sampling theorem. They are in fact the same theorem.

PCM involves three phases: sampling, encoding, and quantization:

In sampling, values are taken from the analog signal every 1/fs seconds (the sampling period).

 

Quantization assigns these samples a value by approximation, and in accordance with a quantization curve (i.e., A-law of ITU-T).

Encoding provides the binary value of each quantified sample.

 

If SDH is based on node and signal synchronization, why do fluctuations occur? This is a very common question for an optics beginner.

The answer lies in the practical limitations of synchronization. SDH networks use high-quality clocks feeding network elements. However, we must consider the following:

  • A number of SDH islands use their own reference clocks, which may be nominally identical, but never exactly the same.
  • Cross services carried by two or more operators always generate offset and clock fluctuations whenever a common reference clock is not used.
  • Inside an SDH network, different types of breakdown may occur and cause a temporary loss of synchronization. When a node switches over to a secondary clock reference, it may be different from the original, and it could even be the internal clock of the node.
  • Jitter and wander effects

SDH/SONET: Maintenance and Performance Events

We know SDH/SONET is an older technology now, but here is a glimpse, as a revision of the basic fault-management process:

SDH/SONET MAINTENANCE

SDH and SONET transmission systems are robust and reliable; however, they are vulnerable to several effects that may cause malfunction. These effects can be classified as follows:

  • Natural causes: These include thermal noise, always present in regeneration systems; solar radiation; humidity and Rayleigh fading in radio systems; hardware aging; degraded lasers; degradation of electric connections; and electrostatic discharge.
  • Network design pitfalls: Bit errors due to bad synchronization in SDH. Timing loops may collapse a transmission network partially, or even completely.
  • Human intervention: This includes fiber cuts, electrostatic discharges, power failure, and topology modifications.

 

Anomalies and defects management. (In regular characters for SDH; in italic for SONET.)

All these may produce changes in performance, and eventually collapse transmission services.

SDH/SONET Events

SDH/SONET events are classified as anomalies, defects, damage, failures, and alarms depending on how they affect the service:

  • Anomaly: This is the smallest disagreement that can be observed between measured and expected characteristics. It could for instance be a bit error. If a single anomaly occurs, the service will not be interrupted. Anomalies are used to monitor performance and detect defects.

  • Defect: A defect level is reached when the density of anomalies is high enough to interrupt a function. Defects are used as input for performance monitoring, to control consequent actions, and to determine fault causes.

  • Damage or fault: This is produced when a function cannot finish a requested action. This situation does not comprise incapacities caused by preventive maintenance.
  • Failure: Here, the fault cause has persisted long enough so that the ability of an item to perform a required function may be terminated. Protection mechanisms can now be activated.
  • Alarm: This is a human-observable indication that draws attention to a failure (detected fault), usually giving an indication of the depth of the damage. For example, a light emitting diode (LED), a siren, or an e-mail.
  • Indication: Here events are notified upstream to the peer layer for performance monitoring, and eventually to request an action or a human intervention that can fix the situation.

Errors reflect anomalies, and alarms show defects. Terminology here is often used in a confusing way, in the sense that people may talk about errors but actually refer to anomalies, or use the word, “alarm” to refer to a defect.

 

OAM management. Signals are sent downstream and upstream when events are detected at the LP edge (1, 2); HP edge (3, 4); MS edge (5, 6); and RS edge (7, 8).

In order to support single-ended operation, the defect status and the number of detected bit errors are sent back to the far-end termination by means of indications such as RDI, REI, or RFI.

 Monitoring Events

SDH frames contain a lot of overhead information to monitor and manage events. When events are detected, overhead channels are used to notify peer layers to run network protection procedures or evaluate performance. Messages are also sent to higher layers to indicate the local detection of a service-affecting fault to the far-end terminations.

Defects trigger a sequence of upstream messages using the G1 and V5 bytes. Downstream, AIS signals are sent to indicate service unavailability. When defects are detected, upstream indications are sent to register and troubleshoot causes.

 Event Tables

 

PERFORMANCE MONITORING

SDH has performance monitoring capabilities based on bit error monitoring. A bit parity is calculated for all bits of the previous frame, and the result is sent as overhead. The far-end element repeats the calculation and compares it with the received overhead. If the results are equal, there is considered to be no bit error; otherwise, a bit error indication is sent to the peer end.

A defect is understood as any serious or persistent event that holds up the transmission service. SDH defect processing reports and locates failures in either the complete end-to-end circuit (HP-RDI, LP-RDI) or on a specific multiplex section between adjacent SDH nodes (MS-RDI).

Alarm indication signal

An alarm indication signal (AIS) is activated under standardized criteria and sent downstream in a path in the client layer to the next NE to inform it about the event. The AIS will finally arrive at the NE at which that path terminates, where the client layer interfaces with the SDH network.

As an answer to a received AIS, a remote defect indication is sent backwards. An RDI is indicated in a specific byte, while an AIS is a sequence of “1s” in the payload space. The permanent sequence of “1s” tells the receiver that a defect affects the service, and no information can be provided.

 

Depending on which service is affected, the AIS signal adopts several forms:-

  • MS-AIS: All bits except for the RSOH are set to the binary value “1.”
  • AU-AIS: All bits of the administrative unit are set to “1,” but the RSOH and MSOH maintain their codification.
  • TU-AIS: All bits in the tributary unit are set to “1,” but the unaffected tributaries and the RSOH and MSOH maintain their codification.
  • PDH-AIS: All the bits in the tributary are “1.”

Enhanced remote defect indication 

Enhanced remote defect indication (E-RDI) provides the SDH network with additional information about the defect cause by means of differentiating:

  • Server defects: like AIS and LOP;
  • Connectivity defects: like TIM and UNEQ;
  • Payload defects: like PLM.

Enhanced RDI information is codified in G1 (bits 5-7) or in K4 (bits 5-7), depending on the path.

Many times we hear that we should implement unidirectional or bidirectional APS in the network. Some of the advantages of each are:

Unidirectional and bidirectional protection switching

Possible advantages of unidirectional protection switching include:

  • Unidirectional protection switching is a simple scheme to implement and does not require a protocol.
  • Unidirectional protection switching can be faster than bidirectional protection switching because it does not require a protocol.
  • Under multiple failure conditions there is a greater chance of restoring traffic by protection switching if unidirectional protection switching is used, than if bidirectional protection switching is used.

Possible advantages of bidirectional protection switching when uniform routing is used include:

  • With bidirectional protection switching operation, the same equipment is used for both directions of transmission after a failure. The number of breaks due to single failures will be less than if each direction of the path were delivered over different equipment.
  • With bidirectional protection switching, if there is a fault in one path of the network, transmission of both paths between the affected nodes is switched to the alternative direction around the network. No traffic is then transmitted over the faulty section of the network, so it can be repaired without further protection switching.
  • Bidirectional protection switching is easier to manage because both directions of transmission use the same equipment along the full length of the trail.
  • Bidirectional protection switching maintains equal delays for both directions of transmission. This may be important where there is a significant imbalance in the length of the trails, e.g. transoceanic links where one trail is via a satellite link and the other via a cable link.
  • Bidirectional protection switching also has the ability to carry extra traffic on the protection path.

The above is extracted from ITU-T G.841.

CDC allows operators to future-proof their network so they are able to optimize, scale and flexibly meet any future bandwidth demands:

  • Directionless: for the ability to route a wavelength across any viable path in the network
  • Colorless: for the ability to receive any wavelength on any port.
  • Contentionless: eliminates wavelength blocking, allowing the add/drop of a duplicate wavelength onto a single mux/demux
  • Flexible grid: for the ability to future-proof the network for any higher capacity channel that needs >50GHz spectrum

The CDC solution allows the operator to handle unpredictable A-Z services or temporary bandwidth demands over the full life of the network. Reconfigurations such as wavelength defragmentation and route optimization are also made possible to scale the network for support of more services. CDC also supports the transport of SuperChannels when these become available.

CDC can operate with photonic control plane for increased automation of operations as well as to support automated photonic restoration and other future capabilities.

Gridless networks are the evolution of photonic line systems to improve spectral efficiency and flexibility, i.e.:

Channel grids are no longer required to be centered on the fixed ITU wavelengths/frequencies.
Why Do We Need Gridless?
  • Improved spectral efficiency with existing 40G/100G technology.
  • Define a super channel that has multiple sub-channels within it, in order to fit the same channels in a smaller region of spectrum.
  • Support higher line-rate transponders.
  • In order to get the same reach/performance from 400 Gb/s and 1 Tb/s transponders we have no choice but to increase the spectral width of these signals well beyond 50GHz or even 100GHz spacing.

1. Overview

Availability is a probabilistic measure of the length of time a system or network is functioning.

  • Generally calculated as a percentage, e.g. 99.999% (referred to as “five nines” uptime) is carrier-grade availability.
  • A network has high availability when downtime / repair times are minimal.
  • For example, high availability networks are down for minutes, whereas low availability networks are down for hours.
  • Unavailability is the percentage of time a system is not functioning (downtime) and is generally expressed in minutes per year.
  • Unavailability = (1 – Availability) × 365 × 24 × 60
  • Unavailability (U) ≈ MTTR/MTBF (see the sketch after this list)
  • The unavailability of a 99.999% available system is 5.3 minutes per year.
  • Availability is generally expressed using either failure rates or mean time between failures (MTBF).
  • Availability calculations always assume a bi-directional system.

2. Circuit vs. Nodal Availability

Circuit and nodal availability measure different quantities. To help explain this clearly, unavailability (Unavailability = 1 – Availability) will be used in this section.

  • Circuit unavailability is a measure of the average downtime of a traffic demand / service.
    • A circuit is unavailable only if traffic-affecting components that help transport the demand / service have failed.
    • Circuit unavailability is calculated by considering the unavailabilities of the components which are traffic-affecting and by taking into consideration those components that are hardware protected.
    • For example, the failure of both 10G line cards on an NE can cause a traffic outage.
  • Nodal unavailability is a measure of the average downtime of a node.
    • Each time there is a failure in a node, regardless of whether it is traffic affecting or not, an engineer is required to visit the node to fix the failure.
    • Therefore, although nodal unavailability is based on calculated failure rates, it is still a direct measure of operational expenditure.
    • Nodal unavailability is calculated by adding all components of a network element regardless of hardware protection, i.e. in series.
    • For example, failure of a protected switch card is non-traffic-affecting but still requires a site visit for replacement.

3. Terms & Definitions

Failure rate

  •  Failure rate is usually measured in Failures in Time (FIT), where one FIT equals a single failure in one billion (10^9) hours of operation.
  •  FITs are calculated according to industry standard (Telcordia SR 332).

MTBF (Mean Time Between Failures)

  •  Average time between failures for a given component.
  •  Measured either in hours or years.
  • MTBF is inversely proportional to FIT; a quick conversion sketch follows.

MTTR (Mean Time To Repair)

  •  Average time to repair a given failure.
  •  Measured in hours.
  •  Availability is always quoted in terms of number of nines
  •  For example, carrier grade is 5 9’s, which is 99.999%
  • Availability is better understood in terms of unavailability in minutes per year
  • Therefore for an availability of 99.999%, the unavailability or downtime is 5.3 minutes per year
Data planes are the sets of network elements which receive, send, and switch the network data.
 As per the Generalized Multi-Protocol Label Switching (GMPLS) standards, the following nomenclature is used for the various technological data planes:
  • Packet Switching Capable (PSC) layer
  • Layer-2 Switching Capable (L2SC) layer
  • Time Division Multiplexing (TDM) layer
  • Lambda Switching Capable (LSC) layer
  • Fiber-Switch Capable (FSC)

And as per layered-architecture concepts, the above technologies are correlated as:

  • Layer 3 for PSC (IP Routing)
  • Layer 2.5 for PSC (MPLS)
  • Layer 2 for L2SC (often Ethernet)
  • Layer 1.5 for TDM (often SONET/SDH)
  • Layer 1 for LSC (often WDM switch elements)
  • Layer 0 for FSC (often port switching devices based on optical or mechanical technologies)

**********************************************************************************************

In a “N” Layered Network Architecture, the services are grouped in a hierarchy of layers

– Layer N uses services of layer N-1

– Layer N provides services to layer N+1

A communication layer is completely defined by

(a) A peer protocol which specifies how entities at layer-N communicate.

(b) The service interface which specifies how adjacent layers at the same system communicate

 When talking about two adjacent layers,

(a) the higher layer is a service user, and

(b) the lower layer is a service provider

– The communication between entities at the same layer is logical

– The physical flow of data is vertical.

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0→1 and 1→0 transitions for the receiver to derive a clock with which to receive the digital information.

Actually, every add/drop multiplexer samples incoming bits according to a particular clock frequency. This clock frequency is recovered by using transitions between 1s and 0s in the incoming OC-N signal. Suppose the incoming bit stream contains long strings of all 1s or all 0s; then clock recovery would be difficult. So, to enable clock recovery at the receiver, such long strings of all 1s or 0s are avoided. This is achieved by a process called scrambling.

The scrambler is designed as shown in the figure given below:

It is a frame-synchronous scrambler of sequence length 127. The generating polynomial is 1 + x^6 + x^7. The scrambler shall be reset to ‘1111111’ on the most significant bit of the byte following the Z0 byte in the Nth STS-1. That bit and all subsequent bits to be scrambled shall be added, modulo 2, to the output from the x^7 position of the scrambler, as shown in the figure above. Example:

The first 127 bits are:

111111100000010000011000010100 011110010001011001110101001111 010000011100010010011011010110 110111101100011010010111011100 0101010

The same operation is used for descrambling. For example, the input data is 000000000001111111111.

        00000000001111111111  <-- input data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (scramble operation)
        11111110001110111110  <-- scrambled data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (descramble operation)
        00000000001111111111  <-- original data

The framing bytes A1 and A2, the Section Trace byte J0, and the Section Growth byte Z0 are not scrambled, to avoid the possibility that bytes in the frame might duplicate A1/A2 and cause an error in framing. The receiver searches for the A1/A2 bit pattern in multiple consecutive frames, allowing it to gain bit and byte synchronization. Once bit synchronization is gained, everything from there on is done on byte boundaries – SONET/SDH is byte synchronous, not bit synchronous.

An identical operation called descrambling is done at the receiver to retrieve the bits.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial indicated above.  The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo-random bit sequence. Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled. That is why the framing bytes (A1, A2) are not scrambled.

Reference: http://www.electrosofts.com/sonet/scrambling.html

Frequency justification and pointers: +/-ve stuffing mechanism in SONET/SDH

When the input data has a rate lower than the output data rate of a multiplexer, positive stuffing will occur. The input is stored in a buffer at a rate which is controlled by the WRITE clock. Since the output (READ) clock rate is higher than the WRITE clock rate, the buffer content will be depleted or emptied. To avoid this condition, the buffer fill is constantly monitored and compared to a threshold. If the fill is below the threshold, the READ clock is inhibited and a stuffed bit is inserted into the output stream. Meanwhile, the input data stream is still filling the buffer. The stuffed-bit location information must be transmitted to the receiver so that the receiver can remove the stuffed bit.

When the input data has a rate higher than the output data rate of a multiplexer, negative stuffing will occur. If negative stuffing occurs, the extra data can be transmitted through another channel. The receiver must know how to retrieve that data.

Positive Stuffing

If the frame rate of the STS SPE is too slow with respect to the transport frame rate, then the alignment of the envelope should periodically slip back, i.e. the pointer should be incremented by one. This operation is indicated by inverting the I bits of the 10-bit pointer. The byte right after the H3 byte is the stuff byte and should be ignored. The following frames then contain the new pointer. For example, the 10-bit value in the H1 and H2 pointer bytes is ‘0010010011’ for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       1000111001  <-- the I bits are inverted, positive stuffing
                                    is required.
          N+2       0010010100  <-- the pointer is increased by 1
Negative Stuffing

If the frame rate of the STS SPE is too fast with respect to the transport frame rate, then the alignment of the envelope should periodically advance, i.e. the pointer should be decremented by one. This operation is indicated by inverting the D bits of the 10-bit pointer. The H3 byte then contains actual data. The following frames contain the new pointer. For example, the 10-bit value in the H1 and H2 pointer bytes is ‘0010010011’ for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       0111000110  <-- the D bits are inverted, negative stuffing
                                    is required.
          N+2       0010010010  <-- the pointer is decreased by 1

Network Operations Center

A network operations center (NOC, pronounced like the word “knock”), also known as a “network management center”, is one or more locations from which network monitoring and control, or network management, is exercised over a computer, telecommunication or satellite network.

NOCs are implemented by business organizations, public utilities, universities, and government agencies that oversee complex networking environments that require high availability. NOC personnel are responsible for monitoring one or many networks for certain conditions that may require special attention to avoid degraded service. Organizations may operate more than one NOC, either to manage different networks or to provide geographic redundancy in the event of one site becoming unavailable.

In addition to monitoring internal and external networks of related infrastructure, NOCs can monitor social networks to get a head-start on disruptive events.

NOCs analyze problems, perform troubleshooting, communicate with site technicians and other NOCs, and track problems through resolution. When necessary, NOCs escalate problems to the appropriate stakeholders. For severe conditions that are impossible to anticipate, such as a power failure or a cut optical fiber cable, NOCs have procedures in place to immediately contact technicians to remedy the problem.

Primary responsibilities of NOC personnel may include:

  • Network monitoring
  • Incident response
  • Communications management
  • Reporting

NOCs often escalate issues in a hierarchic manner, so if an issue is not resolved in a specific time frame, the next level is informed to speed up problem remediation. NOCs sometimes have multiple tiers of personnel, which define how experienced and/or skilled a NOC technician is. A newly hired NOC technician might be considered a “tier 1”, whereas a technician that has several years of experience may be considered “tier 3” or “tier 4”. As such, some problems are escalated within a NOC before a site technician or other network engineer is contacted.

NOC personnel may perform extra duties; a network with equipment in public areas (such as a mobile network Base Transceiver Station) may be required to have a telephone number attached to the equipment for emergencies; as the NOC may be the only continuously staffed part of the business, these calls will often be answered there.

A Network Operations Center rests at the heart of every telecom network or major data center, a place to keep an eye on everything.

Some of these NOCs are really “dressed to impress”, while others have taken a more mundane approach.

So, for inspiration, here is a set of pictures of different NOCs from telecom companies and data centers (and one content delivery network) that we here at Pingdom have collected from around the internet.

Dressed to impress

These NOCs are obviously designed to impress visitors on top of being useful. A NOC also brings together some of the network’s best technical experts, since it acts as the heart that keeps network operations running.

Here is a glimpse of some of the world’s best NOCs!

Airtel Network Experience Center, Gurgaon, India

 

Reliance Communications’ NOC in India

 

AT&T’s Global NOC in Bedminster, New Jersey

 

Lucent’s Network Reliability Center in Aurora, Colorado (1998-99)

 

Conexim’s NOC in Australia

 

Akamai’s NOC in Cambridge, Massachusetts

 

Slightly more discreet

While still impressive on a smaller scale, these NOCs have taken a slightly more conventional approach. We noticed a divide here. Data centers tend to have more scaled-back NOCs while telecom companies often fall in the “dressed to impress” category, perhaps partly due to having more infrastructure to monitor than the average data center (and shareholders).

Easy CGI’s NOC in Pearl River, New York

 

Ensynch’s NOC in Tempe, Arizona

 

TWAREN’s NOC (Taiwan Advanced Research & Education Network)

 

The Planet’s NOC in Houston, Texas

 

KDL’s NOC in Evansville, Indiana

 

And the not-flashy-in-the-least award goes to…

Some of the smaller NOCs could be seen as…

 

Image sources:

AT&T NOC from AT&T, Reliance NOC from Suraj, Lucent NOC from Evans Consoles, Conexim NOC from Conexim, Akamai NOC from Akamai via Bert Boerland’s blog, Easy CGI NOC from Easy CGI, Ensynch NOC from Ensynch, TWAREN NOC from TWAREN, The Planet NOC from The Planet’s blog, Rackspace NOC from Naveenium, KDL NOC from Kentucky Data Link.

http://royal.pingdom.com/2008/05/21/gallery-of-network-operations-centers/

Dispersion compensation-An Introduction

Fiber dispersion is one of the most critical parameters that need to be considered for high speed transmission design. Dispersion typically varies with wavelength and accumulates along a fiber length. Therefore it is difficult to precisely compensate for all the propagating channels with fixed optical compensation modules at the same time. When preferred, dispersion at each channel can be fully compensated electronically in the DSP of the receiver.

Dispersion compensation basically means eliminating the accumulated dispersion originating from the length of the fiber. There might be a misconception that having a fiber with zero dispersion would avoid such a drama. However, this is not true, as verified from older experience with dispersion-shifted fiber (DSF). A DSF is designed with zero dispersion between 1525 nm and 1575 nm. This would work perfectly for a single-channel transmission inside this window. However, in DWDM transmission it gives rise to undesired levels of four-wave mixing, which renders DWDM transmission practically impossible. Therefore, the goal is not to reduce the dispersion to zero but to avoid excessive temporal broadening of the pulses, so that the residual dispersion is still within the tolerable limits of the system.

Several technologies exist for chromatic dispersion compensation, such as dispersion compensating fiber/ unit (DCF/U), dispersion managed cables, higher-order mode DCF, fiber Bragg gratings and optical phase conjugation. The widely used compensation is via the dispersion compensating fiber. In general, chromatic dispersion can be compensated in lump or according to dispersion map/management. In lump compensation, the accumulated dispersion is compensated in bulk using in-line dispersion compensation while with dispersion management the local dispersion evolution along the link is compensated utilizing the DCFs as shown in the figure.

Dispersion compensating fiber, as the name implies, is actually a fiber with a large negative dispersion parameter that can be inserted into the link at regular intervals. The compensating fiber typically has a dispersion of −100 ps/nm/km in the 1550 nm region, and thus only a short length of this fiber is required to compensate the accumulated dispersion arising from hundreds of km of transmission fiber. However, inserting DCF does not add transmission distance: the extra fiber is placed as a bulk at one end of the link. It nevertheless adds attenuation, and additional amplification may be needed to compensate and achieve the desired reach. A typical DCF attenuation is about 0.5 dB/km.

As mentioned before, in a DWDM system it is quite difficult to balance the dispersion characteristic over a range of wavelengths, i.e. for all the co-propagating channels. When the residual dispersion for the center channel of the DWDM spectrum has been compensated to zero, the channels at the extremes will still have significant finite dispersion. The dispersion compensation is therefore optimized in such a way that the finite dispersion in the neighboring channels is either less than the tolerable range or compensated electronically.

Original link: http://blog.cubeoptics.com/index.php/2014/06/dispersion-compensation

A timing loop occurs when an NE traces to a timing signal whose source is ultimately the NE itself. A timing loop degrades the timing signals of the related network elements (NEs) and should be avoided as far as possible in the synchronization network. In the PDH environment, timing loops are avoided by synchronization network planning, such as the method of assigning a stratum level to each NE. But in the SDH environment, where complicated architectures such as rings are involved, timing loops cannot be avoided using engineering planning alone. So Synchronization Status Messages (SSMs) are used to avoid timing loops and to provide a quality level for the traced timing signal.

The Synchronization Status Message (SSM) is defined in the S1 byte of the SOH of the STM-N frame in ITU-T G.707 (refer to Table 1). In PDH, a similar function is defined, for example, in any one of bits 5–8 of the TS0 time slot of even frames of the 2 Mb/s timing signal. A detailed algorithm for SSM has not been defined by ITU-T, but the criteria for SSM can be simply described as follows.

For external timing or line timing, an SDH Network Element (NE) generally selects the digital signal with the highest quality level indication (SSM) as the reference timing signal from the several available digital signals (among signals with the same quality level, it selects the one with the highest preset priority). It sets SSM = DNU on the output port of the SDH interface which is selected as the reference timing source, and the other output timing signals trace the reference signal and carry its SSM.

For pass-through timing, the NE puts the SSM from the west STM-N signal into that of the east STM-N signal, and puts the SSM from the east STM-N signal into that of the west STM-N signal.

Regardless of the timing mode, if the external reference timing signals and the timing recovered from line signals are both unqualified, the NE clock enters holdover.

S1 bits b5–b8    SDH synchronization quality level description
0000             Quality unknown (existing sync. network)
0010             G.811 (QL-PRC)
0100             G.812 transit (QL-SSU-T)
1000             G.812 local (QL-SSU-L)
1011             Synchronous Equipment Timing Source (QL-SEC)
1111             Do not use for synchronization (QL-DNU/DUS)
others           Reserved

The above example is used to explain the SSM function. Figure 1(a) is the normal situation; Figure 1(b) is the situation after a fiber cut without using SSM, where a timing loop emerges between NE2 and NE3; in Figure 1(c) the situation is the same as in Figure 1(b) but with SSM in use, and the timing loop is avoided.

Counter-attack for Timing Loop

To avoid timing loops, the SSM function of the S1 byte should be employed, and the DUS/DNU scheme can be considered as an option.

Here we will discuss what are the advantages of OTN(Optical Transport Network) over SDH/SONET.

The OTN architecture concept was developed by the ITU-T initially a decade ago, to build upon the Synchronous Digital Hierarchy (SDH) and Dense Wavelength-Division Multiplexing (DWDM) experience and provide bit  rate efficiency,  resiliency and  management  at  high capacity.  OTN therefore looks a  lot like Synchronous Optical Networking (SONET) / SDH in structure, with less overhead and more management features.

It is a common misconception that OTN is just SDH with a few insignificant changes. Although the multiplexing structure and terminology look the same, the changes in OTN have a great impact on its use in, for example, a multi-vendor, multi-domain environment. OTN was created to be a carrier technology, which is why emphasis was put on enhancing transparency, reach, scalability and monitoring of signals carried over large distances and through several administrative and vendor domains.

The advantages of OTN compared to SDH are mainly related to the introduction of the following changes:

Transparent Client Signals:

In OTN the Optical Channel Payload Unit-k (OPUk) container is defined to include the entire SONET/SDH and Ethernet signal, including associated overhead bytes, which is why no modification of the overhead is required when transporting through OTN. This allows the end user to view exactly what was transmitted at the far end and decreases the complexity of troubleshooting as transport and client protocols aren’t the same technology.

OTN uses asynchronous mapping and demapping of client signals, which is another reason why OTN is timing transparent.

Better Forward Error Correction:

OTN has increased the number of bytes reserved for Forward Error Correction (FEC), allowing a theoretical improvement of the Signal-to-Noise Ratio (SNR) by 6.2 dB. This improvement can be used to enhance the optical systems in the following areas:

  • Increase the reach of optical systems by increasing span length or increasing the number of spans.
  • Increase the number of channels in the optical systems, as the required power has theoretically been lowered by 6.2 dB, thus also reducing the non-linear effects, which are dependent on the total power in the system.
  • The increased power budget can ease the introduction of transparent optical network elements, which can’t be introduced without a penalty. These elements include Optical Add-Drop Multiplexers (OADMs), Photonic Cross Connects (PXCs), splitters, etc., which are fundamental for the evolution from point-to-point optical networks to meshed ones.
  • The FEC part of OTN has been utilised on the line side of DWDM transponders for at least the last 5 years, allowing a significant increase in reach/capacity.

Better scalability:

The old transport technologies like SONET/SDH were created to carry voice circuits, which is why the granularity was very dense – down to 1.5 Mb/s. OTN is designed to carry a payload of greater bulk, which is why the granularity is coarser and the multiplexing structure less complicated.

Tandem Connection Monitoring:

The introduction of additional (six) Tandem Connection Monitoring (TCM) combined with the decoupling of transport and payload protocols allow a significant improvement in monitoring signals that are transported through several administrative domains, e.g. a meshed network topology where the signals are transported through several other operators before reaching the end users.

In a multi-domain scenario – “a classic carrier’s carrier scenario”, where the originating domain can’t ensure performance or even monitor the signal when it passes to another domain – TCM introduces a performance monitoring layer between line and path monitoring allowing each involved network to be monitored, thus reducing the complexity of troubleshooting as performance data is accessible for each individual part of the route.

Also, a major drawback of SDH is that a lot of capacity during packet transport is wasted on overhead and stuffing, which can also create delays in the transmission, leading to problems for the end application, especially if it is designed for asynchronous, bursty communications behavior. This over-complexity is probably one of the reasons why the evolution of SDH stopped at STM-256 (40 Gbps).

References: OTN and NG-OTN: Overview by GEANT

A lot of my friends ask why we don’t require synchronization in OTN, so I decided to blog on this topic here:

Here we will discuss the timing aspects of optical transport networks as defined by ITU-T SG15 Q13.

At the time the OTN was first developed, network synchronization was carried over SDH. Because of this, a key decision made during the definition of the first generation of the OTN hierarchy was that the OTN must be transparent to the payloads transported within the ODUk and that the OTN layer itself does not need to transport network synchronization. The network synchronization should still be carried within the payload, mainly by SDH/synchronous optical network (SONET) client tributaries. The main concern was then that the synchronization characteristics of the SDH tributaries are preserved when carried across the OTN network.

Figure 1. SDH Timing transparency across the OTN.

However, since SDH networks were widely deployed, an approach where the timing is carried directly by the SDH clients was preferable. The reasoning behind this decision was that a single synchronization layer based on SDH was considered simpler. Such a solution requires that the timing of the SDH clients is carried transparently across the OTN network, and that the phase error and wander generated by the transport through the OTN remain within defined limits (Fig. 1).

The consequences of this choice are that the OTN was defined to be an asynchronous network. The clocks within the OTN equipment are free running and the accuracy of their oscillator has been defined consistent with the accuracy of the client and the amount of offset that can be accommodated by the OTN frame.

In addition, in order to simplify the future development of new mappings, a new container type, the ODUflex, was developed. New clients whose rates are above ODU1 can be mapped synchronously into the ODUflex in a process called the bit-synchronous mapping procedure. The ODUflex is then mapped to a higher-order ODU using GMP.

Here the generic timing capabilities of OTN clocks are supported, similarly to SDH transport. To support the new clients, OTN now defines three mapping methods:

  • Bit-synchronous mapping procedure (BMP): bit-synchronous mapping into the server layer (used for ODUflex and ODU2E)
  • Asynchronous mapping procedure (AMP): asynchronous mapping with dedicated stuff byte positions in the server layer ODU (used for payloads with frequency tolerance of up to ±20 ppm)
  • Generic mapping procedure (GMP): a delta-sigma modulator-based approach, with equal distribution of stuff and data in the transport container and asynchronous mapping into the ODU payload, with ±20 ppm ODU clock and ±100 ppm client accuracy.

All the above mappings support the transport of synchronization.
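
To make the GMP idea a bit more concrete, here is a minimal sketch (not taken from the standard text, purely illustrative) of how a mapper could derive Cm, the number of client data words carried in one server frame, from the client and server rates. The rates and the payload word count below are assumed example values, and the sigma-delta accumulator that decides the exact stuff positions is not shown.

```python
import math

def gmp_cm(client_bit_rate: float, server_payload_bit_rate: float,
           payload_words_per_frame: int) -> int:
    """Number of client words (Cm) to map into one server frame under GMP.

    GMP carries Cm data words and (Pm - Cm) stuff words per server frame,
    spread as evenly as possible (a sigma-delta style accumulator, not
    shown here, decides the exact word positions).
    """
    ratio = client_bit_rate / server_payload_bit_rate   # <= 1 for a fitting client
    return math.floor(ratio * payload_words_per_frame)

# Assumed example numbers only: a ~1.25 Gbit/s client into a ~2.5 Gbit/s
# server payload with 3808 payload words per frame.
cm = gmp_cm(1_244_160_000, 2_488_320_000, 3808)
print(cm)   # ~1904 words of client data per frame, the rest are stuff
```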

In particular, the OTN frame has been defined so that the justification process can accommodate an input signal with a frequency offset of up to ±20 ppm of the nominal frequency, mapped with an internal oscillator with a frequency range of up to ±20 ppm. In addition, the frame had to support the case of ODUk multiplexing, for which both ODUk signal timings may vary within ±20 ppm. As a result, the G.709 frame was defined to accommodate up to ±65 ppm of offset.

There is no very tricky reason behind this: the aim is simply to carry the legacy client timing transparently, and unlike SDH/SONET the OTN frame is not tied to a fixed 125 µs period, since its bit rates are much higher and its clocks are free running rather than tied to a synchronization reference.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

What  ITU-T G.798.1 2003 Edition says on OTN TIMING FUNCTION

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The OTN does not require any synchronization functionality. The OTN – specifically the mapping/demapping/desynchronizing and multiplexing/demultiplexing processes and justification granularity information – is designed to transport synchronous client signals, like synchronous STM-N and synchronous Ethernet signals. When those signals are bit synchronously mapped into the ODUk (using BMP), this ODUk will be traceable to the same clock to which the synchronous client signal is traceable (i.e., PRC, SSU, SEC/EEC and under a signal fail condition of the synchronous client the AIS/LF clock). When those signals are asynchronously mapped into the ODUk (using AMP or GMP), this ODUk will be plesiochronous with a frequency/bit rate tolerance of ±20 ppm.

Non-synchronous constant bit rate client signals can be mapped bit synchronous (using BMP) or asynchronous (using AMP, GMP) into the ODUk. In the former case, the frequency/bit rate tolerance of the ODUk will be the frequency/bit rate tolerance of the client signal, with a maximum of ±45 ppm for k=0, 1, 2, 3, 4 and ±100 ppm for k=2e, flex. In the latter case, the frequency/bit rate tolerance of the ODUk will be ±20 ppm.

Multiplexing of low order ODUs into a high order ODUk uses an asynchronous mapping (either AMP or GMP). The frequency/bit rate tolerance of the high order ODUk signal is ±20 ppm.

Variable rate packet client signals are mapped into the ODUk using the generic framing procedure (GFP-F). The frequency/bit rate tolerance of the ODUk is ±20 ppm for k=0, 1, 2, 3, 4 and ±100 ppm for k=flex.

NOTE – It is possible to use the clock from an EEC or SEC function to generate the ODUk carrying clients mapped with AMP, GMP, or GFP-F or a multiplex of low order ODUs. Such ODUk is then traceable to an EEC, SSU or PRC. At this point in time, such ODUk does not provide support for a Synchronization Status Message (ODUk SSM), and consequently cannot be used as a synchronous-ODUk, i.e., as a synchronous STM-N or synchronous Ethernet replacement signal.

ODUk signals are mapped frame-synchronously into OTUk, thus the frequency/bit rate tolerance of the OTUk signals depends on the frequency/bit rate tolerance of the ODUk signal being carried.

===================================================================================

References:

[1] ITU-T Rec. G.709/Y.1331, “Interfaces for the Optical Transport Network (OTN),” Dec. 2009.

[2] ITU-T Rec. G.8251, “The Control of Jitter and Wander within the Optical Transport Network (OTN),” Nov. 2001.

[3] ITU-T Rec. G.810, “Definitions and Terminology for Synchronization Networks,” 1996.

[4] ITU-T Rec. G.811 “Timing Requirements at the Outputs of Primary Reference Clocks Suitable for Plesiochronous Operation of International Digital Links,” 1988.

[5] ITU-T Rec. G.813, “Timing Characteristics of SDH Equipment Slave Clocks (SEC),” 2003.

[6] IEEE Communications Magazine • September 2010

[7] ITU-T G.798.1 excerpt 7.3

 

What is Q-factor?

Q-factor measurement occupies an intermediate position between the classical optical parameters (power, OSNR, and wavelength) and the digital end-to-end performance parameters based on BER. A Q-factor is measured in the time domain by analyzing the statistics of the pulse shape of the optical signal. A Q-factor is a comprehensive measure of the signal quality of an optical channel, taking into account the effects of noise, filtering, and linear/non-linear distortions on the pulse shape, which is not possible with simple optical parameters alone.

Definition 1:

The Q-factor, a function of the OSNR, provides a qualitative description of the receiver performance. The Q-factor suggests the minimum signal-to-noise ratio (SNR) required to obtain a specific BER for a given signal. OSNR is measured in decibels. The higher the bit rate, the higher the OSNR ratio required. For OC-192 transmissions, the OSNR should be at least 27 to 31 dB compared to 18 to 21 dB for OC-48.

 Definition 2:

The Quality factor is a measure of how noisy a pulse is for diagnostic purposes. The eye pattern oscilloscope will typically generate a report that shows what the Q factor number is. The Q factor is defined as shown in the figure: the difference of the mean values of the two signal levels (level for a “1” bit and level for a “0” bit) divided by the sum of the noise standard deviations at the two signal levels. A larger number in the result means that the pulse is relatively free from noise.
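
As a quick illustration of Definition 2, the sketch below estimates Q directly from sampled “1” and “0” levels taken at the decision instant; the sample values are made up purely for the example.

```python
import statistics

def q_factor(ones: list[float], zeros: list[float]) -> float:
    """Q = (mean_1 - mean_0) / (sigma_1 + sigma_0), per Definition 2."""
    mu1, mu0 = statistics.mean(ones), statistics.mean(zeros)
    s1, s0 = statistics.stdev(ones), statistics.stdev(zeros)
    return (mu1 - mu0) / (s1 + s0)

# Made-up receiver samples (arbitrary units) taken at the decision instant
ones  = [1.02, 0.97, 1.05, 0.99, 1.01, 0.96]
zeros = [0.04, 0.01, -0.02, 0.03, 0.00, 0.02]
print(round(q_factor(ones, zeros), 1))   # larger Q -> cleaner pulse
```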

 Definition 3:

Q is defined as follows: The ratio between the sums of the distance from the decision point within the eye (D) to each edge of the eye, and the sum of the RMS noise on each edge of the eye.

This definition can be derived from the following expression, which in turn comes from ITU-T G.976 (ref. 3):

Q = (µ1 − µ0) / (σ1 + σ0)

where µ1,0 are the mean positions of each rail of the eye, and σ1,0 are the standard deviations (RMS noise) present on each of these rails.

For an illustration of where these values lie within the eye see the following figure:

 

As Q is a ratio, it is reported as a unit-less positive value greater than 1 (Q > 1). A Q of 1 represents complete closure of the received optical eye. To give some idea of the associated raw BER, a Q of 6 corresponds to a raw BER of 10^-9.

Q factor as defined in ITU-T G.976

The Q factor is the signal-to-noise ratio at the decision circuit in voltage or current units, and is typically expressed by:

Q = (µ1 − µ0) / (σ1 + σ0)                (A-1)

where µ1,0 is the mean value of the marks/spaces voltages or currents, and σ1,0 is the corresponding standard deviation.

The mathematical relation to BER, when the threshold is set to the optimum value, is:

BER_opt = (1/2) · erfc(Q/√2) ≈ exp(−Q²/2) / (Q·√(2π))                (A-2)

with:

erfc(x) = (2/√π) ∫_x^∞ exp(−β²) dβ                (A-3)

 

The Q factor can be written in terms of decibels rather than in linear values:

Q(dB) = 20 · log10(Q)                (A-4)
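
A quick numeric check of relations (A-2) and (A-4): the snippet below evaluates BER = ½·erfc(Q/√2) and the dB conversion, and reproduces the often-quoted Q ≈ 6 ↔ BER ≈ 10^-9 correspondence.

```python
import math

def ber_from_q(q: float) -> float:
    """Optimum-threshold BER for a given linear Q, per (A-2)."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_db(q: float) -> float:
    """Linear Q expressed in dB, per (A-4)."""
    return 20 * math.log10(q)

print(ber_from_q(6.0))   # ~1e-9
print(q_db(6.0))         # ~15.6 dB
```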

 

Calculation of Q-Factor from OSNR

The OSNR is the most important parameter that is associated with a given optical signal. It is a measurable (practical) quantity for a given network, and it can be calculated from the given system parameters. The following sections show you how to calculate OSNR. This section discusses the relationship of OSNR to the Q-factor.

The value of Q is related to the OSNR by the following approximate equation:

Q² ≈ OSNR · (B0 / Bc)

In the equation, B0 is the optical bandwidth of the end device (photodetector) and Bc is the electrical bandwidth of the receiver filter.

Therefore, the logarithmic value of Q (in dB) is given by:

Q(dB) ≈ OSNR(dB) + 10 · log10(B0 / Bc)

In other words, Q is somewhat proportional to the OSNR. Generally, noise calculations are performed by optical spectrum analyzers (OSAs) or sampling oscilloscopes, and these measurements are carried out over a particular measuring range Bm. Typically, Bm is approximately 0.1 nm or 12.5 GHz for a given OSA. From the equation giving Q in dB in terms of OSNR, it can be seen that if B0 < Bc, then OSNR (dB) > Q (dB). For practical designs OSNR (dB) > Q (dB) by at least 1–2 dB. Typically, while designing a high-bit-rate system, the margin at the receiver is approximately 2 dB, such that Q is about 2 dB smaller than OSNR (dB).
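
As a rough illustration of the relation above, the snippet below converts an OSNR figure into an approximate Q in dB; the bandwidth values are assumed for the example only, not recommendations.

```python
import math

def q_db_from_osnr(osnr_db: float, b_o_hz: float, b_c_hz: float) -> float:
    """Approximate Q (dB) from OSNR (dB) using Q^2 ~ OSNR * (B0/Bc).

    This is only the coarse approximation quoted above; real designs add
    implementation margin on top of it.
    """
    return osnr_db + 10 * math.log10(b_o_hz / b_c_hz)

# Assumed example values: 20 dB OSNR, B0 = 10 GHz, Bc = 12.5 GHz,
# so Q comes out roughly 1 dB below the OSNR, as noted in the text.
print(round(q_db_from_osnr(20.0, 10e9, 12.5e9), 2))
```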

The Q-factor is, in fact, a metric to identify the attenuation of the received signal and detect a potential LOS, and it is an estimate of the Optical Signal-to-Noise Ratio (OSNR) at the optical receiver. As attenuation of the received signal increases, the dBQ value drops, and vice versa. Hence a drop in the dBQ value can mean that there is an increase in the pre-FEC BER, and a possible LOS could occur if the problem is not corrected in time.

Reference:

ITU-T G.976

What is an eye diagram?

An eye diagram is an overlay of all possible received bit sequences: the individual bit traces (figure omitted) are superimposed on top of one another, and the result (figure omitted) is the composite eye pattern, which in this case is considered to be an “open” eye.

Note: it should really be an overlay of “infinitely long” bit sequences to get a true eye, since this catches all potential inter-symbol interference.

 Eye diagrams can be used to evaluate distortion in the received signal, e.g. a “closed” eye

Note: the wider and more open the eye, the more error-free the link.
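
For readers who want to reproduce a basic eye plot, here is a minimal sketch assuming numpy and matplotlib are available: it generates a noisy, band-limited NRZ waveform, slices it into two-unit-interval windows, and overlays the slices.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples_per_ui = 32
bits = rng.integers(0, 2, 500)

# Build an NRZ waveform, low-pass it crudely to create finite rise times,
# and add Gaussian noise to emulate a received signal.
wave = np.repeat(bits, samples_per_ui).astype(float)
kernel = np.ones(10) / 10
wave = np.convolve(wave, kernel, mode="same") + rng.normal(0, 0.05, wave.size)

# Overlay two-UI slices on top of each other to form the eye.
win = 2 * samples_per_ui
for start in range(0, wave.size - win, samples_per_ui):
    plt.plot(wave[start:start + win], color="tab:blue", alpha=0.05)
plt.xlabel("time (samples, two unit intervals)")
plt.ylabel("amplitude (a.u.)")
plt.title("Simulated NRZ eye diagram")
plt.show()
```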

The hex code F628 is transmitted in every frame of every STS-1.

This allows a receiver to locate the alignment of the 125 µs frame within the received serial bit stream. Initially, the receiver scans the serial stream for the code F628. Once it is detected, the receiver watches to verify that the pattern repeats after exactly 810 STS-1 bytes, after which it can declare the frame found. Once the frame alignment is found, the remaining signal can be descrambled and the various overhead bytes extracted and processed.
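
A toy illustration of that hunt process (not real framer code): scan a byte stream for 0xF6 0x28 and only declare frame alignment once the pair recurs exactly 810 bytes later.

```python
def find_sts1_frame(stream: bytes, frame_len: int = 810) -> int | None:
    """Return the offset of the first 0xF6 0x28 pair that also repeats
    exactly one STS-1 frame (810 bytes) later, else None."""
    pattern = bytes([0xF6, 0x28])
    for i in range(len(stream) - frame_len - 1):
        if stream[i:i + 2] == pattern and \
           stream[i + frame_len:i + frame_len + 2] == pattern:
            return i
    return None

# Tiny synthetic example: random filler with framing bytes at offset 17.
import os
frame = bytearray(os.urandom(810)); frame[0:2] = b"\xF6\x28"
stream = os.urandom(17) + bytes(frame) * 3
print(find_sts1_frame(bytes(stream)))   # 17 (a false hit earlier is extremely unlikely)
```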

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled.  Scrambling randomizes the bit stream in order to provide sufficient 0-1 and 1-0 transitions for the receiver to derive a clock with which to receive the digital information.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial indicated above.  The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo random bit sequence.  Note that since the scrambler is frame synchronous, the receiver must have found the frame alignment before the signal can be descrambled.  That is why the frame bytes are not scrambled.
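
For reference, here is a minimal sketch of such a frame-synchronous scrambler, assuming the standard SONET/SDH generating polynomial 1 + x^6 + x^7 with the 7-bit register reset to all ones at the start of each frame; because scrambling is a plain XOR with the same sequence, the identical routine also descrambles.

```python
def sonet_scramble(data: bytes) -> bytes:
    """Frame-synchronous scrambler sketch: polynomial 1 + x^6 + x^7, all-ones seed.

    Call once per frame on the bytes subject to scrambling. Sketch only:
    exact tap/phase conventions should be checked against GR-253 / G.707.
    """
    state = 0x7F                      # 7-bit register, reset each frame
    out = bytearray()
    for byte in data:
        scrambled = 0
        for bit in range(7, -1, -1):  # MSB first
            prbs_bit = (state >> 6) & 1                       # output tap
            feedback = ((state >> 6) ^ (state >> 5)) & 1      # x^7 xor x^6
            state = ((state << 1) | feedback) & 0x7F
            scrambled |= (((byte >> bit) & 1) ^ prbs_bit) << bit
        out.append(scrambled)
    return bytes(out)

payload = b"example payload bytes"
# XOR-ing twice with the same frame-synchronous sequence restores the data.
assert sonet_scramble(sonet_scramble(payload)) == payload
```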

 

One more interesting answer, from ITU-T standardisation expert Huub van Helvoort:-

The initial “F” is to provide four consecutive “1” bits for the clock recovery circuit to lock the clock. The number of “0”s and “1”s in F628 is equal (= 8 each), to take care that there is DC balance in the signal (after line coding).

It has nothing to do with the scrambling; the pattern may even occur in the scrambled signal. However, it is very unlikely that this happens exactly every 125 µs, so there will not be a false lock to this pattern.

An explanation of Huub’s DC-balancing point, with reference to Next Generation Transport Networks: Data, Management, and Control Planes by Manohar Naidu Ellanti:

A factor that was not initially anticipated when the A1 and A2 bit patterns were chosen was the potential effect on the laser for higher-rate signals. For an STS-N signal, the frame begins with N adjacent A1 bytes followed by N adjacent A2 bytes. Note that in A1 there are more 1s than 0s, which means that the laser is on for a higher percentage of the time during the A1 bytes than during the A2 bytes, in which there are more 0s than 1s. As a result, if the laser is directly modulated, for large values of N the lack of balance between 0s and 1s causes the transmitting laser to become hotter during the string of A1 bytes and cooler during the string of A2 bytes. The thermal drift affects the laser performance such that the signal level changes, making it difficult for the receiver threshold detector to track. Most high-speed systems have addressed this problem by using a laser that is continuously on and modulating the signal with a shutter after the laser.
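
A one-line check of the balance point: counting the 1s in A1 (0xF6) and A2 (0x28) gives six and two respectively, i.e. eight ones in sixteen bits, so the A1A2 pair is DC balanced even though each byte on its own is not.

```python
a1, a2 = 0xF6, 0x28                              # A1 = 11110110, A2 = 00101000
print(bin(a1).count("1"), bin(a2).count("1"))    # 6 + 2 = 8 ones out of 16 bits
```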

We know that in SDH frame rate is fixed i.e. 125us.

But in the case of OTN the frame rate is variable, while the frame size is fixed.

So, the OTN frame rate (frame period) can be calculated by the following method:-

Frame period (µs) = ODUk frame size (bits) / ODUk bit rate (bits/s) × 10^6 …………………………………….(1)

 Also, we know that 

STM-16 rate = OPU1 payload rate = 16 × 9 × 270 × 8 × 8000 = 2,488,320,000 bits/s

Now assume a multiplicative factor (Mk)** for the calculation of the various rates:

Mk: OPUk = 238/(239−k), ODUk = 239/(239−k), OTUk = 255/(239−k) (valid for k = 1, 2, 3; for k = 4 the denominator becomes 227)

Now, the master formula to calculate the bit rate of the different O(P/D/T)Uk signals is:

Bit rate of O(P/D/T)Uk (bits/s) = Mk × X × STM-16 = Mk × X × 2,488,320,000 bits/s ………..(2)

where X is the multiple of the STM-16 rate for the given level (X = 1, 4, 16 for k = 1, 2, 3, and X = 40 for k = 4).

Putting the bit rates from equation (2) into equation (1) gives the OTN frame rates.

Eg:-

(Table: OTN bit rates and frame periods — the short sketch below reproduces the numbers for k = 1, 2, 3.)
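
The sketch below simply applies equations (1) and (2) for k = 1, 2, 3 (ODU4/OTU4, with its 227 denominator and X = 40, is left out for brevity). It assumes the ODUk frame is 4 × 3824 bytes and the OTUk frame is 4 × 4080 bytes including FEC; everything else follows the text above.

```python
# Apply equations (1) and (2) to get ODUk/OTUk bit rates and frame periods.
STM16 = 2_488_320_000          # bits/s

def rate(num: int, k: int, x: int) -> float:
    """O(P/D/T)Uk bit rate = (num / (239 - k)) * X * STM-16."""
    return num / (239 - k) * x * STM16

for k, x in [(1, 1), (2, 4), (3, 16)]:
    odu_rate = rate(239, k, x)
    otu_rate = rate(255, k, x)
    odu_period_us = 4 * 3824 * 8 / odu_rate * 1e6
    otu_period_us = 4 * 4080 * 8 / otu_rate * 1e6
    print(f"ODU{k}: {odu_rate/1e9:.6f} Gbit/s, frame period {odu_period_us:.3f} us; "
          f"OTU{k}: {otu_rate/1e9:.6f} Gbit/s, frame period {otu_period_us:.3f} us")
```

Running it gives the familiar ~48.97 µs, ~12.19 µs and ~3.03 µs frame periods for k = 1, 2, 3.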

For further queries revert:)

**The multiplicative factor is just simple math: e.g. for ODU1/OPU1 = 3824/3808 = (239 × 16)/(238 × 16) = 239/238.

The value of the multiplicative factor tells you by how much the frame grows once the header/overhead is added, and hence by how much the bit rate rises, since the frame period stays the same.

As we are using Reed-Solomon RS(255,239), each OTUk row of 4080 bytes is divided into sixteen byte-interleaved codewords (the forward error correction for the OTUk uses 16-byte interleaved codecs using a Reed-Solomon RS(255,239) code; the RS(255,239) code operates on byte symbols).

Hence 4080/16 = 255 bytes per codeword — simple math once you see the interleaving.
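
A trivial sanity check of both ratios mentioned above: 3824/3808 reduces to 239/238, and one OTUk row of 4080 bytes splits into 16 byte-interleaved RS(255,239) codewords of 255 bytes each, of which 239 bytes per codeword come before the FEC parity.

```python
from fractions import Fraction

print(Fraction(3824, 3808))    # 239/238 -> the ODU1/OPU1 rate ratio
print(4080 // 16, 3824 // 16)  # 255 bytes per interleaved codeword, 239 of them non-FEC
```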