
Many of my friends ask why we do not require synchronization in OTN, so I decided to blog on this topic here:-

Here we will discuss the timing aspects of optical transport networks as defined by ITU-T SG15 Q13.

At the time the OTN was first developed, network synchronization was carried over SDH. Because of this, a key decision made during the definition of the first generation of the OTN hierarchy was that the OTN must be transparent to the payloads transported within the ODUk, and that the OTN layer itself does not need to transport network synchronization. Network synchronization would still be carried within the payload, mainly by SDH/synchronous optical network (SONET) client tributaries. The main concern was then that the synchronization characteristics of the SDH tributaries are preserved when carried across the OTN network.

Figure 1. SDH Timing transparency across the OTN.

However, since SDH networks were widely deployed, an approach where the timing is directly carried by the SDH clients was preferable. The reasoning behind this decision was that a single synchronization layer based on SDH was considered simpler. Such a solution requires that the timing of the SDH clients is carried transparently across the OTN network, and that the phase error and wander generated by transport through the OTN remain within defined limits (Fig. 1).

The consequence of this choice is that the OTN was defined as an asynchronous network. The clocks within OTN equipment are free running, and the accuracy of their oscillators has been defined consistent with the accuracy of the client and the amount of offset that can be accommodated by the OTN frame.

In addition, in order to simplify the future development of new mappings, a new container type, the ODUflex, was developed. New clients whose rates are above ODU1 can be mapped synchronously into the ODUflex in a process called the bit-synchronous mapping procedure. The ODUflex is then mapped into a higher-order ODU using GMP.

In this way the generic timing capabilities of OTN clocks are supported, much as for SDH transport. To support the new clients, the OTN now defines three mapping methods:

  • Bit-synchronous mapping procedure (BMP): bit-synchronous mapping into the server layer (used for ODUflex and ODU2e)
  • Asynchronous mapping procedure (AMP): asynchronous mapping with dedicated stuff byte positions in the server-layer ODU (used for payloads with a frequency tolerance of up to ±20 ppm)
  • Generic mapping procedure (GMP): a delta-sigma modulator-based approach, with equal distribution of stuff and data in the transport container and asynchronous mapping into the ODU payload with ±20 ppm ODU clock and ±100 ppm client accuracy

All the above mappings support the transport of synchronization.

In particular, the OTN frame has been defined so that the justification process can accommodate an input signal with a frequency offset of up to ±20 ppm of the nominal frequency, mapped with an internal oscillator whose frequency range is also up to ±20 ppm. In addition, the frame had to support the case of ODUk multiplexing, for which both ODUk signal timings may vary within ±20 ppm. As a result, the G.709 frame was defined to accommodate up to ±65 ppm of offset.
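As a rough illustration of how two free-running tolerances combine (a sketch only, not the exact G.709 justification budget), the worst-case offset a mapper clock sees from a client clock is slightly more than the sum of the two ppm values:

```python
def relative_offset_ppm(client_ppm, mapper_ppm):
    """Worst-case frequency offset (ppm) of a client clock as seen by a
    mapper clock, when each may drift by the given ppm from nominal."""
    client = 1 + client_ppm * 1e-6   # fastest allowed client clock
    mapper = 1 - mapper_ppm * 1e-6   # slowest allowed mapper clock
    return (client / mapper - 1) * 1e6

# Two free-running +/-20 ppm clocks can differ by just over 40 ppm:
print(round(relative_offset_ppm(20, 20), 4))  # 40.0008
```

With multiplexing of ODUk signals whose timings also vary, the justification mechanism must absorb more than this, which is why the frame was dimensioned for ±65 ppm.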

There is no very tricky reason behind this choice: the goal is simply to carry legacy timing information transparently, and the OTN frame period, unlike the fixed 125 µs of SDH/SONET, varies with the bit rate.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

What ITU-T G.798.1 (2003 Edition) says on OTN TIMING FUNCTION

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The OTN does not require any synchronization functionality. The OTN – specifically the mapping/demapping/desynchronizing and multiplexing/demultiplexing processes and justification granularity information – is designed to transport synchronous client signals, like synchronous STM-N and synchronous Ethernet signals. When those signals are bit synchronously mapped into the ODUk (using BMP), this ODUk will be traceable to the same clock to which the synchronous client signal is traceable (i.e., PRC, SSU, SEC/EEC and under a signal fail condition of the synchronous client the AIS/LF clock). When those signals are asynchronously mapped into the ODUk (using AMP or GMP), this ODUk will be plesiochronous with a frequency/bit rate tolerance of ±20 ppm.

Non-synchronous constant bit rate client signals can be mapped bit synchronous (using BMP) or asynchronous (using AMP, GMP) into the ODUk. In the former case, the frequency/bit rate tolerance of the ODUk will be the frequency/bit rate tolerance of the client signal, with a maximum of ±45 ppm for k=0, 1, 2, 3, 4 and ±100 ppm for k=2e, flex. In the latter case, the frequency/bit rate tolerance of the ODUk will be ±20 ppm.

Multiplexing of low order ODUs into a high order ODUk uses an asynchronous mapping (either AMP or GMP). The frequency/bit rate tolerance of the high order ODUk signal is ±20 ppm.

Variable rate packet client signals are mapped into the ODUk using the generic framing procedure (GFP-F). The frequency/bit rate tolerance of the ODUk is ±20 ppm for k=0, 1, 2, 3, 4 and ±100 ppm for k=flex.
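The tolerances above can be summarized in a small lookup table (a sketch; the values are transcribed from the G.798.1 text quoted above, and the function name is my own):

```python
# ODUk frequency tolerance (ppm) by mapping, per the G.798.1 excerpt above.
# For BMP the ODUk inherits the client tolerance, capped as shown.
BMP_CAP = {0: 45, 1: 45, 2: 45, 3: 45, 4: 45, "2e": 100, "flex": 100}

def odu_tolerance_ppm(mapping, k=None, client_ppm=None):
    if mapping == "BMP":              # bit-synchronous: follows the client
        return min(client_ppm, BMP_CAP[k])
    if mapping in ("AMP", "GMP"):     # asynchronous: local +/-20 ppm clock
        return 20
    if mapping == "GFP-F":            # packet clients
        return 100 if k == "flex" else 20
    raise ValueError(mapping)

print(odu_tolerance_ppm("BMP", k=1, client_ppm=20))  # 20: traceable to client
print(odu_tolerance_ppm("GFP-F", k="flex"))          # 100
```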

NOTE – It is possible to use the clock from an EEC or SEC function to generate the ODUk carrying clients mapped with AMP, GMP, or GFP-F or a multiplex of low order ODUs. Such ODUk is then traceable to an EEC, SSU or PRC. At this point in time, such ODUk does not provide support for a Synchronization Status Message (ODUk SSM), and consequently cannot be used as a synchronous-ODUk, i.e., as a synchronous STM-N or synchronous Ethernet replacement signal.

ODUk signals are mapped frame-synchronously into OTUk, thus the frequency/bit rate tolerance of the OTUk signals depends on the frequency/bit rate tolerance of the ODUk signal being carried.

===================================================================================

References:

[1] ITU-T Rec. G.709/Y.1331, “Interfaces for the Optical Transport Network (OTN),” Dec. 2009.

[2] ITU-T Rec. G.8251, “The Control of Jitter and Wander within the Optical Transport Network (OTN),” Nov. 2001.

[3] ITU-T Rec. G.810, “Definitions and Terminology for Synchronization Networks,” 1996.

[4] ITU-T Rec. G.811 “Timing Requirements at the Outputs of Primary Reference Clocks Suitable for Plesiochronous Operation of International Digital Links,” 1988.

[5] ITU-T Rec. G.813, “Timing Characteristics of SDH Equipment Slave Clocks (SEC),” 2003.

[6] IEEE Communications Magazine • September 2010

[7] ITU-T G.798.1, clause 7.3 (excerpted above).

 

What is Q-factor ?

Q-factor measurement occupies an intermediate position between the classical optical parameters (power, OSNR, and wavelength) and the digital end-to-end performance parameters based on BER. A Q-factor is measured in the time domain by analyzing the statistics of the pulse shape of the optical signal. It is a comprehensive measure of the signal quality of an optical channel, taking into account the effects of noise, filtering, and linear/non-linear distortions on the pulse shape, which is not possible with simple optical parameters alone.

Definition 1:

The Q-factor, a function of the OSNR, provides a qualitative description of the receiver performance. It suggests the minimum signal-to-noise ratio (SNR) required to obtain a specific BER for a given signal. OSNR is measured in decibels; the higher the bit rate, the higher the OSNR required. For OC-192 transmissions, the OSNR should be at least 27 to 31 dB, compared to 18 to 21 dB for OC-48.

 Definition 2:

The quality factor is a measure of how noisy a pulse is, for diagnostic purposes. The eye-pattern oscilloscope will typically generate a report that shows the Q-factor number. The Q-factor is defined as shown in the figure: the difference of the mean values of the two signal levels (the level for a “1” bit and the level for a “0” bit) divided by the sum of the noise standard deviations at the two signal levels. A larger value means that the pulse is relatively free from noise.
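This definition is easy to sketch in code. Below is a minimal Python version of Q computed from sampled rail levels (the sample values are invented for illustration):

```python
from statistics import mean, pstdev

def q_factor(ones, zeros):
    """Q = |mu1 - mu0| / (sigma1 + sigma0), from sampled "1" and "0" rails."""
    return abs(mean(ones) - mean(zeros)) / (pstdev(ones) + pstdev(zeros))

# Illustrative oscilloscope samples of the two signal levels:
ones  = [0.9, 1.0, 1.1]
zeros = [-0.1, 0.0, 0.1]
print(round(q_factor(ones, zeros), 2))  # 6.12
```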

 Definition 3:

Q is defined as follows: the ratio between the sum of the distances from the decision point within the eye (D) to each edge of the eye, and the sum of the RMS noise on each edge of the eye.

This definition can be derived from the following expression, which comes from ITU-T G.976 (ref. 3):

Q = (m1 − m0) / (s1 + s0)

where m1,0 are the mean positions of each rail of the eye, and s1,0 are the standard deviations (RMS noise) present on each of these rails.

For an illustration of where these values lie within the eye see the following figure:

 

As Q is a ratio, it is reported as a unit-less positive value greater than 1 (Q > 1). A Q of 1 represents complete closure of the received optical eye. To give some idea of the associated raw BER, a Q of 6 corresponds to a raw BER of 10^-9.

Q factor as defined in ITU-T G.976

The Q factor is the signal-to-noise ratio at the decision circuit in voltage or current units, and is typically expressed by:

Q = (µ1 − µ0) / (σ1 + σ0)                (A-1)

where µ1,0 are the mean values of the marks/spaces voltages or currents, and σ1,0 are the corresponding standard deviations.

The mathematical relation to BER, when the threshold is set to the optimum value, is:

BER = (1/2) · erfc(Q/√2)                (A-2)

with:

erfc(x) = (2/√π) ∫_x^∞ exp(−β²) dβ                (A-3)

 

The Q factor can be written in terms of decibels rather than linear values:

Q(dB) = 20 · log10(Q)                (A-4)
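Relations (A-2) and (A-4) are straightforward to evaluate with the standard library's complementary error function:

```python
import math

def ber_from_q(q):
    """Raw BER for a linear Q, per (A-2): BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_db(q):
    """Q expressed in dB, per (A-4)."""
    return 20 * math.log10(q)

print(f"{ber_from_q(6):.2e}")  # ~1e-9, matching the rule of thumb above
print(round(q_db(6), 2))       # 15.56
```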

 

Calculation of Q-Factor from OSNR

The OSNR is the most important parameter that is associated with a given optical signal. It is a measurable (practical) quantity for a given network, and it can be calculated from the given system parameters. The following sections show you how to calculate OSNR. This section discusses the relationship of OSNR to the Q-factor.

The logarithmic value of Q (in dB) is related to the OSNR by the following commonly used approximation:

Q(dB) ≈ OSNR(dB) + 10 · log10(B0 / (2 · Bc))

In the equation, B0 is the optical bandwidth of the end device (photodetector) and Bc is the electrical bandwidth of the receiver filter.

In other words, Q is somewhat proportional to the OSNR. Generally, noise measurements are performed by optical spectrum analyzers (OSAs) or sampling oscilloscopes, and these measurements are carried out over a particular measurement bandwidth Bm. Typically, Bm is approximately 0.1 nm, or 12.5 GHz, for a given OSA. From the relation between Q (in dB) and OSNR, it can be seen that if B0 < Bc, then OSNR(dB) > Q(dB). For practical designs OSNR(dB) > Q(dB) by at least 1–2 dB; typically, while designing a high-bit-rate system, the margin at the receiver is approximately 2 dB, such that Q is about 2 dB smaller than OSNR(dB).
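A quick sketch of this conversion, assuming the commonly quoted approximation Q² ≈ OSNR · B0/(2·Bc) (an assumption to be checked against your own design rules, not an exact formula from the text):

```python
import math

def q_db_from_osnr(osnr_db, b0_hz, bc_hz):
    """Approximate Q (dB) from OSNR (dB), assuming Q^2 ~ OSNR * B0/(2*Bc).
    b0_hz: optical (measurement) bandwidth; bc_hz: electrical bandwidth."""
    return osnr_db + 10 * math.log10(b0_hz / (2 * bc_hz))

# 0.1 nm OSA bandwidth (~12.5 GHz) and an assumed 10 GHz receiver:
print(round(q_db_from_osnr(20.0, 12.5e9, 10e9), 2))  # 17.96
```

With these example bandwidths, Q comes out about 2 dB below the OSNR, in line with the margin discussed above.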

The Q-factor is, in fact, a metric to identify attenuation in the received signal and determine a potential LOS; it is an estimate of the optical signal-to-noise ratio (OSNR) at the optical receiver. As attenuation of the received signal increases, the dBQ value drops, and vice versa. Hence a drop in the dBQ value can mean an increase in the pre-FEC BER, and a possible LOS could occur if the problem is not corrected in time.

Reference:

ITU-T G.976

What is an eye diagram?

An eye diagram is an overlay of all possible received bit sequences, e.g. the sum of…


Note: it should really be an overlay of “infinitely long” bit sequences to get a true eye. This captures all potential inter-symbol interference.

results in…

(the above is considered to be an “open” eye)

 Eye diagrams can be used to evaluate distortion in the received signal, e.g. a “closed” eye

Note: the wider and more open the eye, the more error-free the network.

Query on LinkedIn: wondering about the difference in use of the OSC and GCC in OTN+WDM systems

The first thing I would like to share is that transport equipment typically contains integrated external DCN ports for standalone management. However, as network infrastructure continues to grow with remote locations, it may not be feasible to have a local external DCN network at each and every remote site. Thus the OSC/GCC/DCC can provide a cost-effective in-band/out-of-band data communications link back to a gateway NE (GNE) and on to centralized network management stations, for remote maintenance from the customer NMS/OSS.

Now a brief on the GCC.
GCCs are General Communication Channels used in OTN.

In OTN, GCC0 is carried in the OTU overhead, while GCC1 and GCC2 are carried in the ODU overhead. GCC0 provides terminal-regen and regen-regen communications; GCC1/GCC2 provide direct terminal-terminal communications. This allows an in-band management solution between the various elements in the network. As they are in-band communication mechanisms, bytes inside the frame are responsible for communication at different levels, as with the DCC in SDH/SONET.

Now a brief on the OSC.

ITU-T G.692 defines the OSC. As we do not have a framing concept for DWDM (it acts as a carrier), we require an out-of-band communication system for maintenance and remote operation. The OSC is generated by special supervisory cards capable of transmitting communication information over a bandwidth of, say, 2 Mbit/s or 155 Mbit/s; the rate is vendor specific.

Again the question arises: what is the OSC?

The OSC is carried on a separate wavelength, different from the wavelengths carrying the actual traffic. It is separated from the other wavelengths at each amplifier stage and received, processed, and retransmitted.

The OSC provides an out-of-band, full-duplex communications channel for remote node management, monitoring and control similar to the Data Communications Channel (DCC) of SONET/SDH and the General Communications Channel (GCC) of OTN. The OSC optically segregates network management and control from user data, so even if the OSC is lost, data forwarding continues uninterrupted.

For WDM systems operating in the C-band, the popular choices for the OSC wavelength include 1310 nm, 1480 nm, 1510 nm, or 1620 nm. Using the 1310 nm band for the OSC precludes the use of this band for carrying traffic.

Both functionality/working principles are part of ITU-T standards.

 

The hex code F628 is transmitted in every frame of every STS-1.

This allows a receiver to locate the alignment of the 125 µs frame within the received serial bit stream. Initially, the receiver scans the serial stream for the code F628. Once it is detected, the receiver watches to verify that the pattern repeats after exactly 810 STS-1 bytes, after which it can declare the frame found. Once frame alignment is found, the remaining signal can be descrambled and the various overhead bytes extracted and processed.
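The hunt-and-confirm step above can be sketched in a few lines (a simplified model of the framer; a real receiver works bit by bit and confirms over several frames):

```python
def find_frame_offset(stream):
    """Scan a byte stream for the A1/A2 framing pattern (0xF6, 0x28) and
    confirm it repeats one STS-1 frame (810 bytes) later, as a SONET
    receiver does before declaring the frame found."""
    FRAME = 810  # bytes per STS-1 frame
    for i in range(len(stream) - FRAME - 1):
        if (stream[i], stream[i + 1]) == (0xF6, 0x28) and \
           (stream[i + FRAME], stream[i + FRAME + 1]) == (0xF6, 0x28):
            return i
    return None

# Two back-to-back dummy frames with the framing bytes at offset 5:
frame = bytes([0xAA] * 5 + [0xF6, 0x28] + [0x00] * 803)
print(find_frame_offset(frame * 2))  # 5
```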

Just prior to transmission, the entire SONET signal, with the exception of the framing bytes and the section trace byte, is scrambled. Scrambling randomizes the bit stream in order to provide sufficient 0-1 and 1-0 transitions for the receiver to derive a clock with which to receive the digital information.

Scrambling is performed by XORing the data signal with a pseudo-random bit sequence generated by the scrambler polynomial 1 + x^6 + x^7. The scrambler is frame synchronous, which means that it starts every frame in the same state.

Descrambling is performed by the receiver by XORing the received signal with the same pseudo-random bit sequence. Note that since the scrambler is frame synchronous, the receiver must have found frame alignment before the signal can be descrambled. That is why the framing bytes are not scrambled.
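Here is a sketch of such a frame-synchronous scrambler in Python (a simplified model: a 7-bit LFSR with generating polynomial 1 + x^6 + x^7, reset to all ones at the start of every frame, which is the SONET convention):

```python
def scrambler_sequence(nbits):
    """Pseudo-random bit sequence of the frame-synchronous scrambler,
    generating polynomial 1 + x^6 + x^7, register seeded with all ones."""
    state = 0x7F  # 7-bit shift register, reset at each frame start
    out = []
    for _ in range(nbits):
        bit = (state >> 6) & 1                  # output taken from x^7 stage
        fb = ((state >> 6) ^ (state >> 5)) & 1  # feedback = x^7 XOR x^6
        out.append(bit)
        state = ((state << 1) | fb) & 0x7F
    return out

def xor_with_sequence(data_bits):
    """Scrambling and descrambling are the same XOR operation."""
    seq = scrambler_sequence(len(data_bits))
    return [d ^ s for d, s in zip(data_bits, seq)]

payload = [1, 0, 1, 1, 0, 0, 1, 0] * 4
assert xor_with_sequence(xor_with_sequence(payload)) == payload  # round trip
```

Because the receiver runs the identical LFSR from the identical frame-start state, applying the same XOR twice restores the original bits, which is exactly why frame alignment must be found first.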

 

One more interesting answer, from standardization expert (ITU-T) Huub van Helvoort:

The initial “F” is there to provide four consecutive “1” bits for the clock recovery circuit to lock the clock. The number of “0”s and “1”s in F628 is equal (8 each), to ensure that there is DC balance in the signal (after line coding).

It has nothing to do with the scrambling; the pattern may even occur in the scrambled signal. However, it is very unlikely that this occurs exactly every 125 µs, so there will be no false lock to this pattern.

An explanation of Huub’s DC-balancing point, from NEXT GENERATION TRANSPORT NETWORKS: Data, Management, and Control Planes by Manohar Naidu Ellanti:

A factor that was not initially anticipated when the A1 and A2 bit patterns were chosen was the potential effect on the laser for higher-rate signals. For an STS-N signal, the frame begins with N adjacent A1 bytes followed by N adjacent A2 bytes. Note that A1 has more 1s than 0s, which means that the laser is on for a higher percentage of the time during the A1 bytes than during the A2 bytes, which have more 0s than 1s. As a result, if the laser is directly modulated, for large values of N, the lack of balance between 0s and 1s causes the transmitting laser to become hotter during the string of A1 bytes and cooler during the string of A2 bytes. The thermal drift affects the laser performance such that the signal level changes, making it difficult for the receiver threshold detector to track. Most high-speed systems have addressed this problem by using a laser that is continuously on and modulating the signal with a shutter after the laser.
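The bit counts behind both observations are easy to check: each framing byte on its own is unbalanced, but the A1/A2 pair taken together is DC balanced:

```python
A1, A2 = 0xF6, 0x28  # SONET framing bytes

def ones(byte):
    """Number of 1 bits in a byte."""
    return bin(byte).count("1")

print(ones(A1), ones(A2))   # 6 2  -> each byte alone is unbalanced
print(ones(A1) + ones(A2))  # 8 ones out of 16 bits -> F628 is DC balanced
```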

We know that in SDH the frame period is fixed at 125 µs.

In OTN, by contrast, the frame period varies while the frame size is fixed.

So, the frame period for OTN can be calculated by the following method:

Frame period (µs) = ODUk frame size (bits) / ODUk bit rate (bit/s) …(1)

 Also, we know that 

STM-16 = OPU1 payload = 16 × 9 × 270 × 8 × 8000 = 2,488,320,000 bit/s

Now assume a multiplicative factor (Mk)** for the rate calculation of the various signals:

For OPUk: Mk = 238/(239−k);  for ODUk: Mk = 239/(239−k);  for OTUk: Mk = 255/(239−k)

Now, Master Formula to calculate bit rate for different O(P/D/T)Uk will be

Bit rate for O(P/D/T)Uk (bit/s) = Mk × X × STM-16 = Mk × X × 2,488,320,000 bit/s …(2)

where X is the granularity in multiples of the STM-16 rate (X = 1, 4, 16 for ODU1/2/3; ODU4 uses X = 40, with 227 in place of 239−k).

Putting the values from equation (2) into equation (1), we get the OTN frame periods.

Eg:-

(table of OTN bit rates and frame periods)
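The calculation can be reproduced in a few lines (a sketch using the simple 239/(239−k) factors, valid for ODU1/2/3):

```python
STM16 = 2_488_320_000            # bit/s
ODU_FRAME_BITS = 4 * 3824 * 8    # fixed ODUk frame: 4 rows x 3824 columns x 8 bits

def odu_rate_bps(k, x):
    """ODUk bit rate = 239/(239 - k) * X * STM-16 rate (k = 1, 2, 3)."""
    return 239 / (239 - k) * x * STM16

def frame_period_us(k, x):
    """Equation (1): frame size divided by bit rate, in microseconds."""
    return ODU_FRAME_BITS / odu_rate_bps(k, x) * 1e6

for k, x in [(1, 1), (2, 4), (3, 16)]:
    print(f"ODU{k}: {odu_rate_bps(k, x) / 1e9:.6f} Gbit/s, "
          f"frame period {frame_period_us(k, x):.3f} us")
```

This reproduces the well-known ODU1/ODU2/ODU3 frame periods of about 48.971 µs, 12.191 µs, and 3.035 µs.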

For further queries revert:)

**The multiplicative factor is just simple math: e.g., for ODU1/OPU1 = 3824/3808 = (239 × 16)/(238 × 16).

This multiplication factor gives the increase in rate needed to carry the extra header/overhead bytes added to the frame.

We use Reed-Solomon RS(255,239), i.e., the 4080 bytes of an OTUk row are divided into sixteen interleaved codewords. (The forward error correction for the OTUk uses 16-byte-interleaved codecs with a Reed-Solomon RS(255,239) code, which operates on byte symbols.)

Hence 4080/16 = 255, the RS codeword length in bytes.

>> SPECIFICATION COMPARISON

>CHARACTERISTICS COMPARISON 

>>WDM BAND

*****************************************************************************************************************************

References

EDFA-Erbium Doped Fiber Amplifier

EDFA block

RAMAN AMPLIFIER

SOA-Semiconductor Optical Amplifier

Latency in Fiber Optic Networks

As we are all very much aware, Internet traffic is growing very fast. The more information we transmit, the more we need to think about parameters like available bandwidth and latency. Bandwidth is usually understood by end-users as the important indicator and measure of network performance. It is surely a reliable figure of merit, but it mainly depends on the characteristics of the equipment. Unlike bandwidth, latency and jitter depend on the specific context of transmission network topology and traffic conditions.

By latency we understand the delay from the time of packet transmission at the sender to the end of packet reception at the receiver. If latency is too high, it spreads data packets over time and can create the impression that an optical metro network is not operating at the expected data transmission speed. Data packets are still transported at the same bit rate, but due to latency they are delayed, affecting overall transmission system performance.

It should be pointed out that there is a need for low-latency optical networks in almost all industries where data transmission takes place. Low latency is becoming a critical requirement for a wide set of applications, like financial transactions, videoconferencing, gaming, telemedicine, and cloud services, which require transmission with almost no delay. These industries are summarized in Table 1.

Table 1. Industries where low-latency services are very important.

In fiber optical networks, latency consists of three main components which add extra time delay:

  •  the optical fiber itself,
  •  optical components
  •  opto-electrical components.

Therefore, it is extremely important for the service provider to choose the best network components and to think about an efficient low-latency transport strategy.

Latency is a critical requirement for the wide set of applications mentioned above. Even a latency of 250 ns can make the difference between winning and losing a trade. Latency reduction is very important in the financial sector, for example in the stock exchange market, where 10 ms of latency could potentially result in a 10% drop in revenues for a company. No matter how fast you can execute a trade command, if your market data is delayed relative to competing traders, you will not achieve the expected fill rates and your revenue will drop. Low latency trading has moved from executing a transaction within several seconds to milliseconds, microseconds, and now even nanoseconds.

LATENCY SOURCES IN OPTICAL NETWORKS

Latency is the time delay experienced in a system; it describes how long it takes for data to get from the transmitting side to the receiving side. In a fiber optical communication system it is essentially the length of the optical fiber divided by the speed of light in the fiber core, supplemented by the delay induced by optical and electro-optical elements, plus any extra processing time required by the system, also called overhead. Signal processing delay can be reduced by using parallel processing based on large-scale-integration CMOS technologies.

In addition to the latency due to propagation in the fiber, other path building blocks affect the total data transport time. These elements include:

  •   opto-electrical conversion,
  •   switching and routing,
  •   signal regeneration,
  •   amplification,
  •   chromatic dispersion (CD) compensation,
  •   polarization mode dispersion (PMD) compensation,
  •   data packing, digital signal processing (DSP),
  •   protocols and additional forward error correction (FEC)

Data transmission speed over an optical metro network must be carefully chosen. If we upgrade a 2.5 Gbit/s link to 10 Gbit/s, then CD compensation or amplification may become necessary, but this also increases overall latency. For optical lines with transmission speeds above 10 Gbit/s (e.g. 40 Gbit/s), a need for coherent detection arises. In coherent detection systems CD can be compensated electrically using DSP, which also adds latency. Therefore, some companies avoid using coherent detection in their low-latency network solutions.

From the standpoint of personal communications, effective dialogue requires latency < 200 ms, an echo needs > 80 ms to be distinguished from its source, remote music lessons require latency < 20 ms, and remote performance < 5 ms. It has been reported that in virtual environments human beings can detect latencies as low as 10 to 20 ms. In the trading industry or in telehealth, every microsecond matters. In all cases, the lower the latency, the better the system performance will be.

Single mode optical fiber

In standard single-mode fiber, the major part of the light signal travels in the core, while a small amount travels in the cladding. Optical fiber with a lower group index of refraction provides an advantage in low-latency applications. It is useful to use the parameter “effective group index of refraction” (neff) instead of the “index of refraction” (n), which only defines the refractive index of the core or cladding of a single-mode fiber. The neff parameter is a weighted average of all the indices of refraction encountered by light as it travels within the fiber, and therefore it represents the actual behavior of light within a given fiber. The impact of profile shape on neff, comparing its values for several Corning single-mode fiber (SMF) products with different refractive index profiles, is illustrated in Fig. 2.

 

Figure 2. Effective group index of refraction impact of various commercially available Corning single mode fiber types.

It is known that the speed of light in vacuum is 299792.458 km/s. Assuming ideal propagation at the speed of light in vacuum, the unavoidable latency can be calculated as in Equation (1):

t = L / c = 1 km / 299792.458 km/s ≈ 3.336 µs per km    (1)

However, due to the fiber’s refractive index, light travels more slowly in optical fiber than in vacuum. In standard single mode fiber defined by the ITU-T G.652 recommendation, the effective group index of refraction (neff), for example, can be equal to 1.4676 for transmission at 1310 nm and 1.4682 for transmission at 1550 nm. Knowing neff, we can express the speed of light in the selected optical fiber at the 1310 and 1550 nm wavelengths, see Equations (2) and (3):

v(1310 nm) = c / neff = 299792.458 / 1.4676 ≈ 204274 km/s    (2)
v(1550 nm) = c / neff = 299792.458 / 1.4682 ≈ 204190 km/s    (3)

Knowing the speed of light in the optical fiber at different wavelengths (see Equations (2) and (3)), the optical delay caused by 1 km of optical fiber can be calculated as follows:

t(1310 nm) = 1 km / 204274 km/s ≈ 4.895 µs    (4)
t(1550 nm) = 1 km / 204190 km/s ≈ 4.897 µs    (5)

As one can see from Equations (4) and (5), the propagation delay of the optical signal is affected not only by the fiber type with its particular neff, but also by the wavelength used for data transmission over the fiber optical network. The optical signal delay in single mode optical fiber is about 4.9 µs per km. This value is the practical lower limit of latency achievable for 1 km of fiber, if it were possible to remove all other sources of latency caused by other elements and by data processing overhead.
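The per-kilometre delay follows directly from neff, as a short sketch shows:

```python
C_VACUUM_KM_S = 299792.458  # speed of light in vacuum, km/s

def fiber_delay_us_per_km(n_eff):
    """Propagation delay per km of fiber for a given effective group index."""
    return n_eff / C_VACUUM_KM_S * 1e6  # microseconds per km

# G.652 SMF values quoted above:
print(round(fiber_delay_us_per_km(1.4676), 3))  # 1310 nm -> 4.895 us/km
print(round(fiber_delay_us_per_km(1.4682), 3))  # 1550 nm -> 4.897 us/km
```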

Photonic crystal fibers (PCFs) can have a very low effective refractive index and can propagate light much faster than SMFs. For example, hollow-core fiber (HCF) may provide up to 31% lower latency relative to traditional fiber optics. The problem is that attenuation in HCF is much higher than in the already deployed standard single mode fibers (for SMF α = 0.2 dB/km, but for HCF α = 3.3 dB/km at 1550 nm). However, attenuation as low as 1.2 dB/km has been reported in hollow-core photonic crystal fiber.

Chromatic Dispersion Compensation

Chromatic dispersion (CD) occurs because different wavelengths of light travel at different speeds in optical fiber. CD can be compensated by dispersion compensation module (DCM) where dispersion compensating fiber (DCF) or fiber Bragg grating (FBG) is employed.

A typical long-reach metro access fiber optical network will require DCF of approximately 15 to 25% of the overall fiber length. This means that the use of DCF adds about 15 to 25% to the latency of the fiber. For example, a 100 km long optical metro network using standard single mode fiber (SMF) can accumulate chromatic dispersion of about 1800 ps/nm at 1550 nm. Full CD compensation then needs about a 22.5 km long DCF spool with a large negative dispersion (a typical value is −80 ps/nm/km). If we assume that the light propagation speed in DCF is close to that in SMF, the total latency of a 100 km optical metro network with CD compensation using a DCF DCM is about 0.6 ms.
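This example can be worked through in code (a sketch; the ~18 ps/nm/km SMF and −80 ps/nm/km DCF coefficients are the typical values assumed above):

```python
FIBER_DELAY_US_PER_KM = 4.9  # approximate SMF/DCF propagation delay

def dcf_link_latency(span_km, cd_ps_nm_km=18, dcf_d_ps_nm_km=-80):
    """Length of DCF needed to null a span's dispersion, and the total
    span + DCF propagation latency in milliseconds."""
    accumulated_cd = span_km * cd_ps_nm_km        # ps/nm over the span
    dcf_km = accumulated_cd / -dcf_d_ps_nm_km     # DCF spool length
    total_ms = (span_km + dcf_km) * FIBER_DELAY_US_PER_KM / 1000
    return dcf_km, total_ms

dcf_km, latency_ms = dcf_link_latency(100)
print(round(dcf_km, 1), round(latency_ms, 2))  # 22.5 km of DCF, ~0.6 ms total
```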

Solution for how to avoid need for chromatic dispersion compensation or reduce the length of necessary DCF fiber is to use optical fiber with lower CD coefficient. For example, non-zero dispersion shifted fibers (NZ-DSFs) were developed to simplify CD compensation while making a wide band of channels available. NZ-DSF fiber parameters are defined in ITU-T G.655 recommendation. Today NZ-DSF fibers are optimized for regional and metropolitan high speed optical networks operating in the C- and L- optical bands. For C band it is defined that wavelength range is from 1530 to 1565 nm, but for L band it is from 1565 to 1625 nm.

For commercially available NZ-DSF fiber, the chromatic dispersion coefficient can be from 2.6 to 6.0 ps/nm/km in the C-band and from 4.0 to 8.9 ps/nm/km in the L-band. In the 1550 nm region a typical CD coefficient for this fiber type is about 4 ps/nm/km. It can be seen that the CD coefficient of G.655 NZ-DSF fiber is about four times lower than that of standard G.652 SMF. Since these fibers have lower dispersion than conventional single mode fiber, simpler compensation modules are used that add only up to 5% to the transmission time for NZ-DSF. This enables lower latency than transmission over SMF. Another way to minimize the need for extra CD compensation, or reduce it to the necessary minimum, is dispersion shifted fiber (DSF), specified in the ITU-T G.653 recommendation. This fiber is optimized for use in the 1550 nm region and has no chromatic dispersion at the 1550 nm wavelength. However, it is limited to single-wavelength operation due to non-linear four wave mixing (FWM), which causes optical signal distortions.

If CD is unavoidable, another technology for compensating accumulated CD is the deployment of fiber Bragg gratings (FBG). A DCM with FBG can compensate several hundred kilometers of CD without any significant latency penalty and effectively removes all the additional latency that DCF-based networks add. In other words, a lot of valuable microseconds can be gained by migrating from DCF DCM to FBG DCM technology in an optical metro network. The typical fiber length in an FBG used for dispersion compensation is about 10 cm; therefore an FBG-based DCM normally introduces only 5 to 50 ns of delay into the fiber optical transmission line.

One way to avoid a DCF DCM, which introduces additional delay, is coherent detection, where complex transmission formats such as quadrature phase-shift keying (QPSK) can be used. However, it must be noted that this can be a poor choice from a latency perspective because of the added digital signal processing (DSP) time it requires. This additional delay can be up to 1 µs.

Optical amplifiers

Another key optical component which adds time delay to an optical transmission line is the optical amplifier. Erbium doped fiber amplifiers (EDFAs) are widely used in fiber optical access and long haul networks. An EDFA can amplify signals over a band of almost 30 to 35 nm, extending from 1530 to 1565 nm (known as the C-band amplifier) and from 1565 to 1605 nm (known as the L-band EDFA). The great advantage of EDFAs is that they are capable of amplifying many WDM channels simultaneously, with no need to amplify each individual channel separately. EDFAs also remove the requirement for optical-electrical-optical (OEO) conversion, which is highly beneficial from a low-latency perspective. However, it must be taken into account that an EDFA contains some meters of erbium-doped optical fiber (Er3+), which adds extra latency, although this amount is small compared with other latency contributors. A typical EDFA contains up to 30 m of erbium doped fiber; these 30 m of additional fiber add about 147 ns (about 0.15 µs) of delay.
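The 147 ns figure follows from the same propagation-delay arithmetic used earlier (a sketch; a group index of ~1.468 for the doped fiber is an assumption):

```python
C_M_S = 299792458.0  # speed of light in vacuum, m/s
N_EFF = 1.468        # assumed group index of the erbium-doped fiber

def component_fiber_delay_ns(length_m, n_eff=N_EFF):
    """Delay added by a length of fiber inside a component such as an EDFA."""
    return length_m * n_eff / C_M_S * 1e9

print(round(component_fiber_delay_ns(30)))  # ~147 ns for 30 m of doped fiber
```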

A way to avoid or reduce extra latency when amplification is necessary is to use a Raman amplifier instead of an EDFA, or together (in tandem) with an EDFA. This combination provides maximal signal amplification with minimal latency. Raman amplifiers use a different optical characteristic to amplify the optical signal: Raman amplification is realized by using stimulated Raman scattering. The Raman gain spectrum is rather broad, and the peak of the gain is centered about 13 THz (100 nm in wavelength) below the frequency of the pump signal used. By pumping a fiber with a high-power pump laser, we can provide gain to other signals, with peak gain obtained 13 THz below the pump frequency. For example, pumps around 1460–1480 nm provide Raman gain in the 1550–1600 nm window, which partly covers the C and L bands. Accordingly, we can use the Raman effect to provide gain at any wavelength we want to amplify. The main benefit regarding latency is that the Raman amplifier pumps the optical signal without adding fiber to the signal path, so we can assume that a Raman amplifier adds no latency.

Transponders and opto-electrical conversion

Any transmission-line component that performs opto-electrical conversion increases total latency. Key elements used in opto-electrical conversion are transponders and muxponders. A transponder converts the incoming client signal to a signal suitable for transmission over the WDM link, and an incoming signal from the WDM link to a suitable signal toward the client. A muxponder does essentially the same, with the additional ability to multiplex lower-rate signals into a higher-rate carrier (e.g., 10 Gbit/s services onto a 40 Gbit/s transport), thereby saving valuable wavelengths in the optical metro network.

The latency of both transponders and muxponders varies depending on design, functionality, and other parameters. Muxponders typically operate in the 5 to 10 μs range per unit. More complex transponders include additional functionality such as in-band management channels; this complexity pushes their design and latency close to that of a muxponder, also in the 5 to 10 μs range. If additional FEC is used in these elements, the latency can be higher. Several telecommunications equipment vendors offer simpler, lower-cost transponders that omit FEC or in-band management channels, or implement these options in a way that lowers device delay. These modules can operate at much lower latencies, from 4 ns to 30 ns. Some vendors even claim that their transponders operate with 2 ns latency, which is equivalent to adding about half a meter of SMF to the optical path.
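The "about half a meter of SMF" equivalence is easy to verify: latency times the speed of light in fibre gives the equivalent length. A quick sketch (the group index of 1.468 is an assumed typical value):

```python
C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
N_GROUP = 1.468          # assumed typical group index of SMF

def equivalent_fibre_m(latency_s: float) -> float:
    """Length of SMF that would add the same propagation delay."""
    return latency_s * C_VACUUM / N_GROUP

# A 2 ns transponder:
print(round(equivalent_fibre_m(2e-9), 2))   # ~0.41 m, i.e. about half a metre
```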

Optical signal regeneration

For low-latency optical metro networks it is very important to avoid any regeneration and to focus on keeping the signal in the optical domain once it has entered the fiber. An optical-electrical-optical (OEO) conversion takes about 100 μs, depending on how much processing is required in the electrical domain. Ideally, a carrier would avoid the use of FEC or full 3R (reamplification, reshaping, retiming) regeneration; 3R regeneration needs OEO conversion, which adds unnecessary delay. The need for optical signal regeneration is determined by the transmission data rate involved, whether dispersion compensation or amplification is required, and how many nodes the signal must pass through along the optical path.

Forward error correction and digital signal processing

It is necessary to minimize the amount of electrical processing at both ends of a fiber-optic connection. FEC, if used (for example, in transponders), increases the latency due to the extra processing time. The added latency can range from roughly 15 to 150 μs, depending on the algorithm used, the amount of overhead, the coding gain, the processing time, and other parameters.

Digital signal processing (DSP) can be used to deal with chromatic dispersion (CD) and polarization mode dispersion (PMD) and to remove critical optical impairments. It must, however, be taken into account that DSP adds extra latency to the path; as mentioned before, this additional delay can be up to 1 μs.

Latency regarding OSI Levels

Latency is added not only by the physical medium but also by the data processing implemented in the electronic parts of a fiber-optic metro network (essentially the transmitter and receiver). All modern networks are based upon the Open System Interconnection (OSI) reference model, which consists of a 7-layer protocol stack; see Fig. 3.

 

Figure 3. OSI reference model illustrating (a) total latency increase over each layer and (b) data way passing through all protocol layers in transmitter and receiver.

SUMMARY

Latency sources in an optical metro network, and typical induced time-delay values.

 

 

Introduction

Automatic Protection Switching (APS) is one of the most valuable features of SONET and SDH networks. Networks with APS react quickly to failures, minimizing lost traffic, which minimizes lost revenue to service providers. The network is said to be "self-healing." This application note covers how to use Sunrise Telecom SONET and SDH analyzers to measure the amount of time it takes for a network to complete an automatic protection switchover. This is important since ANSI T1.105.1 and ITU-T G.841 require that a protection switchover occur within 50 ms. To understand the APS measurement, a brief review is first given. This is followed by an explanation of the basis behind the APS measurement. The final section covers how to operate your Sunrise Telecom SONET and SDH equipment to make an APS time measurement.

What Is APS?

Automatic Protection Switching keeps the network working even if a network element or link fails.  The Network Elements (NEs) in a SONET/SDH network constantly monitor the health of the network.  When a failure is detected by one or more network elements, the network proceeds through a coordinated predefined sequence of steps to transfer (or switchover) live traffic to the backup facility (also called “protection” facility).  This is done very quickly to minimize lost traffic.  Traffic remains on the protection facility until the primary facility (working facility) fault is cleared, at which time the traffic may revert to the working facility.

In a SONET or SDH network, the transmission is protected on optical sections from the Headend (the point at which the Line/Multiplexer Section Overhead is inserted) to the Tailend (the point where the Line/Multiplexer Section Overhead is terminated).

The K1 and K2 Line/Multiplexer Section Overhead bytes carry an Automatic Protection Switching protocol used to coordinate protection switching between the headend and the tailend.

The protocol for the APS channel is summarized in Figure 1. The 16 bits within the APS channel contain information on the APS configuration, detection of network failure, APS commands, and revert commands. When a network failure is detected, the Line/Multiplexer Section Terminating Equipment communicates and coordinates the protection switchover by changing certain bits within the K1 & K2 bytes.

During the protection switchover, the network elements signal an APS by sending AIS throughout the network. AIS is also present at the ADM drop points. The AIS condition may come and go as the network elements progress through their algorithm to switch traffic to the protection circuit.

AIS signals an APS event. But what causes the network to initiate an automatic protection switchover? The three most common are:

  • Detection of AIS (AIS is used to both initiate and signal an APS event)
  • Detection of excessive B2 errors
  • Initiation through a network management terminal

According to GR-253 and G.841, a network element is required to detect AIS and initiate an APS within 10 ms.  B2 errors should be detected according to a defined algorithm, and more than 10 ms is allowed.  This means that the entire time for both failure detection and traffic restoration may be 60 ms or more (10 ms or more detect time plus 50 ms switch time).

Protection Architectures

There are two types of protection for networks with APS:

  • Linear Protection, based on ANSI T1.105.1 and ITU-T G.783 for point-to-point (end-to-end) connections.
  • Ring Protection, based on ANSI T1.105.1 and ITU-T G.841 for ring structures (ring structures can also be found with two types of protection mechanisms – Unidirectional and Bidirectional rings).

Refer to Figures 2-4 for APS architectures and unidirectional vs. bidirectional rings.

Protection Switching Schemes

The two most common schemes are 1+1 protection switching and 1:n protection switching. In both structures, the K1 byte contains both the switching preemption priorities (in bits 1 to 4) and the channel number of the channel requesting action (in bits 5 to 8). The K2 byte contains the channel number of the channel that is bridged onto protection (in bits 1 to 4) and the mode type (in bit 5); bits 6 to 8 contain various conditions such as AIS-L, RDI-L, and an indication of unidirectional or bidirectional switching.
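The K1/K2 bit fields described above can be unpacked with simple shifts and masks. A hypothetical helper, following the bit-1-as-MSB convention used in SONET/SDH overhead bytes:

```python
# Hypothetical decoders for the APS overhead bytes (bit 1 = most significant bit).
def parse_k1(k1: int) -> dict:
    return {
        "request": (k1 >> 4) & 0x0F,   # bits 1-4: switching request / priority
        "channel": k1 & 0x0F,          # bits 5-8: channel requesting action
    }

def parse_k2(k2: int) -> dict:
    return {
        "bridged_channel": (k2 >> 4) & 0x0F,  # bits 1-4: channel bridged onto protection
        "mode_bit": (k2 >> 3) & 0x01,         # bit 5: provisioned mode type
        "status": k2 & 0x07,                  # bits 6-8: e.g. 0b111 = AIS-L, 0b110 = RDI-L
    }

print(parse_k1(0xB2))   # request 0xB on channel 2
```

A test set or monitor applies exactly this kind of decode to each received frame to track the switchover sequence.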

 

1+1

In 1+1 protection switching, there is a protection facility (backup line) for each working facility.  At the headend, the optical signal is bridged permanently (split into two signals) and sent over both the working and the protection facilities simultaneously, producing a working signal and a protection signal that are identical.  At the tailend, both signals are monitored independently for failures.  The receiving equipment selects either the working or the protection signal.  This selection is based on the switch initiation criteria which are either a signal fail (hard failure such as the loss of frame (LOF) within an optical signal), or a signal degrade (soft failure caused by a bit error ratio exceeding some predefined value).

Refer to Figure 5.

Normally, 1+1 protection switching is unidirectional, although if the line terminating equipment at both ends supports bidirectional switching, the unidirectional default can be overridden. Switching can be either revertive (the flow reverts to the working facility as soon as the failure has been corrected) or nonrevertive (the protection facility is treated as the working facility).

In 1+1 protection architecture, all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes. In 1+1 bidirectional switching, the K2 byte signaling indicates to the headend that a facility has been switched so that it can start to receive on the now active facility.

1:n

In 1:n protection switching, there is one protection facility for several working facilities (the range is from 1 to 14) and all communications from the headend to the tailend are carried out over the APS channel via the K1 and K2 bytes.  All switching is revertive, that is, the traffic reverts back to the working facility as soon as the failure has been corrected. Refer to Figure 6.

 

Optical signals are normally sent only over the working facilities, with the protection facility being kept free until a working facility fails.  Let us look at a failure in a bidirectional architecture.  Suppose the tailend detects a failure on working facility 2.  The tailend sends a message in bits 5-8 of the K1 byte to the headend over the protection facility requesting switch action.  The headend can then act directly, or if there is more than one problem, the headend decides which is top priority.  On a decision to act on the problem on working facility 2, the headend carries out the following steps:

  1. Bridges working facility 2 at the headend to the protection facility.
  2. Returns a message on the K2 byte, indicating the channel number of the traffic on the protection channel, to the tailend.
  3. Sends a Reverse Request to the tailend via the K1 byte to initiate a bidirectional switch.

On receipt of this message, the tailend carries out the following steps:

    1. Switches to the protection facility to receive.
    2. Bridges working facility 2 to the protection facility to transmit back.

Now transmission is carried out over the new working facility.

Switch Action Comments

In unidirectional architecture, the tailend makes the decision on priorities. In bidirectional architecture, the headend makes the decision on priorities. If there is more than one failure at a time, a priority hierarchy determines which working facility will be backed up by the protection facility. The priorities that are indicated in bits 1-4 of the K1 byte are as follows:

  1. Lockout
  2. Forced switch (to protection channel regardless of its state) for span or ring; applies only to 1:n switching
  3. Signal fails (high then low priority) for span or ring
  4. Signal degrades (high then low priority) for span or ring; applies only to 1:n switching
  5. Manual switch (to an unoccupied fault-free protection channel) for span or ring; applies only to 1+1 LTE
  6. Wait-to-restore
  7. Exerciser for span or ring (may not apply to some linear APS systems)
  8. Reverse request (only for bidirectional)
  9. Do not revert (only 1+1 LTE provisioned for nonrevertive switching transmits Do Not Revert)
  10. No request

Depending on the protection architecture, K1 and K2 bytes can be decoded as shown in Tables 1-5.

Linear Protection (ITU-T G.783)

Ring Protection (ITU-T G.841)

Collected from the Sunrise Telecom APS application note.

GENERIC FRAMING PROCEDURE (GFP)

The Generic Framing Procedure (GFP), defined in ITU-T G.7041, is a mechanism for mapping constant and variable bit-rate data into synchronous SDH/SONET envelopes. GFP supports many types of protocols, including those used in local area networks (LANs) and storage area networks (SANs). In all cases GFP adds very low overhead, increasing the efficiency of the optical layer.

Currently, two modes of client signal adaptation are defined for GFP:

  • Frame-Mapped GFP (GFP-F), a layer 2, PDU-oriented adaptation mode. It is optimized for data packet protocols (e.g., Ethernet, PPP, DVB) that are encapsulated into variable-size frames.
  • Transparent GFP (GFP-T), a layer 1, block-code-oriented adaptation mode. It is optimized for protocols using an 8B/10B physical layer (e.g., Fibre Channel, ESCON, Gigabit Ethernet 1000BASE-X) that are encapsulated into constant-size frames.

GFP could be seen as a method to deploy metropolitan networks, and simultaneously to support mainframes and server storage protocols.

  Data packet aggregation using GFP: packets wait in queues to be mapped onto a TDM channel; at the far end, packets are dropped back into a queue and delivered. GFP frame multiplexing and sub-multiplexing: the figure shows the encapsulation mechanism and the transport of GFP frames in VC containers embedded in the STM frames.

Figure: GFP frame formats and protocols

 

Framed-mapped GFP

In Frame-mapped GFP (GFP-F), one complete client packet is mapped entirely into one GFP frame. Idle packets are not transmitted, resulting in more efficient transport. However, specific mechanisms are required to transport each type of protocol.

Figure: GFP client mapping formats

GFP-F can be used for Ethernet, PPP/IP, and HDLC-like protocols where efficiency and flexibility are important. To perform the encapsulation it is necessary to receive the complete client packet first; this increases latency, making GFP-F inappropriate for time-sensitive protocols.

Transparent GFP (GFP-T)

Transparent GFP (GFP-T) is a protocol-independent encapsulation method in which all client code words are decoded and mapped into fixed-length GFP frames. The frames are transmitted immediately, without waiting for the entire client data packet to be received. GFP-T is therefore also a Layer 1 transport mechanism: all client characters are moved to the far end regardless of whether they are information, headers, control, or any other kind of overhead.

GFP-T can adapt multiple protocols using the same hardware, as long as they are based on 8B/10B line coding. These line codes are transcoded to 64B/65B and then encapsulated into fixed-size GFP-T frames. Everything is transported, including inter-frame gaps, which may carry flow-control characters or other information.

GFP-T is very good for isochronous or delay-sensitive protocols, and also for Storage Area Network (SAN) protocols such as ESCON or FICON. This is because it is not necessary to process client frames or to wait for the arrival of a complete frame. This advantage is offset by lost efficiency, because the source MSPP node still generates traffic when no data is being received from the client.

GFP enables MSPP nodes to offer both TDM and packet-oriented services, managing transmission priorities and discard eligibility. GFP replaces legacy mappings, most of them proprietary. In principle GFP is just an encapsulation procedure, but a robust and standardised one for the transport of packetised data over SDH and OTN alike.

GFP uses a HEC-based delineation technique similar to ATM's, and therefore needs no bit or byte stuffing. The frame size can easily be set to a constant length.
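HEC-based delineation can be sketched as follows. This is a simplified illustration, not a faithful G.7041 implementation (the real procedure also XORs the core header with a fixed scrambling pattern and can correct single-bit cHEC errors), but it shows the principle: the receiver slides byte by byte until a candidate 2-byte length field (PLI) is consistent with the CRC-16 that follows it.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021) -> int:
    """Bitwise CRC-16, generator x^16 + x^12 + x^5 + 1, zero-initialised."""
    reg = 0
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly) if reg & 0x8000 else (reg << 1)
            reg &= 0xFFFF
    return reg

def hunt(stream: bytes) -> int:
    """Slide byte by byte until PLI + cHEC are consistent; return the offset."""
    for i in range(len(stream) - 3):
        pli, chec = stream[i:i + 2], stream[i + 2:i + 4]
        if crc16_ccitt(pli) == int.from_bytes(chec, "big"):
            return i
    return -1

# Build a frame header: PLI = 100, followed by its cHEC.
pli = (100).to_bytes(2, "big")
header = pli + crc16_ccitt(pli).to_bytes(2, "big")
print(hunt(b"\x00" + header + bytes(8)))   # 1: header found after the stray byte
```

Once synchronised, the receiver simply jumps PLI bytes ahead to the next header, which is why no stuffing is needed.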

In GFP-F mode there is an optional GFP extension header, protected by its own HEC (eHEC), that can carry protocol-specific fields such as source/destination address, port numbers, and class of service. Among the EXI types, the 'Linear' extension header supports sub-multiplexing onto a single path; its Channel ID (CID) field enables sub-multiplexing over one VC channel in GFP-F mode.

 

CONCATENATION

Concatenation is the process of summing the bandwidth of X containers (C-i) into a larger container, providing a bandwidth X times bigger than a single C-i. It is well suited to transporting big payloads requiring a container greater than VC-4, but it is also possible to concatenate low-capacity containers such as VC-11, VC-12, or VC-2.

Figure An example of contiguous concatenation and virtual concatenation. Contiguous concatenation requires support by all the nodes. Virtual concatenation allocates bandwidth more efficiently, and can be supported by legacy installations.

There are two concatenation methods:

  1. Contiguous concatenation, which creates big containers that cannot be split into smaller pieces during transmission. For this, each NE must have concatenation functionality.
  2. Virtual concatenation, which transports the individual VCs and aggregates them at the end point of the transmission path. For this, concatenation functionality is only needed at the path termination equipment.

Contiguous Concatenation of VC-4

A VC-4-Xc provides a payload area of X containers of C-4 type. It uses the same HO-POH used in VC-4, with identical functionality. This structure can be transported in an STM-X frame, but other combinations are also possible; for instance, a VC-4-4c can be transported in STM-16 and STM-64 frames. Concatenation guarantees the integrity of the bit sequence, because the whole container is transported as a unit across the whole network.

An AU-4-Xc pointer, just like any other AU pointer, indicates the position of J1, the first byte of the VC-4-Xc container. The first pointer takes the same value as an AU-4 pointer, while the remaining pointer bytes take the fixed value 1001SS11 to indicate concatenation. Pointer justification is carried out in the same way for all the concatenated AU-4s, using X × 3 stuffing bytes.

However, contiguous concatenation today is more theory than practice, since more bandwidth-efficient alternatives, such as virtual concatenation, are gaining importance.

 

Virtual Concatenation

Connectionless and packet-oriented technologies, such as IP or Ethernet, do not match well the bandwidth granularity provided by contiguous concatenation. For example, to transport 1 Gbit/s it would be necessary to allocate a VC-4-16c container, which has a 2.4 Gbit/s capacity: more than double the bandwidth that is needed.
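The arithmetic behind this example, extended to the virtually concatenated alternative discussed below (the efficiency percentages are derived here, not taken from the text):

```python
import math

VC4_KBPS = 149_760   # payload of one C-4 container, kbit/s

def vcat_members(client_kbps: int) -> int:
    """Smallest X such that a VC-4-Xv group carries the client."""
    return math.ceil(client_kbps / VC4_KBPS)

gbe = 1_000_000                  # 1 Gbit/s Ethernet client
x = vcat_members(gbe)            # 7 -> VC-4-7v
vcat_kbps = x * VC4_KBPS         # 1,048,320 kbit/s allocated
contig_kbps = 16 * VC4_KBPS      # VC-4-16c: 2,396,160 kbit/s allocated

print(x, f"{gbe / vcat_kbps:.0%}", f"{gbe / contig_kbps:.0%}")  # 7 95% 42%
```

A VC-4-7v fills roughly 95% of its allocated bandwidth with client traffic, versus about 42% for the contiguous VC-4-16c.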

 

Virtual concatenation (VCAT) is a solution that allows granular increments of bandwidth in single VC units. At the MSSP source node, VCAT creates a continuous payload equivalent to X times the VC-n unit. The set of X containers is known as a virtual container group (VCG), and each individual VC is a member of the VCG. All the VC members are sent to the MSSP destination node independently, using any available path if necessary. At the destination, all the VC-n are reordered according to the indications provided by the H4 or V5 byte, and finally delivered to the client.

Differential delays between VCG members are likely, because they are transported individually and may have used different paths with different latencies. The destination MSSP must therefore compensate for the different delays before reassembling the payload and delivering the service.

Virtual concatenation is required only at edge nodes and is compatible with legacy SDH networks, even though those networks do not themselves support concatenation. To get the full benefit, individual containers should be transported over different routes across the network, so that if a link or node fails the connection is only partially affected. This is also a way of providing a resilient service.

 Higher Order Virtual Concatenation

Higher Order Virtual Concatenation (HO-VCAT) uses X times VC-3 or VC-4 containers (VC-3/4-Xv, X = 1 … 256), providing a payload capacity of X times 48384 or 149760 kbit/s respectively.

 

The virtually concatenated container VC-3/4-Xv is mapped into X independent VC-3 or VC-4 envelopes that are transported individually through the network. Differential delays can occur between the individual VCs; these obviously have to be compensated for when the original payload is reassembled.

A multiframe mechanism has been implemented in H4 to compensate for differential delays of up to 256 ms:

  • Every individual VC carries an H4 multiframe indicator (MFI) that identifies the virtual container group it belongs to
  • Each VC also conveys its position X within the group using the SEQ number, also carried in H4; the SEQ is repeated every 16 frames

The H4 POH byte is thus used for the virtual concatenation-specific sequence and multiframe indication.

 Lower Order Virtual Concatenation

Lower Order Virtual Concatenation (LO-VCAT) uses X times VC-11, VC-12, or VC-2 containers (VC-11/12/2-Xv, X = 1 … 64).

A VCG built with VC-11, VC-12, or VC-2 members provides a payload of X containers C-11, C-12, or C-2; that is, a capacity of X times 1600, 2176, or 6784 kbit/s. VCG members are transported individually through the network, so differential delays can occur between the individual components of a VCG; these are compensated for at the destination node before the original continuous payload is reassembled.

A multiframe mechanism has been implemented in bit 2 of K4. It includes a sequence number (SQ) and a multiframe indicator (MFI), which together enable the reordering of the VCG members. The MSSP destination node waits until the last member arrives and then compensates for delays of up to 256 ms. It is important to note that K4 is itself part of a multiframe, received every 500 μs; the whole multiframe sequence then repeats every 512 ms.
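The 512 ms figure follows from the K4 timing just described; a quick sketch of the arithmetic:

```python
K4_PERIOD_MS = 0.5   # the K4 byte recurs once per 500 us lower-order POH multiframe
STRING_BITS = 32     # bit 2 of K4 builds a 32-bit string across 32 multiframes
MFI_FRAMES = 32      # the 5-bit frame count spans 2**5 = 32 strings

string_ms = STRING_BITS * K4_PERIOD_MS   # 16 ms per 32-bit string
full_ms = MFI_FRAMES * string_ms         # 512 ms for the whole sequence
print(string_ms, full_ms)                # 16.0 512.0
```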

Testing VCAT

When installing or maintaining VCAT it is important to carry out a number of tests to verify not only the performance of the whole virtual concatenation, but also every single member of the VCG. To reassemble the original client data, all members of the VCG must arrive at the far end, with the delay between the first and last member of a VCG kept below 256 ms. A missing member prevents the reconstruction of the payload and, if the problem persists, causes a fault that would call for the reconfiguration of the VCAT pipe. Additionally, jitter and wander on individual paths can cause anomalies (errors) in the transport service.

BER, latency, and event tests should verify the capacity of the network to deliver the service. The VCAT granularity has to be checked as well, by adding and removing VCG members. To verify the reassembly operation it is necessary to use a tester capable of inserting differential delays into individual members of a VCG.

Single-mode fibre selection for Optical Communication System

 

This is collected from an article written by Mr. Joe Botha.

Looking for a single-mode (SM) fibre to light up your multi-terabit-per-second system? Probably not, but let's say you were – the smart money is on your well-intended fibre sales rep instinctively flogging you ITU-T G.652D fibre. Commonly referred to as standard SM fibre and also known as Non-Dispersion-Shifted Fibre (NDSF), it is the oldest and most widely deployed fibre. Not a great choice, right? You bet. So for now, let's resist the notion that you can do whatever you want using standard SM fibre. A variety of SM optical fibres with carefully optimised characteristics are available commercially: ITU-T G.652, 653, 654, 655, 656 or 657 compliant.

Designs of SM fibre have evolved over the decades, and present-day options would have us deploy G.652D, G.655 or G.656 compliant fibres. Note that G.657A is essentially a more bend-tolerant version of G.652D with superior bending-loss performance, and should you start feeling a little benevolent towards deploying it on a longish haul – I can immediately confirm that this allows for a glimpse into the workings of silliness. Dispersion-Shifted Fibre (DSF) in accordance with G.653 has no chromatic dispersion at 1550 nm; however, it is limited to single-wavelength operation due to non-linear four-wave mixing. G.654 compliant fibres were developed specifically for undersea un-regenerated systems, and since our focus is directed toward terrestrial applications – let's leave it at that.

In the above context, the plan is to briefly weigh up G.652D, G.655 and G.656 compliant fibres against three parameters we calculate (before installation) and measure (after installation). I must just point out that the fibre coefficients used are what one would expect from the not-too-shabby brands available today.

Attenuation

Attenuation coefficients (dB/km):

λ (nm)    G.652D    G.655    G.656
1310      0.30      –        –
1550      0.20      0.18     0.20
1625      0.23      0.20     0.22

Attenuation is the reduction or loss of optical power as light travels through an optical fibre, measured in decibels per kilometre (dB/km). G.652D offers respectable attenuation coefficients when compared with G.655 and G.656. It should be remembered, however, that even a meagre 0.01 dB/km attenuation improvement would reduce a 100 km loss budget by a full dB – but let's not quibble. No attenuation coefficients for G.655 and G.656 at 1310 nm? That was not, as you may immediately assume, an oversight. Both G.655 and G.656 are optimised to support long-haul systems and therefore could not care less about running at 1310 nm. A cut-off wavelength is the minimum wavelength at which a particular fibre will support SM transmission. At ≤ 1260 nm, G.652D has the lowest cut-off wavelength – with the cut-off wavelengths for G.655 and G.656 sitting at ≤ 1480 nm and ≤ 1450 nm respectively – which explains why we have no attenuation coefficients for them at 1310 nm.

PMD

PMD coefficients (ps/√km):

Fibre     PMD
G.652D    ≤ 0.06
G.655     ≤ 0.04
G.656     ≤ 0.04

Polarization-mode dispersion (PMD) is an unwanted effect caused by asymmetrical properties in an optical fibre that spread the optical pulse of a signal. Slight asymmetry in an optical fibre causes the polarized modes of the light pulse to travel at marginally different speeds, distorting the signal; PMD is reported in ps/√km, or "ps per root km". Oddly enough, G.652 and co. all possess decent-looking PMD coefficients. Now then, popping a 40-Gbps laser onto my fibre up against an ultra-low 0.04 ps/√km, my calculator reveals that the PMD-coefficient-admissible fibre length is 3,900 km, and even at 0.1 ps/√km a distance of 625 km is achievable.
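The calculator work can be reproduced with the common rule of thumb that accumulated PMD should stay below roughly 10% of a bit period (the 10% budget is an assumption, though a widely used one):

```python
def pmd_limited_km(bitrate_gbps: float, pmd_ps_sqrt_km: float,
                   budget_fraction: float = 0.1) -> float:
    """Reach at which accumulated PMD hits `budget_fraction` of a bit period.

    PMD grows with the square root of length, so the limit scales as
    (budget / coefficient) squared.
    """
    bit_period_ps = 1000.0 / bitrate_gbps        # 25 ps at 40 Gbit/s
    return (budget_fraction * bit_period_ps / pmd_ps_sqrt_km) ** 2

print(round(pmd_limited_km(40, 0.04)))   # ~3906 km, i.e. the "3,900 km" above
print(round(pmd_limited_km(40, 0.1)))    # 625 km
```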

So far so good? But wait, there's more. PMD is particularly troublesome for both high data-rate-per-channel and high wavelength-channel-count systems, largely because of its random nature. Fibre manufacturers' PMD specifications are accurate for the fibre itself, but do not incorporate PMD incurred as a result of installation, which in many cases can be orders of magnitude larger. It is hardly surprising that questionable installation practices are likely to cause imperfect fibre symmetry – the obvious implications are incomprehensible data streams and mental anguish. Moreover, PMD, unlike chromatic dispersion (to be discussed next), is also affected by environmental conditions, making it unpredictable and extremely difficult to undo or offset.

 

CD

CD coefficients, ps/(nm·km):

λ (nm)    G.652D    G.655    G.656
1550      ≤ 18      2~6      6.0~10
1625      ≤ 22      8~11     8.0~13

CD (called chromatic dispersion to emphasise its wavelength-dependent nature) has zip-zero to do with the loss of light. It occurs because different wavelengths of light travel at different speeds. Thus, when the allowable CD is exceeded, light pulses representing a bit stream are rendered illegible. It is expressed in ps/(nm·km). At 2.5 Gbps CD is not an issue – however, lower data rates are seldom desirable. At 10 Gbps it is a big issue, and the issue gets even bigger at 40 Gbps.
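A rough sketch of why the issue grows with bit rate: the CD tolerance of an NRZ receiver falls with the square of the bit rate. The ~1000 ps/nm tolerance assumed here for 10-Gbps NRZ is a ballpark figure (not from the text), and real transceivers vary considerably:

```python
def cd_limited_km(bitrate_gbps: float, d_ps_nm_km: float,
                  tol_at_10g_ps_nm: float = 1000.0) -> float:
    """CD-limited reach, scaling an assumed 10-Gbit/s NRZ tolerance by 1/B^2."""
    tolerance = tol_at_10g_ps_nm * (10.0 / bitrate_gbps) ** 2
    return tolerance / d_ps_nm_km

# G.652D at 1550 nm (18 ps/(nm.km)):
for rate in (2.5, 10, 40):
    print(rate, "Gbps ->", round(cd_limited_km(rate, 18)), "km")
```

Hundreds of kilometres at 2.5 Gbps collapse to tens at 10 Gbps and only a few at 40 Gbps, which is the behaviour described above.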

What's troubling is G.652D's high CD coefficient – which, one glumly has to concede, is very poor next to the competition. G.655 and G.656, variants of non-zero dispersion-shifted fibre (NZ-DSF), comprehensively address G.652D's shortcomings. It should be noted that nowadays some optical fibre manufacturers don't bother distinguishing between G.655 and G.656, referring to their offerings as G.655/6 compliant.

On the face of it, one might suggest that the answer to our CD problem is to send light along an optical fibre at a wavelength where the CD is zero (i.e. G.653). The result? It turns out that this approach creates more problems than it solves, by unacceptably amplifying non-linear four-wave mixing and limiting the fibre to single-wavelength operation – in other words, no DWDM. That, in fact, is why CD should not be completely lampooned. Research revealed that the fibre-friendly CD value lies in the range of 6–11 ps/(nm·km). Therefore, and particularly for high-capacity transport, the best-suited fibre is one in which dispersion is kept within a tight range, being neither too high nor too low.

NZ-DSFs are available in both positive (+D) and negative (-D) varieties. Using NZ-DSF -D, a reverse behaviour of velocity per wavelength is created, and therefore the effect of +CD can be cancelled out. I almost forgot to mention, by the way, that short wavelengths travel faster than long ones with +CD, and longer wavelengths travel faster than short ones with -CD. Sophisticated new modulation techniques such as dual-polarised quadrature phase-shift keying (DP-QPSK) with coherent detection yield high-quality CD compensation. However, because of the added signal-processing time they require (versus simple on-off keying), this can potentially be a poor choice from a latency perspective.

WDM multiplies capacity

The use of Dense Wavelength Division Multiplexing (DWDM) technology and 40-Gbps (and higher) transmission rates can push the information-carrying capacity of a single fibre to well over a terabit per second. One example is EASSy's (a 4-fibre submarine cable serving sub-Saharan Africa) 4.72-Tbps capacity. Now then, should my maffs prove to be correct, 118 x 40-Gbps lasers (popped onto only 4 fibres!) should give us an aggregate capacity of 4.72 Tbps.
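The maffs do check out; a one-liner sketch:

```python
lasers = 118
per_laser_gbps = 40
total_tbps = lasers * per_laser_gbps / 1000  # Gbps -> Tbps
print(total_tbps)   # 4.72
```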

Coarse Wavelength Division Multiplexing (CWDM) is a WDM technology that uses 4, 8, 16 or 18 wavelengths for transmission. CWDM is an economically sensible option, often used for short-haul applications on G.652D where signal amplification is not necessary. CWDM's large 20 nm channel spacing allows the use of cheaper, less powerful lasers that do not require cooling.

One of the most important considerations in the fibre selection process is the fact that optical signals may need to be amplified along a route. Thanks in no small part (get the picture?) to CWDM's large channel spacing – typically spanning several spectral bands (1270 nm to 1610 nm) – its signals cannot be amplified using Erbium-Doped Fibre Amplifiers (EDFAs). You see, EDFAs run only in the C and L bands (1520 nm to 1625 nm). Whereas CWDM breaks the optical spectrum up into large chunks, DWDM by contrast slices it up finely, cramming 4, 8, 16, 40, 80, or 160 wavelengths (on 2 fibres) into only the C and L bands (1520 nm to 1625 nm) – perfect for the use of EDFAs. Each wavelength can without any obvious effort support a 40-Gbps laser and, on top of this, 100-Gbps lasers are chomping at the bit to go mainstream.

Making the right choice

On the whole, it is hard not to conclude that the only thing that genuinely separates fibre types for high-bit-rate systems is CD. The three things – the only ones I can think of – that are good about G.652D are that it is affordable, cool for CWDM, and perfect for short-haul environments. Top of the to-do list of infrastructure providers pushing the boundaries of DWDM-enabled ultra-high-capacity transport over short, long or ultra-long-haul networks will, needless to say, be to source G.655/6 compliant fibres. The cross-tab below indicates OK and, oddly enough, NOK for Not-OK.

ITU-T Compliant   10-Gbps CWDM   40-Gbps CWDM   10-Gbps DWDM   40-Gbps DWDM   100-Gbps DWDM
G.652             OK             NOK            NOK            NOK            NOK
G.655/6           OK             OK             OK             OK             OK


Advantages of Coherent Optical Transmission Systems

 
* High Chromatic Dispersion (CD) robustness
  • Can avoid Dispersion Compensation Units (DCUs)
  • No need for precise fiber characterization
  • Simpler network design
  • Latency improvement, since no DCUs sit in the path
* High Polarization Mode Dispersion (PMD) robustness
  • High-bit-rate wavelengths deployable on all fiber types
  • No need for "fancy" PMD compensator devices
  • No need for precise fiber characterization
* Low Optical Signal-to-Noise Ratio (OSNR) requirement
  • More capacity at greater distances without OEO regeneration
  • Possibility to launch lower per-channel power
  • Higher tolerance to inter-channel interference


Evolution to flexible grid WDM

November 26, 2013|By RANDY EISENACH
WDM networks operate by transmitting multiple wavelengths, or channels, over a fiber simultaneously. Each channel is assigned a slightly different wavelength, preventing interference between channels. Modern DWDM networks typically support 88 channels, with each channel spaced at 50 GHz, as defined by industry standard ITU-T G.694.1. Each channel is an independent wavelength.
The fixed 50-GHz grid pattern has served WDM networks and the industry well for many, many years. It helps carriers easily plan their services, capacity, and available spare capacity across their WDM systems. In addition, the technology used to add and drop channels on a ROADM network is based on arrayed-waveguide-grating (AWG) mux/demux technology, a simple and relatively low-cost technique particularly well suited to networks based on 50-GHz grid patterns.

 

FIGURE 1. The ITU’s fixed 50-GHz grid and its 100-GHz variant form the foundation for today’s optical networks.

 

WDM networks currently support optical rates of 10G, 40G, and 100G per wavelength (with the occasional 2.5G still popping up), all of which fit within existing 50-GHz channels. In the future, higher-speed 400-Gbps and 1-Tbps optical rates will be deployed over optical networks. These interfaces beyond 100G require larger channel sizes than used on current WDM networks. The transition to these higher optical rates is leading to the adoption of a new, flexible grid pattern capable of supporting 100G, 400G, and 1T wavelengths.

Current generation

The fixed 50-GHz grid pattern specified by ITU standards is shown in Figure 1. Any 10G, 40G, or 100G optical service can be carried over any of the 50-GHz channels, which enables carriers to mix and match service rates and channels as needed on their networks.
A look inside each channel reveals some interesting differences between the optical rates and resulting efficiency of the optical channel (see Figure 2). A 10G optical signal easily fits within the 50-GHz-channel size, using about half the available spectrum. The remaining space within the 50-GHz channel is unused and unavailable. Meanwhile, the 40G and 100G signals use almost the entire 50-GHz spectrum.

 

FIGURE 2. Optical rates and their spectral efficiency.

 

Spectral efficiency is one measure of how effectively a fiber network transmits information and is calculated as the number of bits per second transmitted per Hz of optical spectrum. With 10G wavelengths the spectral efficiency is only 0.2 bits/s/Hz, while a 100G wavelength provides a 10X improvement to 2 bits/s/Hz. The more bits that can be transmitted per channel, the greater the spectral efficiency, the higher the overall network capacity, and the lower the cost per bit of optical transport.
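The two figures quoted above come straight from dividing line rate by channel width; a one-line sketch:

```python
# Spectral efficiency = bits per second carried per Hz of optical spectrum.
def spectral_efficiency(rate_gbps: float, channel_ghz: float = 50.0) -> float:
    """Bits/s/Hz for a given line rate in a fixed-width channel."""
    return rate_gbps / channel_ghz

print(spectral_efficiency(10))   # 0.2 bits/s/Hz for a 10G wavelength
print(spectral_efficiency(100))  # 2.0 bits/s/Hz for 100G -- a 10x improvement
```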
While 100G wavelengths are becoming more common, carriers are already planning for higher-speed 400G and 1T channels on their future ROADM networks, with the expectation that spectral efficiency will at least remain the same, if not improve. New ways of allocating bandwidth will be needed to meet these expectations.

Superchannels

As mentioned, WDM networks currently transmit each 10G, 40G, and 100G optical signal as a single optical carrier that fits within a standard 50-GHz channel. At higher data rates, including 400G and 1T, the signals will be transmitted over multiple subcarrier channels (see Figure 3). The group of subcarrier wavelengths is commonly referred to as a “superchannel.” Although composed of individual subcarriers, each 400G superchannel is provisioned, transmitted, and switched across the network as a single entity or block.

 

FIGURE 3. 400G modulation options and superchannels.

 

While 400G standards are still in the preliminary definition stage, two modulation techniques are emerging as the most likely candidates: dual-polarization quadrature phase-shift keying (DP-QPSK) using four subcarriers and DP-16 quadrature amplitude modulation (QAM) with two subcarriers. Due to differences in optical signal-to-noise-ratio requirements, each modulation type is optimized for different network applications. The 4×100G DP-QPSK approach is better suited to long-haul networks because of its superior optical reach, while the 2×200G DP-16QAM method is ideal for metro distances.
Since 400G signals are treated as a single superchannel or block, the 400G signals shown in Figure 3 require 150-GHz and 75-GHz channel sizes, respectively. It's this transition to higher data rates that leads to the requirement for and adoption of new flexible grid channel assignments to accommodate mixed 100G, 400G, and 1T networks (see Figure 4).
A new flexible grid pattern has been defined and adopted in ITU-T G.694.1. While commonly referred to as "gridless" channel spacing, the newly defined flexible channel plan is actually based on a 12.5-GHz grid pattern. The new standard supports mixed channel sizes in increments of n×12.5 GHz and easily accommodates existing 100G services (4×12.5 GHz = 50 GHz) as well as future 400G (12×12.5 GHz) and 1T optical rates.
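The n×12.5-GHz slot arithmetic can be sketched as follows (channel widths taken from the examples above):

```python
import math

# Flexible-grid channel sizing per ITU-T G.694.1: each channel occupies
# an integer number of 12.5 GHz slots.
SLOT_GHZ = 12.5

def slots_needed(channel_width_ghz: float) -> int:
    """Smallest number of 12.5 GHz slots covering the requested width."""
    return math.ceil(channel_width_ghz / SLOT_GHZ)

print(slots_needed(50))   # 4 slots for an existing 100G channel
print(slots_needed(150))  # 12 slots for a 4x100G DP-QPSK superchannel
print(slots_needed(75))   # 6 slots for a 2x200G DP-16QAM superchannel
```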

 

FIGURE 4. Flexible grid channel plan.

 

One of the advantages of the flexible grid pattern is the improvement in spectral efficiency enabled by more closely matching the channel size with the signals being transported and by improved filtering that allows the subcarriers to be more closely squeezed together. As shown in Figure 5, four 100G subcarriers have been squeezed into 150-GHz spacing, as opposed to the 200 GHz (4×50 GHz) required if the subcarriers were transported as independent 50-GHz channels. The net effect of the flexible channel plan and closer subcarrier spacing is an improvement in network capacity of up to 30%.
One common “myth” in the industry is that legacy networks must be upgraded, or “flexible grid-ready,” to support 400G optical rates and superchannels. While having flexible grid-capable ROADMs can improve spectral efficiency, they’re not a requirement to support 400G or superchannels on a network. Since the subcarriers are fully tunable to any wavelength, they can simply be tuned to the existing 50-GHz grid pattern, allowing full backward compatibility with existing ROADM networks.

 

FIGURE 5. Transmitting 400G on legacy WDM networks.

 

CDC ROADMs

Closely associated with flexible grid channel spacing are colorless/directionless/gridless (CDG) and colorless/directionless/contentionless/gridless (CDCG) ROADM architectures. Along with gridless channel spacing, CDC ROADMs enable a great deal of flexibility at the optical layer.
Recall that existing ROADM systems are based on fixed 50-GHz-channel spacing and AWG mux/demux technology. The mux/demux combines and separates individual wavelengths into different physical input and output ports. While the transponder and muxponder themselves are full-band tunable and can be provisioned to any transmit wavelength, they must be connected to a specific port on the mux/demux unit. A transponder connected to the west mux/demux only supports services connected in the west direction. To reassign wavelengths – either to new channels or to reroute them to a different direction – requires technician involvement to physically unplug the transponder from one port on the mux/demux and plug it into a different physical mux/demux port.
CDC ROADMs enable much greater flexibility at the optical layer. The transponders may be connected to any add/drop port and can be routed to any degree or direction. Wavelength reassignment or rerouting can be implemented automatically from a network management system, or based on a network fault, without the need for manual technician involvement. The tradeoffs with CDC ROADMs are more complex architectures and costs.

Flexing network muscles

The existing 50-GHz channel plan based on ITU-T G.694.1 has served the industry well for many years. But as the industry plans for the introduction of even faster 400G, and eventually 1T, optical interfaces, there's a need to adopt larger channel sizes and a more flexible WDM spacing plan.
These higher-speed optical interfaces rely on a new technique involving superchannels that comprise multiple subcarrier wavelengths. These subcarriers are provisioned, transported, and switched across a network as a single block or entity. Flexible grid systems enable the larger channel sizes required by 400G and 1T interfaces, but also allow the channel size to be closely matched to the signal being transported to optimize spectral efficiency.
No discussion of gridless ROADMs would be complete without including new next generation CDC ROADM architectures. These new ROADMs will enable a great deal more flexibility and efficiency at the optical layer.
As the industry continues to push the boundaries of optical transmission speeds, the concept of “Gridless Networking” has emerged as a key requirement for tomorrow’s flexible and dynamic transport networks.  What is gridless networking?  Brian Lavallée, who heads up submarine networks industry marketing for Ciena, provides some insight.
Current optical transmission systems rely on fixed grid filtering to carve the available bandwidth into a number of channels that can be used to carry traffic. The filtering can be achieved using passive filters, active configurable MUX/DeMUX elements (e.g. a Wavelength Selective Switch) or a combination of passive and configurable elements. The traditional terrestrial grid spacing is typically on the ITU 50-GHz or 100-GHz grids. A tighter grid spacing of 25 GHz or 33 GHz is also used today, for example in submarine systems.
The goal is simple, squeeze as much information as possible into the available light spectrum on the fiber to improve the “spectral efficiency” of the optical network and achieve economies of scale.  Figure 1 illustrates the difference between a fixed grid spectrum and a gridless spectrum, where the latter squeezes the channels as close together as possible for maximum spectral efficiency.
Why do we want maximum spectral efficiency in the first place? To leverage a single line system and lower overall network costs.
The future of optical networks
As optical transmission technologies evolve, the bandwidth required for each optical channel, and the optimal spacing between them, will also evolve.  This is because some optical transmission technologies may require more or less channel space, or spectrum, than others. For instance, a 1Tb/s (1 Terabit) super channel will likely require more optical bandwidth than a 10Gb/s channel as they co-propagate down the optical line system. Today’s common use of fixed channel filter sizes can also lead to non-optimal channel spacing, resulting in potential “wasted” bandwidth and a lower resultant spectral efficiency.  As a result, the use of a gridless network will not only enable newer and higher bandwidth channels, but will allow more efficient spacing of today’s existing channels such as 10G.
Gridless Multiplexing
The WSS (Wavelength Selective Switch) can be used as the active MUX/DeMUX to enable remote control and automation of wavelength routing within a fiber plant. The WSS is a fixed grid product on either a 50GHz or 100GHz grid. To enable gridless transmission while still retaining remote network control and automation, a new type of active MUX/DeMUX is required and is referred to as an FSS (Flexible Spectrum Switch), which allows for variable channel spacing and the elimination of wasted bandwidth. In other words, various optical channel speeds can coexist, utilizing maximum spectral efficiency, on the same optical fiber network. This will achieve economies of scale and lower the overall network cost.  The deployment of the FSS works best in combination with the application of coherent detection, effectively reproducing the electrical baseband filtering centered exactly on the optical carrier.
The use of gridless technology results in the ability to provide flexible DWDM filtering for advanced modulation formats, such as those used with coherent detection. By being able to handle various bandwidth sizes per channel, this opens up the optical network for advances in modulation formats, as the network is no longer constrained by traditional fixed optical line filtering. The evolution of the transmission systems from today's fixed grid structure to tomorrow's gridless configuration allows the optical system to use channel bandwidth and pre-FEC error rates to dynamically select the best channel spacing while simultaneously eliminating potentially wasted bandwidth for optimized spectral efficiency.
The end result is a truly dynamic and intelligent optical transport infrastructure that can adapt to new optical technologies and bandwidth demands while ensuring the most efficient use of fiber spectrum as possible.

Optical Modulation for High Baud Rate (40G/100G) Networks

>> What is Higher-Order Modulation Method?

A range of newly developed fundamental communications technologies must be employed in order to reliably transmit signals of 40/43 Gbps and even 100 Gbps in the near future using telecommunications networks.
One of these technologies involves the use of higher-level modulation methods on the optical side, similar to those which have been used for many years successfully on the electrical side in xDSL broadband access technology, for example.
Until now, just one modulation method was used for transmission rates of up to 10 Gbps, namely on/off keying or OOK for short.
Put simply, this means that the laser light used for transmission was either on or off depending on the logical state 1 or 0 respectively of the data signal. This is the simplest form of amplitude modulation.
Additional external modulation is used at 10 Gbps. The laser itself is switched to give a continuous light output and the coding is achieved by means of a subsequent modulator.

>> Why Do We Need Higher-Order Modulation Methods?

Why are higher-level modulation methods with their attendant complexity needed at 40/43 Gbps? There are many reasons for this.
1. Greater Bandwidth and Noise Power Level
Every method of modulation broadens the width of the laser spectrum. At 10 Gbps this means that about 120 pm bandwidth is needed for OOK. If the transmission rate is quadrupled to 40 Gbps, the necessary bandwidth also quadruples, i.e. to around 480 pm. The greater bandwidth results in a linear increase in the noise power level in the communications channel. A four-fold increase in the noise power level corresponds to 6 dB and would result in a decrease in the minimum sensitivity of the system by this same factor. This results in a much shorter transmission range at 40 Gbps, and a consequent need for more regenerators.
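The 6 dB figure follows directly from the four-fold noise increase; a quick check:

```python
import math

# A 4x bandwidth increase (10G -> 40G OOK) means 4x the in-band noise
# power; expressed in decibels that is 10*log10(4) ~= 6 dB, the
# sensitivity penalty described above.
def noise_penalty_db(bandwidth_ratio: float) -> float:
    return 10 * math.log10(bandwidth_ratio)

print(round(noise_penalty_db(4), 2))  # 6.02 dB
```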
Increasing the laser power in sufficient measure to compensate for the missing balance in the system compared to 10 Gbps is not possible. Nonlinear effects in the glass fiber, such as four-wave mixing (FWM), self-phase modulation (SPM), and cross-phase modulation (XPM) would also adversely affect the transmission quality to a significant degree.
Higher-level modulation methods reduce the modulation bandwidth and thus provide a way out of this dilemma.
2. Integrate 40/43 Gbps into Existing DWDM Infrastructure
One absolute necessity is the need to integrate the 40/43 Gbps systems into the existing DWDM infrastructure. The bandwidth required by OOK or optical dual binary (ODB) modulation only allows a best case channel spacing of 100 GHz (= approx. 0.8 nm) in a DWDM system. Systems with a channel spacing of 50 GHz (= approx. 0.4 nm) have long been implemented in order to optimize the number of communications channels in the DWDM system.
For both technologies to be integrated into a single DWDM system, the multiplexers/demultiplexers (MUX/DEMUX) would have to be reconfigured back to a channel spacing of 100 GHz and the corresponding channel bandwidths, or hybrid MUX/DEMUX would have to be installed. Both these solutions are far from ideal, since they either result in a reduction in the number of communications channels or the loss of flexibility in the configuration of the DWDM system.
Here, too, the answer is to use higher-level modulation methods that reduce the required bandwidth.
3. Other Factors
As well as other factors, the transmission quality of a communications path also depends on polarization mode dispersion (PMD) and chromatic dispersion (CD).
CD depends on the fiber and can be compensated for relatively simply by switching in dispersion-compensating fibers. However, this once again degrades the loss budget. This is within acceptable limits for realizing the usual range distances in 10 Gbps systems. But this is not the case with 40 Gbps, where the system budget is already reduced anyway. For this reason, other compensation methods must be used, subject to the additional requirement for exact compensation at all wavelengths of a DWDM system because the maximum acceptable value for CD is a factor of 16 lower than that for 10 Gbps.
The maximum acceptable PMD value for 40 Gbps is reduced by a factor of four. The PMD value is largely affected by external influences on the fiber, such as temperature and mechanical stress, and is also dependent on the quality of manufacture of the fiber itself. A requirement for any new modulation method would be a corresponding tolerance to PMD and CD.
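The factors quoted in the last two paragraphs follow the standard scaling rules: CD tolerance falls with the square of the bit-rate increase, while PMD tolerance falls linearly. A small sketch:

```python
# Dispersion-tolerance scaling when the bit rate rises by rate_ratio
# (here 4x, from 10 Gbps to 40 Gbps).
def cd_tolerance_reduction(rate_ratio: float) -> float:
    return rate_ratio ** 2  # chromatic dispersion limit shrinks quadratically

def pmd_tolerance_reduction(rate_ratio: float) -> float:
    return rate_ratio       # PMD limit shrinks linearly

print(cd_tolerance_reduction(4))   # 16 -- max acceptable CD is 16x lower
print(pmd_tolerance_reduction(4))  # 4  -- max acceptable PMD is 4x lower
```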

>> A Brief Tutorial on Higher-Order Modulation Methods

When you take a look at the data sheets issued by systems manufacturers or in other technical publications, it is easy to be confused by the number of abbreviations used for new modulation methods.
How do these methods differ, and which of them are really suitable for future transmission speeds? Unfortunately, there is no easy answer to that either. Apart from the technical requirements, such as
  • Significant improvement in minimum OSNR by reducing the signal bandwidth
  • Compatibility with the 50 GHz ITU-T channel spacing or at least with a spacing of 100 GHz
  • Coexistence with 10 Gbps systems
  • Transmission in networks that use ROADMs
  • Scalable for 100 Gbps
The degree of technical difficulty and hence the economic viability also have to be taken into account.
>> Categories of Modulation Methods
The modulation methods can be basically divided into different categories (see figure 1 below).
1. Amplitude Modulation
– NRZ/RZ On/Off Keying (OOK) — Baud Rate = Bit Rate
image
2. Single Polarization State Phase Modulation (DPSK)
Normalized phase and amplitude at the bit center. DPSK: differential phase shift keying. — Baud Rate = Bit Rate.
image
3. Differential Quadrature Phase Shift Keying (DQPSK) — Baud Rate = 1/2 Bit Rate
image
4. Dual Polarization State Phase Modulation (DP-QPSK)
Absolute phase and amplitude at the bit center. 3D phase constellation diagram. — Baud Rate = 1/4 Bit Rate.
image
OOK amplitude modulation and optical dual binary (ODB) modulation can only be used in a very restricted sense for 40/43 Gbps for the reasons described above. Higher-level phase modulation methods represent the next category.
DPSK improves the system balance by means of a much reduced OSNR limit value. In all the other aspects mentioned, this modulation method has similar characteristics to OOK. This modulation method can therefore only be used for DWDM systems with 100 GHz channel spacing because of the bandwidth it requires. It can only be employed with restrictions in ROADM based networks.
Reconfigurable optical add/drop multiplexers allow routing of individual wavelengths in a DWDM system at network nodes. The basic components of a ROADM are multiplexers and demultiplexers with wavelength-selective filter characteristics and a switch matrix. The cascading of bandpass filters unavoidably leads to a narrowing of the communications channel pass band, with the resultant truncation of the DPSK modulated signal.
Adaptive DPSK takes account of these restrictions and results in clear improvements when used in complex network structures. Improvements in all areas are brought about by modulation methods in the next category, that of quadrature phase shift keying QPSK.
Return-to-zero DQPSK (RZ-DQPSK) has been around for some time now. The RZ coding requires slightly higher complexity on the modulation side compared with the more usual non-return-to-zero (NRZ) coding, but it considerably reduces the susceptibility to PMD and nonlinear effects.
QPSK modulated signals use four possible positions in the constellation diagram. Each phase state now encodes two bits. The baud rate (symbol rate) is therefore halved, so the bandwidth requirement is also halved. Use in systems with 50 GHz channel spacing and in ROADM based networks is assured, with a simultaneous improvement in susceptibility to PMD.
image
The technical complexity required in the realization of this modulation method is admittedly greater. The figure above shows the principle of modulation and demodulation of a QPSK signal and outlines the technical outlay on the optical side.
Systems using dual polarization state DP-QPSK modulation methods have been tried out recently. This opens the way towards a coherent system of transmission and detection. Although this is by far the most complex method, the advantages are significant. Using a total of eight positions in what is now a three-dimensional constellation diagram, the baud rate is thus reduced by a factor of four. Each state encodes four bits.
This makes the method ideally suited for 100 Gbps, and the bandwidth requirement is within a range that would fit within existing DWDM structures. An additional forward error correction (FEC) is applied to 100 Gbps signals, so the actual transmission rate is more likely to be around 112 Gbps. The symbol rate using DP-QPSK modulation would be in the range of 28 GBaud, which requires a bandwidth of about 40 GHz.
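The 28 GBaud figure is just line rate divided by bits per symbol (DP-QPSK carries 2 bits per polarization × 2 polarizations = 4 bits per symbol):

```python
# Symbol (baud) rate from line rate and modulation format.
def baud_rate_gbaud(line_rate_gbps: float, bits_per_symbol: int) -> float:
    return line_rate_gbps / bits_per_symbol

# 100G payload plus FEC overhead ~= 112 Gbps on the line; DP-QPSK
# carries 4 bits per symbol, giving the 28 GBaud quoted above.
print(baud_rate_gbaud(112, 4))  # 28.0
```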
The table below compares the characteristics of the different modulation methods.
image

>> Current Higher-Order Optical Modulation Methods Status

Implementation of higher-level modulation methods for optical communications is still in the early stages. It is to be expected that further innovations will be triggered by the next level in the transmission rate hierarchy.
In order to be as widely useful as possible, the measurement equipment would have to include facilities for testing the complete range of modulation methods. It is true that there will always be standardized interfaces on the client side of the network; these are 40 Gbps in SDH and 43 Gbps in OTN according to ITU-T Recommendation G.709 for the 40G hierarchy.
However, there is an increase in the diversity of non-standardized solutions on the line side. Not only do the optical parameters vary, but manufacturer-specific coding is being used more and more frequently for FEC. Use of through mode in an analyzer for insertion into the communications path has so far been an important approach:
It is important to check that the payload signal is correctly mapped into the communications frame on the line side, that the FEC is generated correctly, and that alarms are consistent. It is equally important to verify that the correct signaling procedure is followed in the receiver when an error message is received, and that error-free reception is possible in the presence of clock offsets or jittered signals.
It is now time to decide quickly on using just a few modulation methods; otherwise the cost of investment in measuring equipment will rise to astronomical levels. In contrast to the wide variety of electrical multiplexers for 10 Gbps, each optical modulation method requires a corresponding optical transponder. The cost of these transponders largely determines the price of the test equipment. The greater the diversity, the less likely it is that investment will be made in a tester for a particular optical interface. This will mean that important tests will be omitted from systems using the latest technology.
Access to the line side is probably the easiest route for network operators who in any case have had to keep up with a diversity of systems manufacturers over the years. The most important tests on installed communications systems are end-to-end measurements. Fully developed test equipment for such measurements is available for 40/43 Gbps.

Four Wave Mixing (FWM) in WDM System..

>> Nonlinear Effects in High Power, High Bit Rate Fiber Optic Communication Systems

When optical communication systems are operated at moderate power (a few milliwatts) and at bit rates up to about 2.5 Gb/s, they can be treated as linear systems. However, at higher bit rates such as 10 Gb/s and above, and/or at higher transmitted powers, it is important to consider the effect of nonlinearities. In WDM systems, nonlinear effects can become important even at moderate powers and bit rates.
There are two categories of nonlinear effects. The first category arises from the interaction of light waves with phonons (molecular vibrations) in the silica medium of the optical fiber. The two main effects in this category are stimulated Brillouin scattering (SBS) and stimulated Raman scattering (SRS).
The second category of nonlinear effects is caused by the dependence of the refractive index on the intensity of the optical power (applied electric field). The most important nonlinear effects in this category are self-phase modulation (SPM) and four-wave mixing (FWM).
Four-Wave-Mixing-Products

>> Basic Principles of Four-Wave Mixing

1. How the Fourth Wave is Generated
In a WDM system with multiple channels, one important nonlinear effect is four-wave mixing. Four-wave mixing is an intermodulation phenomenon, whereby interactions between 3 wavelengths produce a 4th wavelength.
In a WDM system using the angular frequencies ω1, …, ωn, the intensity dependence of the refractive index not only induces phase shifts within a channel but also gives rise to signals at new frequencies such as 2ωi − ωj and ωi + ωj − ωk. This phenomenon is called four-wave mixing.
In contrast to Self-Phase Modulation (SPM) and Cross-Phase Modulation (CPM), which are significant mainly for high-bit-rate systems, the four-wave mixing effect is independent of the bit rate but is critically dependent on the channel spacing and fiber chromatic dispersion. Decreasing the channel spacing increases the four-wave mixing effect, and so does decreasing the chromatic dispersion. Thus the effects of Four-Wave Mixing must be considered even for moderate-bit-rate systems when the channels are closely spaced and/or dispersion-shifted fibers are used.
To understand the effects of four-wave mixing, consider a WDM signal that is the sum of n monochromatic plane waves. Thus the electric field of this signal can be written as
inline-equation-1
The nonlinear dielectric polarization PNL(r,t) is given by
image
where χ(3) is called the third-order nonlinear susceptibility and is assumed to be a constant (independent of t).
Using the above two equations, the nonlinear dielectric polarization is given by
equation-2.28-to-2.35
Thus the nonlinear susceptibility of the fiber generates new fields (waves) at the frequencies ωi ± ωj ± ωk (ωi, ωj, ωk not necessarily distinct). This phenomenon is termed four-wave mixing.
The reason for this term is that three waves with the frequencies ωi, ωj, and ωk combine to generate a fourth wave at a frequency ωi ± ωj ± ωk. For equal frequency spacing, and certain choices of i, j, and k, the fourth wave contaminates ωi. For example, for a frequency spacing Δω, taking ω1, ω2, and ω3 to be successive frequencies, that is, ω2 = ω1 + Δω and ω3 = ω1 + 2Δω, we have ω1 − ω2 + ω3 = ω2 and 2ω2 − ω1 = ω3.
In the above equation, the term (28) represents the effect of SPM and CPM. The terms (29), (31), and (32) can be neglected because of lack of phase matching. Under suitable circumstances, it is possible to approximately satisfy the phase-matching condition for the remaining terms, which are all of the form ωi + ωj − ωk, i, j ≠ k (ωi, ωj not necessarily distinct).
For example, if the wavelengths in the WDM system are closely spaced, or are spaced near the dispersion zero of the fiber, then β is nearly constant over these frequencies and the phase-matching condition is nearly satisfied. When this is so, the power generated at these frequencies can be quite significant.
2. Power Penalty Due to Four-Wave Mixing
From the above discussion, we can see that the nonlinear polarization causes three signals at frequencies ωi, ωj, and ωk to interact to produce signals at frequencies ωi ± ωj ± ωk. Among these signals, the most troublesome one is the signal corresponding to
ωijk = ωi + ωj − ωk,       i ≠ k, j ≠ k
Depending on the individual frequencies, this beat signal may lie on or very close to one of the individual channels in frequency, resulting in significant crosstalk to that channel. In a multichannel system with W channels, this effect results in a large number (W(W−1)²) of interfering signals corresponding to i, j, k varying from 1 to W. In a system with three channels, for example, 12 interfering terms are produced, as shown in the following figure.
[Figure: the four-wave mixing products generated in a three-channel system]
Interestingly, the effect of four-wave mixing depends on the phase relationship between the interacting signals. If all the interfering signals travel with the same group velocity, as would be the case if there were no chromatic dispersion, the effect is reinforced. On the other hand, with chromatic dispersion present, the different signals travel with different group velocities. Thus the different waves alternately overlap in and out of phase, and the net effect is to reduce the mixing efficiency. The velocity difference is greater when the channels are spaced farther apart (in systems with chromatic dispersion).
To quantify the power penalty due to four-wave mixing, we can start from the following equation
Pijk = (ωijk n̄2 dijk / (3 c Ae))² Pi Pj Pk L²

This equation assumes a link of length L without any loss or chromatic dispersion. Here Pi, Pj, and Pk denote the powers of the mixing waves and Pijk the power of the resulting new wave, n̄2 is the nonlinear refractive index (3.0 × 10⁻⁸ μm²/W), dijk is the so-called degeneracy factor, and Ae is the effective cross-sectional area of the fiber.
In a real system, both loss and chromatic dispersion are present. To take the loss into account, we replace L with the effective length Le, which is given by the following equation for a system of length L with amplifiers spaced l km apart.
Le = [(1 − e^(−αl)) / α] × (L / l)

where α is the fiber attenuation coefficient.
The presence of chromatic dispersion reduces the efficiency of the mixing. We can model this by assuming a parameter ηijk, which represents the efficiency of mixing of the three waves at frequencies ωi, ωj, and ωk. Taking these two into account, we can modify the preceding equation to
Pijk = (ωijk n̄2 dijk / (3 c Ae))² ηijk Pi Pj Pk Le²
For on-off keying (OOK) signals, this represents the worst-case power at frequency ωijk, assuming a 1 bit has been transmitted simultaneously on frequencies ωi, ωj, and ωk.
The efficiency ηijk goes down as the phase mismatch Δβ between the interfering signals increases. We can obtain the efficiency as
ηijk = [α² / (α² + Δβ²)] [1 + 4 e^(−αl) sin²(Δβ l / 2) / (1 − e^(−αl))²]
Here, Δβ is the difference in propagation constants between the different waves, and D is the chromatic dispersion. Note that the efficiency has a component that varies periodically with the length as the interfering waves go in and out of phase. In this example, we will assume the maximum value for this component. The phase mismatch can be calculated as
Δβ = βi + βj – βk – βijk
where βr represents the propagation constant at wavelength λr.
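As a rough illustration of how the effective length and mixing efficiency behave, here is a small Python sketch (not from the original text; every parameter value below is an assumption chosen only for illustration):

```python
import math

# Illustrative evaluation of the effective length Le and mixing efficiency
# discussed above. All parameter values are assumptions for illustration.
alpha = 0.22 / 4.343   # fiber loss: 0.22 dB/km converted to 1/km
l = 80.0               # amplifier spacing, km (assumed)
L = 800.0              # total link length, km (assumed)
delta_beta = 1.0       # phase mismatch, 1/km (assumed)

# Effective length for a link of length L with amplifiers every l km
Le = (1 - math.exp(-alpha * l)) / alpha * (L / l)

# Mixing efficiency: falls off rapidly as the phase mismatch grows,
# with a component that varies periodically with the span length l
eta = (alpha**2 / (alpha**2 + delta_beta**2)) * (
    1 + 4 * math.exp(-alpha * l) * math.sin(delta_beta * l / 2)**2
        / (1 - math.exp(-alpha * l))**2
)

print(f"Le  = {Le:.1f} km")
print(f"eta = {eta:.5f}")
```

With these assumed numbers the efficiency comes out well below 1%, showing how even a modest phase mismatch suppresses the generated FWM power.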
Four-wave mixing manifests itself as intrachannel crosstalk. The total crosstalk power for a given channel ωc is given as
Pc = Σ Pijk,  summed over all i, j, k such that ωi + ωj − ωk = ωc
Assume the amplifier gains are chosen to match the link loss so that the output power per channel is the same as the input power. The crosstalk penalty can therefore be calculated from the following equation.
[Equation: crosstalk power penalty]
Assume that the channels are equally spaced and transmitted with equal power, and that the maximum allowable penalty due to Four-Wave Mixing (FWM) is 1 dB. Then if the transmitted power in each channel is P, the maximum FWM power in any channel must be < εP, where ε can be calculated to be 0.034 for a 1 dB penalty using the above equation. Since the generated FWM power increases with link length, this sets a limit on the transmit power per channel as a function of the link length. This limit is plotted in the following figure for both standard single mode fiber (SMF) and dispersion-shifted fiber (DSF) for three cases:
(1) 8 channels spaced 100 GHz apart
(2) 32 channels spaced 100 GHz apart
(3) 32 channels spaced 50 GHz apart
For standard single mode fiber (SMF) the chromatic dispersion parameter is taken to be D = 17 ps/nm-km, and for DSF the chromatic dispersion zero is assumed to lie in the middle of the transmitted band of channels. The slope of the chromatic dispersion curve, dD/dλ, is taken to be 0.055 ps/nm²-km.
[Figure: transmit power limit per channel versus link length due to four-wave mixing, for SMF and DSF]
We can draw several conclusions from the power penalty figure above.
1) The limit is significantly worse in the case of dispersion-shifted fiber than it is for standard single mode fiber. This is because the four-wave mixing efficiencies are much higher in dispersion-shifted fiber due to the low value of the chromatic dispersion.
2) The power limit gets worse with an increasing number of channels, as can be seen by comparing the limits for the 8-channel and 32-channel systems at the same 100 GHz spacing. This effect is due to the much larger number of four-wave mixing terms that are generated when the number of channels is increased. In the case of dispersion-shifted fiber, this difference due to the number of four-wave mixing terms is imperceptible since, even though there are many more terms in the 32-channel case, the same 8 channels around the dispersion zero as in the 8-channel case contribute almost all the four-wave mixing power. The four-wave mixing power contribution from the other channels is small because there is much more chromatic dispersion at these wavelengths.
3) The power limit decreases significantly if the channel spacing is reduced, as can be seen by comparing the curves for the two 32-channel systems with channel spacings of 100 GHz and 50 GHz. This decrease in the allowable transmit power arises because the four-wave mixing efficiency increases with a decrease in the channel spacing, since the phase mismatch Δβ is reduced. (For SMF, though the efficiencies at both 50 GHz and 100 GHz are small, the efficiency is much higher at 50 GHz than at 100 GHz.)
3. Solutions for Four-Wave Mixing
Four-wave mixing is a severe problem in WDM systems using dispersion-shifted fiber but does not usually pose a major problem in systems using standard fiber. In fact, it motivated the development of Non-Zero Dispersion-Shifted Fiber (NZ-DSF). In general, the following actions alleviate the penalty due to four-wave mixing.
1) Unequal channel spacing. The positions of the channels can be chosen carefully so that the beat terms do not overlap with the data channels inside the receiver bandwidth. This may be possible for a small number of channels in some cases but needs careful computation of the exact channel positions.
[Figure: unequal channel spacing keeps four-wave mixing products off the data channels]
2) Increased channel spacing. This increases the group velocity mismatch between channels. It has the drawbacks of increasing the overall system bandwidth, requiring the optical amplifiers to be flat over a wider bandwidth, and increasing the penalty due to Stimulated Raman Scattering (SRS).
3) Using higher wavelengths beyond 1560nm with DSF. Even with DSF, a significant amount of chromatic dispersion is present in this range, which reduces the effect of four-wave mixing. The newly developed L-band amplifiers can be used for long-distance transmission over DSF.
4) As with the other nonlinearities, reducing the transmitter power and the amplifier spacing will decrease the penalty.
5) If the wavelengths can be demultiplexed and multiplexed in the middle of the transmission path, we can introduce different delays for each wavelength. This randomizes the phase relationship between the different wavelengths. Effectively, the FWM powers introduced before and after this point are summed instead of the electric fields being added in phase, resulting in a smaller FWM penalty.
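Mitigation (1) can be illustrated with a quick check (not from the original text; the grid positions, in units of the minimum spacing, are illustrative): an unequal grid can keep every beat product off the data channels, while an equal grid cannot.

```python
def beats_on_channels(freqs):
    """Return the (i, j, k) triples whose beat f_i + f_j - f_k lands on a channel."""
    hits = set()
    n = len(freqs)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if k != i and k != j:
                    beat = freqs[i] + freqs[j] - freqs[k]
                    if beat in freqs:
                        hits.add((i, j, k))
    return hits

equal = [0, 1, 2]    # equally spaced: some beats fall on channels
unequal = [0, 1, 3]  # unequally spaced: no beat lands on a channel
print(len(beats_on_channels(equal)))
print(len(beats_on_channels(unequal)))
```

For larger channel counts, finding such collision-free positions while keeping the total bandwidth reasonable is exactly the "careful computation" the text refers to.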

How to Test a Fiber Optic System with an OTDR (Optical Time Domain Reflectometer)

>> The Optical Time Domain Reflectometer (OTDR)

An OTDR is connected to one end of a fiber optic system of up to 250 km in length. Within a few seconds, we are able to measure the overall loss, or the loss of any part of the system, the overall length of the fiber, and the distance between any points of interest. The OTDR is an amazing test instrument for fiber optic systems.
1. A Use for Rayleigh Scatter
As light travels along the fiber, a small proportion of it is lost by Rayleigh scattering. As the light is scattered in all directions, some of it just happens to return along the fiber towards the light source. This returned light is called backscatter, as shown below.
[Figure: backscatter returning along the fiber towards the light source]
The backscatter power is a fixed proportion of the incoming power and as the losses take their toll on the incoming power, the returned power also diminishes as shown in the following figure.
[Figure: backscatter power diminishing along the fiber]
The OTDR can continuously measure the returned power level and hence deduce the losses encountered on the fiber. Any additional losses such as connectors and fusion splices have the effect of suddenly reducing the transmitted power on the fiber and hence causing a corresponding change in backscatter power. The position and degree of the losses can be ascertained.
2. Measuring Distances
The OTDR uses a system rather like a radar set. It sends out a pulse of light and ‘listens’ for echoes from the fiber.
If it knows the speed of light and can measure the time taken for the light to travel along the fiber, it is an easy job to calculate the length of the fiber.
[Figure: the OTDR sends out a pulse of light and listens for echoes]
3. To Find the Speed of the Light
Assuming the refractive index of the core is 1.5, the infrared light travels at a speed of

v = c / n = (3 × 10⁸ m/s) / 1.5 = 2 × 10⁸ m/s
This means that it will take

t = 1 m / (2 × 10⁸ m/s) = 5 ns to travel one meter

This is a useful figure to remember: 5 nanoseconds per meter (5 ns·m⁻¹).
If the OTDR measures a time delay of 1.4 μs, then the distance travelled by the light is

distance = 2 × 10⁸ m/s × 1.4 μs = 280 m
The 280 meters is the total distance traveled by the light and is the ‘there and back’ distance. The length of the fiber is therefore only 140m. This adjustment is performed automatically by the OTDR – it just displays the final result of 140m.
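The arithmetic the OTDR performs can be sketched in a few lines of Python (the helper name is ours, not an instrument API):

```python
C = 3.0e8      # speed of light in vacuum, m/s
n_core = 1.5   # assumed core refractive index
v = C / n_core # speed in the fiber: 2e8 m/s

ns_per_metre = 1e9 / v
print(ns_per_metre)  # 5.0 ns to travel one metre

def otdr_distance_m(round_trip_s):
    """One-way fiber length from the measured 'there and back' time."""
    return v * round_trip_s / 2

print(otdr_distance_m(1.4e-6))  # 1.4 us round trip -> 140 m of fiber
```

The division by two is the automatic adjustment mentioned above: the measured delay covers the outward trip and the return trip.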
4. Inside the OTDR
[Figure: block diagram of the OTDR]
A. Timer
The timer produces a voltage pulse which is used to start the timing process in the display at the same moment as the laser is activated.
B. Pulsed Laser
The laser is switched on for a brief moment, the 'on' time being between 1 ns and 10 μs. We will look at the significance of the choice of 'on' time, or pulsewidth, a little later. The wavelength of the laser can be switched to suit the system to be investigated.
C. Directive Coupler
The directive coupler allows the laser light to pass straight through into the fiber under test. The backscatter from the whole length of the fiber approaches the directive coupler from the opposite direction. In this case the mirror surface reflects the light into the avalanche photodiode (APD). The light has now been converted into an electrical signal.
D. Amplifying and Averaging
The electrical signal from the APD is very weak and requires amplification before it can be displayed. The averaging feature is quite interesting and we will look at it separately towards the end of this tutorial.
E. Display
The amplified signals are passed on to the display. The display is either a CRT like an oscilloscope, or an LCD as in laptop computers. They display the returned signals on a simple XY plot with the range across the bottom and the power level in dB up the side.
The following figure shows a typical display. The current parameter settings are shown over the grid. They can be changed to suit the measurements being undertaken. The range scale displayed shows a 50km length of fiber. In this case it is from 0 to 50km but it could be any other 50km slice, for example, from 20km to 70km. It can also be expanded to give a detailed view of a shorter length of fiber such as 0-5m, or 25-30m.
[Figure: typical OTDR display with parameter settings shown above the grid]
The range can be read from the horizontal scale but for more precision, a variable range marker is used. This is a movable line which can be switched on and positioned anywhere on the trace. Its range is shown on the screen together with the power level of the received signal at that point. To find the length of the fiber, the marker is simply positioned at the end of the fiber and the distance is read off the screen. It is usual to provide up to five markers so that several points can be measured simultaneously.
F. Data Handling
An internal memory or floppy disk can store the data for later analysis. The output is also available via RS232 link for downloading to a computer. In addition, many OTDRs have an onboard printer to provide hard copies of the information on the screen. This provides useful ‘before and after’ images for fault repair as well as a record of the initial installation.
5. A Simple Measurement
If we were to connect a length of fiber, say 300m, to the OTDR the result would look as shown in the following figure.
[Figure: OTDR trace of a 300 m length of fiber]
Whenever the light passes through a cleaved end of a piece of fiber, a Fresnel reflection occurs. This is seen at the far end of the fiber and also at the launch connector. Indeed, it is quite usual to obtain a Fresnel reflection from the end of the fiber without actually cleaving it. Just breaking the fiber is usually enough.
The Fresnel at the launch connector occurs at the front panel of the OTDR and, since the laser power is high at this point, the reflection is also high. The result of this is a relatively high pulse of energy passing through the receiver amplifier. The amplifier output voltage swings above and below the real level, in an effect called ringing. This is a normal amplifier response to a sudden change of input level. The receiver takes a few nanoseconds to recover from this sudden change of signal level.
6. Dead Zones
The Fresnel reflection and the subsequent amplifier recovery time results in a short period during which the amplifier cannot respond to any further input signals. This period of time is called a dead zone. It occurs to some extent whenever a sudden change of signal amplitude occurs. The one at the start of the fiber where the signal is being launched is called the launch dead zone and others are called event dead zones or just dead zones.
[Figure: launch and event dead zones on the OTDR trace]
7. Overcoming the Launch Dead Zone
As the launch dead zone occupies a distance of up to 20 meters or so, this means that, given the job of checking a 300m fiber, we may only be able to check 280m of it. The customer would not be delighted.
To overcome this problem, we add our own patch cord at the beginning of the system. If we make this patch cord about 100m in length, we can guarantee that all launch dead zone problems have finished before the customers’ fiber is reached.
[Figure: a 100 m patch cord between the OTDR and the fiber under test]
The patch cord is joined to the main system by a connector which will show up on the OTDR readout as a small Fresnel reflection and a power loss. The power loss is indicated by the sudden drop in the power level on the OTDR trace.
8. Length and Attenuation
The end of the fiber appears to be at 400m on the horizontal scale but we must deduct 100m to account for our patch cord. This gives an actual length of 300m for the fiber being tested.
Immediately after the patch cord Fresnel reflection the power level shown on the vertical scale is about –10.8dB and at the end of the 300m run, the power has fallen to about –11.3 dB. A reduction in power level of 0.5 dB in 300 meters indicates a fiber attenuation of:
0.5 dB / 0.3 km ≈ 1.67 dB/km
Most OTDRs provide a loss measuring system using two markers. The two markers are switched on and positioned on a length of fiber which does not include any other events, such as connectors, as shown in the following figure.
[Figure: two markers positioned on an event-free section of the trace]
The OTDR then reads the difference in power level at the two positions and the distance between them, performs the above calculation for us and displays the loss per kilometer for the fiber. This provides a more accurate result than trying to read off the decibel and range values from the scales on the display and having to do our own calculations.
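The two-marker calculation the instrument performs is simple enough to sketch (the function name and example figures match the worked example above; this is our illustration, not an instrument API):

```python
def loss_per_km(p1_db, p2_db, d1_km, d2_km):
    """Fiber attenuation in dB/km between two OTDR markers."""
    return abs(p2_db - p1_db) / abs(d2_km - d1_km)

# Figures from the example above: about -10.8 dB just after the patch cord
# (100 m) and about -11.3 dB at the end of the run (400 m).
print(round(loss_per_km(-10.8, -11.3, 0.1, 0.4), 2))  # ~1.67 dB/km
```

Letting the OTDR do this internally avoids the reading errors that creep in when taking the dB and range values off the screen by eye.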
9. An OTDR Display of a Typical System
The OTDR can ‘see’ Fresnel reflections and losses. With this information, we can deduce the appearance of various events on an OTDR trace as seen below.
[Figure: appearance of typical events on an OTDR trace]
A. Connectors
A pair of connectors will give rise to a power loss and also a Fresnel reflection due to the polished end of the fiber.
B. Fusion Splice
Fusion splices do not cause any Fresnel reflections as the cleaved ends of the fiber are now fused into a single piece of fiber. They do, however, show a loss of power. A good quality fusion splice will actually be difficult to spot owing to the low losses. Any sign of a Fresnel reflection is a sure sign of a very poor fusion splice.
C. Mechanical Splice
Mechanical splices appear similar to a poor quality fusion splice. The fibers do have cleaved ends of course, but the Fresnel reflection is avoided by the use of index matching gel within the splice. The losses to be expected are similar to the least acceptable fusion splices.
D. Bend Loss
This is simply a loss of power in the area of the bend. If the loss is very localized, the result is indistinguishable from a fusion or mechanical splice.
10. Ghost Echoes (False Reflection)
In the following figure, some of the launched energy is reflected back from the connectors at the end of the patch cord at a range of 100m. This light returns and strikes the polished face of the launch fiber on the OTDR front panel. Some of this energy is again reflected to be re-launched along the fiber and will cause another indication from the end of the patch cord, giving a false, or ghost, Fresnel reflection at a range of 200m and a false ‘end’ to the fiber at 500m.
[Figure: ghost reflections caused by re-reflection between the polished ends of the patch cord]
As there is a polished end at both ends of the patch cord, it is theoretically possible for the light to bounce to and fro along this length of fiber giving rise to a whole series of ghost reflections. In the figure a second reflection is shown at a range of 300m.
It is very rare for any further reflections to be seen. The maximum amplitude of the Fresnel reflection is 4% of the incoming signal, and is usually much less. Looking at the calculations, even assuming the worst reflection, the returned energy is 4%, or 0.04, of the launched energy. The re-launched energy, as a result of another reflection, is 4% of the 4%, or 0.04² = 0.0016 × the input energy. This shows that we need a lot of input energy to cause a ghost reflection.
A second ghost would require another two reflections, giving rise to a signal of only 0.0016² = 0.00000256 of the launched energy. Subsequent reflections die out very quickly, as we can imagine.
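The rapid die-off of successive ghosts follows directly from squaring the worst-case Fresnel reflectance, as this little sketch shows:

```python
# Each extra round trip along the patch cord needs two more Fresnel
# reflections, each at most about 4% of the incident power (glass/air).
R = 0.04                       # worst-case Fresnel reflectance

first_ghost = R * R            # two reflections: 0.04^2
second_ghost = first_ghost**2  # four reflections: 0.0016^2
print(first_ghost)             # about 0.0016 of the launched energy
print(second_ghost)            # about 2.56e-6: effectively invisible
```

In practice real reflectances are well below 4%, so even the first ghost is often faint.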
Ghost reflections can be recognized by their even spacing. If we have a reflection at 387 m and another at 774 m, then we have either a strange coincidence or a ghost. Ghost reflections have a Fresnel reflection but do not show any loss. The loss signal is actually of too small an energy level to be seen on the display. If a reflection shows up after the end of the fiber, it has got to be a ghost.
11. Effects of Changing the Pulsewidth
The maximum range that can be measured is determined by the energy contained within the pulse of laser light. The light has to be able to travel the full length of the fiber, be reflected, and return to the OTDR and still be of larger amplitude than the background noise. Now, the energy contained in the pulse is proportional to the length of the pulse so to obtain the greatest range the longest possible pulsewidth should be used as illustrated in the following figure.
[Figure: a longer pulsewidth contains more energy and therefore reaches further]
This cannot be the whole story, as OTDRs offer a wide range of pulsewidths.
We have seen that light covers a distance of 1 meter every 5 nanoseconds, so a pulsewidth of 100 ns would extend for a distance of 20 meters along the fiber (see the following figure). When the light reaches an event, such as a connector, there is a reflection and a sudden fall in power level. The reflection occurs over the whole of the 20 m of the outgoing pulse, so what returns to the OTDR is a 20 m-long reflection. Each event on the fiber system will likewise cause a 20 m pulse to be reflected back towards the OTDR.
[Figure: a 100 ns pulse occupying 20 m of fiber]
Now imagine two such events separated by a distance of 10m or less as in the following figure. The two reflections will overlap and join up on the return path to the OTDR. The OTDR will simply receive a single burst of light and will be quite unable to detect that two different events have occurred. The losses will add, so two fusion splices for example, each with a loss of 0.2dB will be shown as a single splice with a loss of 0.4dB.
The minimum distance separating two events that can be displayed separately is called the range discrimination of the OTDR.
The shortest pulsewidth on an OTDR may well be in the order of 10ns so at a rate of 5nsm-1 this will provide a pulse length in the fiber of 2m. The range discrimination is half this figure so that two events separated by a distance greater than 1m can be displayed as separate events. At the other end of the scale, a maximum pulsewidth of 10us would result in a range discrimination of 1 km.
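The pulsewidth arithmetic above reduces to two one-line functions (our own helper names, chosen for illustration):

```python
# At 5 ns per metre, a pulse of width t_ns occupies t_ns / 5 metres of
# fiber, and two events can only be separated if they are more than half
# that pulse length apart.

def pulse_length_m(pulse_ns):
    return pulse_ns / 5.0

def range_discrimination_m(pulse_ns):
    return pulse_length_m(pulse_ns) / 2.0

print(range_discrimination_m(10))      # 10 ns pulse  -> 1 m discrimination
print(range_discrimination_m(10_000))  # 10 us pulse  -> 1000 m (1 km)
```

These two endpoints reproduce the shortest and longest pulsewidth figures quoted above.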
Another effect of changing the pulsewidth is on dead zones. Increasing the energy in the pulse will cause a larger Fresnel reflection. This, in turn, means that the amplifier will take longer to recover and hence the event dead zones will become larger as shown in the next figure.
12. Which Pulsewidth to Choose?
[Figure: the effect of pulsewidth choice on the OTDR trace]
Most OTDRs give a choice of at least five different pulse lengths from which to select.
Low pulse widths mean good separation of events but the pulse has a low energy content so the maximum range is very poor. A pulse width of 10ns may well provide a maximum range of only a kilometer with a range discrimination of 1 meter.
The wider the pulse, the longer the range but the worse the range discrimination. A 1us pulse width will have a range of 40 km but cannot separate events closer together than 100 m.
As a general guide, use the shortest pulse that will provide the required range.
13. Averaging
The instantaneous value of the backscatter returning from the fiber is very weak and contains a high noise level which tends to mask the return signal.
As the noise is random, its amplitude should average out to zero over a period of time. This is the idea behind the averaging circuit.
The incoming signals are stored and averaged before being displayed. The larger the number of signals averaged, the cleaner will be the final result but the slower will be the response to any changes that occur during the test. The mathematical process used to perform the effect is called least squares averaging or LSA.
The following figure shows the enormous benefit of employing averaging to reduce the noise effect.
[Figure: the same trace with and without averaging]
Occasionally it is useful to switch the averaging off to see a real time signal from the fiber to see the effects of making adjustments to connectors etc. This is an easy way to optimize connectors, mechanical splices, bends etc. Simply fiddle with it and watch the OTDR screen.
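The benefit of averaging can be demonstrated with a toy simulation (not from the original text: the idealised backscatter trace, noise level, and use of a plain mean over sweeps as a stand-in for the instrument's averaging are all our assumptions):

```python
import random
random.seed(42)

# An idealised backscatter trace in dB: the signal is fixed, the noise is random.
signal = [10.0 - 0.002 * i for i in range(100)]

def sweep():
    """One noisy measurement sweep."""
    return [s + random.gauss(0, 0.5) for s in signal]

def average(n_sweeps):
    """Point-by-point mean of many sweeps: noise shrinks, signal survives."""
    sweeps = [sweep() for _ in range(n_sweeps)]
    return [sum(col) / n_sweeps for col in zip(*sweeps)]

def rms_error(trace):
    """RMS deviation of a trace from the true backscatter signal."""
    return (sum((t - s) ** 2 for t, s in zip(trace, signal)) / len(signal)) ** 0.5

print(rms_error(sweep()))       # noisy single sweep
print(rms_error(average(256)))  # much cleaner after averaging 256 sweeps
```

Averaging N sweeps reduces random noise by roughly a factor of √N, which is why the cleaned-up trace comes at the cost of a slower response to changes.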
14. OTDR Dynamic Range
When a close range reflection, such as the launch Fresnel occurs, the reflected energy must not be too high otherwise it could damage the OTDR receiving circuit. The power levels decrease as the light travels along the fiber and eventually the reflections are similar in level to that of the noise and can no longer be used.
The difference between the highest safe value of the input power and the minimum detectable power is called the dynamic range of the OTDR and, along with the pulsewidth and the fiber losses, determines the useful range of the equipment.
If an OTDR was quoted as having a dynamic range of 36 dB, it could measure an 18km run of fiber with a loss of 2 dB/km, or alternatively a 72 km length of fiber having a loss of 0.5 dB/km, or any other combination that multiplies out to 36 dB.
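The quoted budget arithmetic can be expressed as a single helper (ours, for illustration; it follows the simple "loss × length = dynamic range" bookkeeping the text uses):

```python
def max_fibre_km(dynamic_range_db, loss_db_per_km):
    """Longest fiber whose total loss fits within the OTDR's dynamic range."""
    return dynamic_range_db / loss_db_per_km

print(max_fibre_km(36, 2.0))  # 18 km at 2 dB/km
print(max_fibre_km(36, 0.5))  # 72 km at 0.5 dB/km
```

Any combination of length and loss whose product stays within 36 dB is measurable with such an instrument.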

Why is BER difficult to simulate or calculate? 

For a design target BER (such as 10⁻¹²) at an OC-3 line rate (155 Mbps), the network averages only about one error every couple of hours, and a statistically reliable steady-state BER figure requires accumulating many such errors, so a direct measurement can take days or even weeks of continuous testing. That is why BER is so difficult to measure or simulate. Q-factor analysis, on the other hand, is comparatively easy. Q is often measured in dB. The next question is how to dynamically calculate Q. This is done from OSNR.
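As a back-of-the-envelope check (the line rate and the target error count of 100 are illustrative assumptions), the waiting times involved can be estimated directly:

```python
# Average time between errors at a given BER and line rate, and the
# measurement time needed to accumulate enough errors for a stable estimate.

def seconds_per_error(ber, line_rate_bps):
    return 1.0 / (ber * line_rate_bps)

t = seconds_per_error(1e-12, 155e6)  # BER 1e-12 at OC-3 (155 Mbps)
print(t / 3600)                      # hours between errors, on average

# A statistically meaningful BER estimate needs many errors, e.g. 100:
print(100 * t / 86400)               # days of continuous measurement
```

Even this optimistic error count already pushes the test into days; at lower line rates or lower target BERs the measurement time becomes completely impractical, which is why Q-factor and OSNR are used instead.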

In other words, Q is somewhat proportional to the OSNR. Generally, noise calculations are performed by optical spectrum analyzers (OSAs) or sampling oscilloscopes, and these measurements are carried out over a particular measurement bandwidth Bm. Typically, Bm is approximately 0.1 nm, or 12.5 GHz, for a given OSA. From Equation 4-12, which expresses Q in dB in terms of OSNR, it can be seen that if B0 < Bc, then OSNR (dB) > Q (dB). For practical designs OSNR (dB) > Q (dB) by at least 1–2 dB. Typically, while designing a high bit-rate system, the margin at the receiver is approximately 2 dB, such that Q is about 2 dB smaller than OSNR (dB).