

General Optical Signal flow

Recently I came across a candidate from a non-optical background who asked me what OTU, OCh, ODU, etc. are in optical networking. So I came up with this diagram, which helped him understand the concepts in the simplest form, and I hope it will help many others too.

 

 

[Diagram: the optical signal flow from the router across the client, OPU, ODU, OTU, OCh, OMS, and OTS layers]

The diagram above shows how a signal received from the router is sent over the optical layer and then received back at the other end. It involves a few terms, described below:

Client Signal 

The client signal is the actual payload to be carried over the optical layer. It is mostly generated by a router and fed into the client ports of an optical card called a transponder or muxponder. The client signal, or actual payload to be transported, could be of any existing protocol such as SONET/SDH, GFP, IP, or GbE.

 

Optical Channel Payload Unit (OPU)

The client signal, along with added overhead (OH), makes an OPU. The OPU supports various types of clients and provides information on the type of signal transported to the next layer. ITU-T G.709 currently supports both asynchronous and synchronous mapping of client signals into the payload.

 

Optical Channel Data Unit (ODU)

The OPU, along with further overhead, makes an ODU. The added overhead allows users to support path monitoring (PM), tandem connection monitoring (TCM), and automatic protection switching (APS).

 

Optical Channel Transport Unit (OTU)

The ODU with OH and FEC (forward error correction) makes an OTU. The OTU is used in the OTN to support transport via one or more optical channel connections. FEC enables the detection and correction of errors on an optical link.

 

Optical Channel (OCH)

The OTU with another layer of overhead makes an OCh. The OCh is generally a frequency/wavelength/color that is sent to a multiplexer (MUX), where different channels will be multiplexed to be carried over a single strand of optical fiber.

 

Optical Channel Multiplexing Section (OMS)

Once the optical channel is created, additional non-associated overhead is added to each OCh frequency, and thus the OMS is formed. This is the composite signal, i.e., the mix of channels at the output of the MUX (multiplexer). The OMS section runs from the MUX to the DEMUX.

 

Optical Transmission Section (OTS)

The optical transmission section (OTS) layer transports the OTS payload as well as the OTS overhead (OTS-OH). The OTS-OH, although not fully defined, is used for maintenance and operational functions. The OTS layer allows the network operator to perform monitoring and maintenance tasks between optical network elements, which include OADMs, multiplexers, demultiplexers, and optical switches.
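
To make the layering concrete, here is a minimal Python sketch, based only on the published ITU-T G.709 frame geometry and rate multipliers (an illustration, not code from the standard), showing how each layer adds overhead and how the OTUk line rate grows over the client rate:

# A minimal sketch of the G.709 layering described above.
OTU_ROWS, OTU_COLS = 4, 4080                   # one OTUk frame: 4 rows x 4080 byte columns
OH_COLS = 16                                   # cols 1-16: frame alignment + OTU/ODU/OPU overhead
FEC_COLS = 256                                 # cols 3825-4080: Reed-Solomon RS(255,239) parity
PAYLOAD_COLS = OTU_COLS - OH_COLS - FEC_COLS   # cols 17-3824: the OPU payload (3808 columns)

# Client bit rate (Gbit/s) -> OTUk line rate via the G.709 rate multipliers
multiplier = {1: 255 / 238, 2: 255 / 237, 3: 255 / 236, 4: 255 / 227}
client_gbps = {1: 2.488320, 2: 9.953280, 3: 39.813120, 4: 99.532800}

for k in client_gbps:
    line = client_gbps[k] * multiplier[k]
    print(f"OTU{k}: client {client_gbps[k]:.6f} Gb/s -> line {line:.6f} Gb/s")

Running it shows, for example, that a 9.95328 Gb/s client becomes a ~10.709 Gb/s OTU2 line signal once the overhead and FEC columns are added.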

 

 

For more details, see https://www.itu.int/ITU-T/recommendations/rec.aspx?rec=7060&lang=en

Why is a soak timer actually needed on optical devices?

 

We all know that during troubleshooting we look for detected faults, alarms, or performance parameters at the monitoring points and then correlate them with other factors to conclude the root cause. Fault detection is the process of determining that a fault exists. Fault detection capabilities are intended to detect all actual and potential hardware and software troubles so that the impact on service is minimized and consistent with the reliability and service objectives.

 

 

Now let's get to what we intend to discuss here.

 

In accordance with GR-474-CORE, a time period known as "soak time" is incorporated in the definition of a signal failure to allow for momentary transmission loss (e.g., from single transient events) and to protect against false alarms.

 

For transport entities, the soaking interval is entered when a defect event is detected. It is exited either when the defect persists for the soak time interval and a bona fide failure is declared, or when normal transmission returns within the soaking interval.

 

 

Also keep in mind that circuits do not use the soak timer, but ports do.

 

For example, the time periods for DS3 signal failure entry and clearing are 2.5 ± 0.5 seconds and 10 ± 0.5 seconds, respectively (more at https://mapyourtech.com/entries/general/what-is-the-soak-time-period-to-detect-los-on-a-port- ).
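
To illustrate the behaviour, here is a minimal Python sketch of the soak logic described above; the class and the default thresholds (2.5 s entry, 10 s clear, from the DS3 example) are mine, not taken from any product:

import time

class SoakTimer:
    """Declare a failure only after a defect persists for entry_soak seconds,
    and clear it only after the signal is defect-free for clear_soak seconds."""

    def __init__(self, entry_soak=2.5, clear_soak=10.0):
        self.entry_soak = entry_soak   # DS3 failure entry: 2.5 +/- 0.5 s
        self.clear_soak = clear_soak   # DS3 failure clearing: 10 +/- 0.5 s
        self.failed = False            # the declared (soaked) failure state
        self._defect = False           # the raw, instantaneous defect condition
        self._changed_at = None        # when the raw condition last changed

    def update(self, defect_present, now=None):
        """Feed the current raw defect condition; returns the declared state."""
        now = time.monotonic() if now is None else now
        if self._changed_at is None or defect_present != self._defect:
            self._defect = defect_present   # raw condition flipped:
            self._changed_at = now          # restart the soak interval
        elapsed = now - self._changed_at
        if self._defect and not self.failed and elapsed >= self.entry_soak:
            self.failed = True              # bona fide failure declared
        elif not self._defect and self.failed and elapsed >= self.clear_soak:
            self.failed = False             # failure cleared
        return self.failed

A transient hit shorter than the entry soak flips the raw condition back before the interval expires, so no failure is ever declared; that is exactly the false-alarm protection described above.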

 

 

 

Do we still need 50 ms (millisecond) restoration for running telecom services?

 

Discussing the 50 ms switching/restoration time for telecom circuits has always been exciting for every engineer who belongs to some part of telecom services, including optical, voice, data, microwave, and radio. I had been seeking the answer since the start of my telecom career, and I believe that even now engineers and telecom professionals hear this term at some point and wonder why ("WHY") it is so. So I researched the available knowledge pools and, using my experience, thought of putting it into words to enlighten some of my friends like me.

 

The 50 ms idea originated from automatic protection switching (APS) subsystems in early digital transmission systems. It was not actually based on any particular service requirement. The value persists not because it is grounded in technical considerations that could settle the matter, but because it has roots in historical practices and past capabilities and has been a tool of certain marketing strategies.

 

Initially, digital transmission systems based on 1:N APS typically required about 20 ms for fault detection, 10 ms for signaling, and 10 ms for the tail-end transfer relay operation, so the specification for APS switching times was reasonably set at 50 ms, allowing a 10 ms margin.

 

 

 

For information, early generations of DS1 channel banks (1970s era) also had a Carrier Group Alarm (CGA) threshold of about 230 ms. The CGA is a time threshold for the persistence of any alarm state on the transmission line side (such as loss of signal or loss of frame sync), after which all trunk channels would be busied out. But the requirement for 50 ms APS switching stayed in place, mainly because it was still technically quite feasible at no extra cost in the design of APS subsystems.

 

The apparent sanctity of 50 ms was further entrenched in the 1990s, at the start of the SONET era, by vendors who promoted only ring-based transport solutions and found it advantageous to insist on 50 ms as the requirement, effectively precluding equal consideration of distributed mesh restoration alternatives.

 

As a marketing strategy, the 50 ms issue served as the "mesh killer" of the 1990s, as most traditional telcos bought into it as a reference.

 

On the other hand, there was also real urgency in the early 1990s to deploy some kind of fast automated restoration method relatively immediately. This led to the quick adoption of ring-based solutions, which required only incremental development over 1+1 APS transmission systems. However, once rings were deployed, the effect was to further reinforce the cultural assumption of 50 ms as the standard. Thus, as sometimes happens in engineering, what was initially a performance capability in one specific context (APS switching time) evolved into a perceived requirement in all other contexts.

But the "50 ms requirement" is undergoing serious challenges to its validity as a ubiquitous requirement, even being referred to as the "50 ms myth" by data-centric entrants to the field who see little actual need for such fast restoration from an IP services standpoint. Faster restoration is by itself always desirable as a goal, but restoration goals must be carefully set in light of corresponding costs that may be paid in terms of limiting the available choices of network architecture. In practice, insistence on "50 ms" means 1+1 dedicated APS or UPSR rings (to follow) are almost the only choices left for the operator to consider. But if something more like 200 ms is allowed, the entire scope of efficient shared-mesh architectures becomes available. So it is an issue of real importance as to whether there are any services that truly require 50 ms.

 

Sosnosky's original study found no applications that require 50 ms restoration. However, the 50 ms requirement was still being debated in 2001 when Schallenburg, understanding the potential costs involved to his company, undertook a series of experimental trials with varying interruption times and measured various service degradations on voice circuits, SNA, ATM, X.25, SS7, DS1, 56 kb/s data, NTC digital video, SONET OC-12 access services, and OC-48. He tested with controlled-duration outages and found that 200 ms outages would not jeopardize any of these services and that, except for SS7 signaling links, all other services would in fact withstand outages of two to five seconds.

Thus, the supposed requirement for 50 ms restoration seems to be more of a techno-cultural myth than a real requirement—there are quite practical reasons to consider 2 seconds as an alternate goal for network restoration. This avoids the regime of connection and session time-outs and IP/MPLS layer reactions but gives a green light to the full consideration of far more efficient mesh-based survivable architectures.

 

A study by Sosnosky provides a summary of these effects, based on a detailed technical analysis of various services and signal types. In this study, outages are classified by their duration, and it shows how the main effects and characteristics change with different outage times.

 

 

Conclusive Comment

 

Considering the state-of-the-art technologies evolving over time in all areas of telecommunications, switching speeds are now very fast, and even hold-up timers (HUT) and hold-down or hold-off timers play significant roles: they can hold back consequent actions and avoid service unavailability. Yes, there will definitely be some packet loss, visible as errors on the links or occasionally as increased latency, but as we know, the impact varies with the nature of the service: voice, data, live streaming, internet surfing, video buffering, etc. So we can say that modern networks are quite resistant to brief outages, although this varies with the architecture of the network and the flow of services. Even 50 ms or 200 ms outages would not jeopardize services (data, video, voice), depending on the network architecture and routing of services.

I would love to see viewers comment on this for further discussion.

Reference
Wayne D. Grover, Mesh-Based Survivable Networks: Options and Strategies for Optical, MPLS, SONET, and ATM Networking.

 

 

 

What is the reason behind most EDFAs (erbium-doped fiber amplifiers) using only 980 nm and 1480 nm as their pumping wavelengths and not any other wavelength?

The pump wavelength used is either 980 nm or 1480 nm due to the availability of these laser sources. In the course of the explanation, let us look at the energy-level diagram of the erbium ion (Er3+), the absorption band of Er3+, and the pump efficiency.

 

 

 

 

Figure: (a) energy levels of erbium ions, showing the ground-state absorption (GSA) and excited-state absorption (ESA) transitions, and (b) gain and attenuation spectra.


There are several states to which the erbium ions can be pumped using sources at different wavelengths, such as those operating at 1480, 980, and 800 nm. However, in practical optical systems, the pump wavelength must provide a high gain per unit of pump power.

 

Commonly available laser diodes operate at 800, 980, and 1480 nm, but the pump efficiency, which can exceed 1 dB/mW with low attenuation, depends on the pump wavelength. At 800 nm, excited-state absorption (ESA) wastes pump photons by lifting already-excited ions to higher levels, which sharply reduces the pump efficiency.

 

The only pump laser sources that give a high pumping efficiency with low attenuation are those operating at 980 and 1480 nm.

 

In practice, the 980 nm pump source is commonly used due to its high gain coefficient (4 dB/mW). The difference between the effects of these two pump wavelengths is mainly caused by their absorption and emission factors.
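
As a rough, back-of-the-envelope illustration of what a gain coefficient means, here is a small Python sketch. It is a linear small-signal estimate only (real EDFAs saturate), and the 1480 nm coefficient below is an assumed placeholder for comparison; only the 4 dB/mW figure for 980 nm comes from the text above:

# Linear small-signal estimate: gain (dB) ~ gain coefficient (dB/mW) x pump power (mW).
# Real EDFAs saturate, so this scaling only holds at low pump powers.
gain_coeff_db_per_mw = {
    "980 nm": 4.0,   # quoted above
    "1480 nm": 2.0,  # assumed placeholder, for comparison only
}
pump_mw = 5.0
for wavelength, coeff in gain_coeff_db_per_mw.items():
    print(f"{wavelength} pump at {pump_mw} mW -> ~{coeff * pump_mw:.0f} dB small-signal gain")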

 

 


 

 

Reference: Le Nguyen Binh, Optical Fiber Communications Systems.

 

 

Quick note on the Pre-FEC, Post-FEC, BER, and Q relation.

Pre-FEC BER corresponding to Q

  • BER and Q can be calculated from one another according to the following relationship: BER = ½ · erfc(Q / √2), or equivalently Q = −Φ⁻¹(BER)
  • dBQ = 20·log10(Q)
  • Or, in Excel: dBQ = 20*LOG10(-NORMSINV(BER))
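
These conversions in Python (standard library only; equivalent to the Excel formula above):

import math
from statistics import NormalDist

def q_from_ber(ber):
    """Linear Q from BER: Q = -Phi^-1(BER) (what Excel's -NORMSINV computes)."""
    return -NormalDist().inv_cdf(ber)

def ber_from_q(q):
    """BER from linear Q: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def dbq(q):
    """Linear Q expressed in dB: dBQ = 20 * log10(Q)."""
    return 20 * math.log10(q)

print(f"{dbq(q_from_ber(3.8e-3)):.2f} dBQ")   # ~8.53 dBQ (FEC limit example below)
print(f"{dbq(q_from_ber(3.4e-2)):.2f} dBQ")   # ~5.23 dBQ (FEC limit example below)
print(f"{dbq(q_from_ber(1e-15)):.2f} dBQ")    # ~18 dBQ (the post-FEC "error-free" level)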

Post-FEC BER/Q

  • Post-FEC BER of <1e-15 essentially means no post-FEC errors
  • This is equivalent to ~18 dBQ, which is about as high as can be measured

 

FEC Limit

  • This is the lowest Q (or highest BER) that can be corrected by the FEC
  • Beyond this, post-FEC errors will occur

e.g.

 FEC Limit: 8.53 dBQ, or a BER of 3.8e-3
 FEC Limit: 5.23 dBQ, or a BER of 3.4e-2 (fewer than 97% of the bits need to be correct!)
 
 

Pre-FEC calculation example

Assume:

  • Bit errors (corrected): 219499456
  • Uncorrectable words: 0
  • Reported pre-FEC BER: 6.4577198E-6
  • The time at this instant of performance was 12:05:04, which means 304 seconds since the last interval
  • The FEC setting was STANDARD FEC, so the rate used for a 100G transponder is 1.1181 × 10^11 bit/s

General formula to calculate the pre-FEC BER:

  PRE_FEC BER = TotalErrors / (secsFromLast × rate)

  TotalErrors = bitErrorCorrected count + (9 × uncorrectedWords count)

Substituting the values:

  219499456 / (304 × 1.1181 × 10^11) = 6.4577198E-6
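
The same calculation as a small Python helper (the function and argument names are mine, not from any NMS):

def pre_fec_ber(bit_errors_corrected, uncorrected_words, seconds, rate_bps):
    """Pre-FEC BER over one interval, per the formula above: each uncorrectable
    word is weighted as 9 bit errors, and the total is divided by bits sent."""
    total_errors = bit_errors_corrected + 9 * uncorrected_words
    return total_errors / (seconds * rate_bps)

# The worked example: standard FEC on a 100G transponder
print(pre_fec_ber(219499456, 0, 304, 1.1181e11))   # ~6.4577e-06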
       

 

 

