Forward Error Correction (FEC) has become an indispensable tool in modern optical communication, enhancing signal integrity and extending transmission distances. ITU-T recommendations, such as G.693, G.959.1, and G.698.1, define application codes for optical interfaces that incorporate FEC as specified in ITU-T G.709. In this blog, we discuss the significance of Bit Error Ratio (BER) in FEC-enabled applications and how it influences optical transmitter and receiver performance.

The Basics of FEC in Optical Communications

FEC is a method of error control for data transmission, where the sender adds redundant data to its messages. This allows the receiver to detect and correct errors without the need for retransmission. In the context of optical networks, FEC is particularly valuable because it can significantly lower the BER after decoding, thus ensuring the accuracy and reliability of data across vast distances.
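As a toy illustration of this principle (hypothetical, not any standardized optical FEC code), the sketch below uses a simple 3× repetition code: the sender triples every bit, and the receiver takes a majority vote over each triplet, correcting any single flipped bit without retransmission.

```python
from collections import Counter

def fec_encode(bits):
    """Repetition-3 encode: transmit each data bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def fec_decode(coded):
    """Majority vote over each triplet corrects any single bit flip."""
    return [Counter(coded[i:i + 3]).most_common(1)[0][0]
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
tx = fec_encode(message)          # 12 bits on the wire
tx[4] ^= 1                        # the channel flips one bit in transit
assert fec_decode(tx) == message  # the receiver still recovers the data
```

Real optical FEC codes (e.g. Reed-Solomon) are far more efficient, but the trade is the same: extra redundant bits in exchange for a much lower post-decoding BER.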

BER Requirements in FEC-Enabled Applications

For certain optical transport unit rates (OTUk), the system BER is mandated to meet specific standards only after FEC correction has been applied. The optical parameters, in these scenarios, are designed to achieve a BER no worse than 10⁻¹² at the FEC decoder’s output. This benchmark ensures that the data, once processed by the FEC decoder, maintains an extremely high level of accuracy, which is crucial for high-performance networks.

Practical Implications for Network Hardware

When it comes to testing and verifying the performance of optical hardware components intended for FEC-enabled applications, achieving a BER of 10⁻¹² at the decoder’s output is often sufficient. Attempting to test components at 10⁻¹² at the receiver output, prior to FEC decoding, can lead to unnecessarily stringent criteria that may not reflect the operational requirements of the application.

Adopting Appropriate BER Values for Testing

The selection of an appropriate BER for testing components depends on the specific application. Theoretical calculations suggest a BER of 1.8×10⁻⁴ at the receiver output (Point A) to achieve a BER of 10⁻¹² at the FEC decoder output (Point B). However, due to variations in error statistics, the average BER at Point A may need to be lower than the theoretical value to ensure the desired BER at Point B. In practice, a BER range of 10⁻⁵ to 10⁻⁶ is considered suitable for most applications.
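The theoretical figure above can be reproduced for the RS(255,239) Reed-Solomon code specified in ITU-T G.709, which corrects up to t = 8 symbol errors per 255-symbol codeword. The sketch below estimates the probability of an uncorrectable codeword at Point B from a given pre-FEC BER at Point A; it is an idealized calculation assuming random, independent errors, not a vendor measurement.

```python
from math import comb

def symbol_error_prob(ber, bits_per_symbol=8):
    """Probability that an 8-bit RS symbol contains at least one bit error."""
    return 1 - (1 - ber) ** bits_per_symbol

def uncorrectable_prob(ber, n=255, t=8):
    """Probability that a codeword suffers more than t symbol errors,
    which RS(255,239) cannot correct (independent-error assumption)."""
    ps = symbol_error_prob(ber)
    return sum(comb(n, j) * ps**j * (1 - ps) ** (n - j)
               for j in range(t + 1, n + 1))

# A pre-FEC BER of 1.8e-4 at Point A leaves only a tiny residual
# probability of an uncorrectable codeword at Point B.
print(f"{uncorrectable_prob(1.8e-4):.2e}")
```

Running this shows the residual error probability dropping by several orders of magnitude relative to the pre-FEC BER, which is the headroom that makes post-FEC testing targets so much easier to meet.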

Conservative Estimation for Receiver Sensitivity

By using a BER of 10⁻⁶ for component verification, the measurements of receiver sensitivity and optical path penalty at Point A will be conservative estimates of the values after FEC correction. This approach provides a practical and cost-effective method for ensuring component performance aligns with the rigorous demands of FEC-enabled systems.

Conclusion

FEC is a powerful mechanism that significantly improves the error tolerance of optical communication systems. By understanding and implementing appropriate BER testing methodologies, network operators can ensure their components are up to the task, ultimately leading to more reliable and efficient networks.

As the demands for data grow, the reliance on sophisticated FEC techniques will only increase, cementing BER as a fundamental metric in the design and evaluation of optical communication systems.

References

https://www.itu.int/rec/T-REC-G/e

When bit errors occur in a system, the OSNR measured at the transmit end is generally good, so the fault is well hidden. A simple diagnostic is to decrease the optical power at the transmit end: if the number of bit errors then decreases, the problem is a non-linearity problem; if the number of bit errors increases, the problem is OSNR degradation.
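This decision rule can be captured in a few lines; the function name and labels here are illustrative, not from any standard:

```python
def diagnose_bit_errors(errors_before, errors_after):
    """Classify a bit-error fault by how the error count responds
    when transmit optical power is decreased.

    errors_before / errors_after: bit-error counts measured before
    and after lowering the transmit power.
    """
    if errors_after < errors_before:
        return "non-linearity"     # lower power reduced non-linear effects
    if errors_after > errors_before:
        return "OSNR degradation"  # lower power worsened the OSNR margin
    return "inconclusive"

print(diagnose_bit_errors(120, 40))  # error count fell when power dropped
```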

 

General Causes of Bit Errors

  • Performance degradation of key boards
  • Abnormal optical power
  • Signal-to-noise ratio decrease
  • Non-linear factors
  • Dispersion (chromatic dispersion/PMD) factors
  • Optical reflection
  • External factors (fiber, fiber jumper, power supply, environment and others)

The ITU standards define a “suspect internal flag” which should indicate if the data contained within a register is ‘suspect’ (conditions defined in Q.822). This is more frequently referred to as the IDF (Invalid Data Flag).

PM is bounded by strict data-collection rules as defined in the standards. When the collection of PM parameters is affected, the PM system labels the collected data as suspect with an Invalid Data Flag (IDF). For ease of identification, a unique flag is shown next to the corresponding counter.

The purpose of the flag is to indicate that the data in the PM bin may be incomplete or may have been affected such that it is not completely reliable. The IDF does not imply a software fault.

Some of the common reasons for setting the IDF include:

  • a collection time period that does not start within +/- 1 second of the nominal collection window start time
  • a time interval that is inaccurate by 10 seconds or more
  • the current time period changing by 10 seconds or more
  • a restart (a System Controller restart wipes out all history data and causes time fluctuations at the line/client modules; a module restart wipes out the current counts)
  • a PM bin being cleared manually
  • a hardware failure that prevents PM from properly collecting a full period of PM data (PM clock failure)
  • a protection switch that has caused a change of payload on a protection channel
  • a payload reconfiguration (similar to the above, but not restricted to protection switches)
  • a System Controller archive failure, preventing history data from being collected from the line/client cards
  • the protection mode being switched from non-revertive to revertive (affects PSD only)
  • a protection switch clear indication received when no raise was indicated
  • a laser device failure (affects physical PMs)
  • loss of signal (affects receive – OPRx, IQ – physical PMs only)
  • the Control Plane having been up for less than the 15-minute period for a 15-minute interval, or less than the 24-hour period for a 24-hour interval

A suspect interval is determined by comparing nSamples to nTotalSamples on a counter PM. If nSamples is not equal to nTotalSamples, the period can be marked as suspect.

If any 15-minute interval is marked as suspect, or reporting for that day's interval did not start at midnight, the corresponding 24-hour interval should be flagged as suspect.
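The two rules above can be sketched as follows; the field names mirror the nSamples/nTotalSamples counters mentioned, but the data structures are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PmInterval:
    n_samples: int        # samples actually collected
    n_total_samples: int  # samples expected for a full period

def is_suspect(interval):
    """A counter-PM interval is suspect if any samples were missed."""
    return interval.n_samples != interval.n_total_samples

def day_is_suspect(quarter_hours, started_at_midnight=True):
    """The 24-hour bin is suspect if any 15-minute bin is suspect,
    or if daily reporting did not start at midnight."""
    return (not started_at_midnight) or any(is_suspect(q) for q in quarter_hours)

full = PmInterval(n_samples=900, n_total_samples=900)
short = PmInterval(n_samples=842, n_total_samples=900)  # e.g. a module restart
print(day_is_suspect([full, short]))  # one suspect 15-min bin taints the day
```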

Some of the common examples are:

  • The interface type is changed to another compatible interface (e.g., a 10G SR interface replaced by a 10G DWDM interface).
  • The line type is changed from SONET to SDH.
  • Equipment failures are detected that inhibit the accumulation of PM.
  • Transitions to/from the ‘locked’ state occur.
  • The facility object is created or deleted during the interval (the system shall mark the accumulation period invalid).
  • The node time is changed.