Syslog is one of the most widely used protocols for logging system events, providing network and optical device administrators with the ability to collect, monitor, and analyze logs from a wide range of devices. This protocol is essential for network monitoring, troubleshooting, security audits, and regulatory compliance. Originally developed in the 1980s, Syslog has since become a standard logging protocol, used in various network and telecommunications environments, including optical devices. Let's explore Syslog: its architecture, how it works, its variants, and its use cases. We will also look at its implementation on optical devices and how to configure and use it effectively to ensure robust logging in network environments.

What Is Syslog?

Syslog (System Logging Protocol) is a protocol used to send event messages from devices to a central server called a Syslog server. These event messages are used for various purposes, including:

  • Monitoring: Identifying network performance issues, equipment failures, and status updates.
  • Security: Detecting potential security incidents and compliance auditing.
  • Troubleshooting: Diagnosing issues in real-time or after an event.

Syslog operates over UDP (port 514) by default, but can also use TCP to ensure reliability, especially in environments where message loss is unacceptable. Many network devices, including routers, switches, firewalls, and optical devices such as optical transport networks (OTNs) and DWDM systems, use Syslog to send logs to a central server.

How Syslog Works

Syslog follows a simple architecture consisting of three key components:

  • Syslog Client: The network device (such as a switch, router, or optical transponder) that generates log messages.
  • Syslog Server: The central server where log messages are sent and stored. This could be a dedicated logging solution like Graylog, RSYSLOG, Syslog-ng, or a SIEM system.
  • Syslog Message: The log data itself, consisting of several fields such as timestamp, facility, severity, hostname, and message content.

Syslog Message Format

Syslog messages contain the following fields:

  1. Priority (PRI): A combination of facility and severity, indicating the type and urgency of the message.
  2. Timestamp: The time at which the event occurred.
  3. Hostname/IP: The device generating the log.
  4. Message: A human-readable description of the event.

Example of a Syslog Message:

 <34>Oct 10 13:22:01 router-1 interface GigabitEthernet0/1 down

This message shows that the device with hostname router-1 logged an event at Oct 10 13:22:01, indicating that the GigabitEthernet0/1 interface went down.
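The fields of such a message can be pulled apart programmatically. Below is a minimal sketch; the regex and field names are illustrative, not taken from any particular library:

```python
import re

def parse_syslog(line: str) -> dict:
    """Split a BSD-style (RFC 3164) syslog line into PRI, timestamp, hostname and message."""
    m = re.match(r"<(\d{1,3})>(\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) (\S+) (.*)", line)
    if not m:
        raise ValueError("not a valid BSD-syslog line")
    pri = int(m.group(1))
    return {
        "facility": pri // 8,   # PRI encodes facility * 8 + severity
        "severity": pri % 8,
        "timestamp": m.group(2),
        "hostname": m.group(3),
        "message": m.group(4),
    }

print(parse_syslog("<34>Oct 10 13:22:01 router-1 interface GigabitEthernet0/1 down"))
```

For the example message, PRI 34 decodes to facility 4 (auth) and severity 2 (critical).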

Syslog Severity Levels

Syslog messages are categorized by severity to indicate the importance of each event. Severity levels range from 0 (most critical) to 7 (least critical):

  • 0 – Emergency: system is unusable
  • 1 – Alert: action must be taken immediately
  • 2 – Critical: critical conditions
  • 3 – Error: error conditions
  • 4 – Warning: warning conditions
  • 5 – Notice: normal but significant condition
  • 6 – Informational: informational messages
  • 7 – Debug: debug-level messages

Syslog Facilities

Syslog messages also include a facility code that categorizes the source of the log message. Commonly used facilities include:

  • 0 – kern: kernel messages
  • 1 – user: user-level messages
  • 3 – daemon: system daemons
  • 4 – auth: security/authorization messages
  • 16–23 – local0 through local7: locally defined uses, commonly assigned to network and optical devices

Each facility is paired with a severity level to determine the Priority (PRI) of the Syslog message.
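That pairing is a simple arithmetic encoding, PRI = facility × 8 + severity (per RFC 5424). A minimal sketch:

```python
# Standard severity keywords, indexed by level 0-7.
SEVERITIES = ["emergency", "alert", "critical", "error",
              "warning", "notice", "informational", "debug"]

def encode_pri(facility: int, severity: int) -> int:
    """Combine facility and severity into the PRI value."""
    return facility * 8 + severity

def decode_pri(pri: int) -> tuple:
    """Recover (facility, severity keyword) from a PRI value."""
    return pri // 8, SEVERITIES[pri % 8]

print(encode_pri(4, 2))   # auth facility (4), critical (2) -> 34
print(decode_pri(34))     # -> (4, 'critical')
```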

Syslog in Optical Networks

Syslog is crucial in optical networks, particularly in managing and monitoring optical transport devices, DWDM systems, and Optical Transport Networks (OTNs). These devices generate various logs related to performance, alarms, and system health, which can be critical for maintaining service-level agreements (SLAs) in telecom environments.

Common Syslog Use Cases in Optical Networks:

  1. DWDM System Monitoring:
    • Track optical signal power levels, bit error rates, and link status in real-time.
    • Example: “DWDM Line 1 signal degraded, power level below threshold.”
  2. OTN Alarms:
    • Log alarms related to client signal loss, multiplexing issues, and channel degradations.
    • Example: “OTN client signal failure on port 3.”
  3. Performance Monitoring:
    • Monitor latency, jitter, and packet loss in the optical transport network, essential for high-performance links.
    • Example: “Performance threshold breach on optical channel, jitter exceeded.”
  4. Hardware Failure Alerts:
    • Receive notifications for hardware-related failures, such as power supply issues or fan failures.
    • Example: “Power supply failure on optical amplifier module.”

These logs can be critical for network operations centers (NOCs) to detect and resolve problems in the optical network before they impact service.

Syslog Example for Optical Devices

Here’s an example of a Syslog message from an optical device, such as a DWDM system:

<22>Oct 12 10:45:33 DWDM-1 optical-channel-1 signal degradation, power level -5.5dBm, threshold -5dBm

This message shows that on DWDM-1, optical-channel-1 is experiencing signal degradation, with the power level reported at -5.5dBm, below the threshold of -5dBm. Such logs are crucial for maintaining the integrity of the optical link.

Syslog Variants and Extensions

Several extensions and variants of Syslog add advanced functionality:

Reliable Delivery (RFC 6587 and RFC 5425)

The traditional UDP-based Syslog delivery method can lead to log message loss. To address this, Syslog has been extended to support TCP-based delivery and even Syslog over TLS (RFC 5425), which ensures encrypted and reliable message delivery, particularly useful for secure environments like data centers and optical networks.

Structured Syslog

To standardize log formats across different vendors and devices, Structured Syslog (RFC 5424) allows logs to include structured data in a key-value format, enabling easier parsing and analysis.

Syslog Implementations for Network and Optical Devices

To implement Syslog in network or optical environments, the following steps are typically involved:

Step 1: Enable Syslog on Devices

For optical devices such as Cisco NCS (Network Convergence System) or Huawei OptiX OSN, Syslog can be enabled to forward logs to a central Syslog server.

Example for Cisco Optical Device:

logging host 192.168.1.10 
logging trap warnings

In this example:

    • logging host configures the Syslog server’s IP.
    • logging trap warnings ensures that only messages with a severity of warning (level 4) or more severe (levels 0–4) are forwarded.

Step 2: Configure Syslog Server

Install a Syslog server (e.g., Syslog-ng, RSYSLOG, Graylog). Configure the server to receive and store logs from optical devices.

Example for RSYSLOG:

module(load="imudp")
input(type="imudp" port="514") 
*.* /var/log/syslog

Step 3: Configure Log Rotation and Retention

Set up log rotation to manage disk space on the Syslog server. This ensures older logs are archived and only recent logs are stored for immediate access.
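To verify the server end-to-end, one option is Python's standard-library SysLogHandler. This sketch sends a test message over UDP; the loopback address is used here purely for illustration, so substitute your Syslog server's IP (such as the 192.168.1.10 from the earlier configuration):

```python
import logging
import logging.handlers

logger = logging.getLogger("optical-test")
logger.setLevel(logging.INFO)

# UDP to port 514 by default; pass socktype=socket.SOCK_STREAM for TCP.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
logger.addHandler(handler)

# This should appear in /var/log/syslog on the RSYSLOG server.
logger.warning("optical-channel-1 signal degradation test message")
```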

Syslog Advantages

Syslog offers several advantages for logging and network management:

  • Simplicity: Syslog is easy to configure and use on most network and optical devices.
  • Centralized Management: It allows for centralized log collection and analysis, simplifying network monitoring and troubleshooting.
  • Wide Support: Syslog is supported across a wide range of devices, including network switches, routers, firewalls, and optical systems.
  • Real-time Alerts: Syslog can provide real-time alerts for critical issues like hardware failures or signal degradation.

Syslog Disadvantages

Syslog also has some limitations:

  • Lack of Reliability (UDP): If using UDP, Syslog messages can be lost during network congestion or failures. This can be mitigated by using TCP or Syslog over TLS.
  • Unstructured Logs: Syslog messages can vary widely in format, which can make parsing and analyzing logs more difficult. However, structured Syslog (RFC 5424) addresses this issue.
  • Scalability: In large networks with hundreds or thousands of devices, Syslog servers can become overwhelmed with log data. Solutions like log aggregation or log rotation can help manage this.

Syslog Use Cases

Syslog is widely used in various scenarios:

Network Device Monitoring

    • Collect logs from routers, switches, and firewalls for real-time network monitoring.
    • Detect issues such as link flaps, protocol errors, and device overloads.

Optical Transport Networks (OTN) Monitoring

    • Track optical signal health, link integrity, and performance thresholds in DWDM systems.
    • Generate alerts when signal degradation or failures occur on critical optical links.

Security Auditing

    • Log security events such as unauthorized login attempts or firewall rule changes.
    • Centralize logs for compliance with regulations like GDPR, HIPAA, or PCI-DSS.

Syslog vs. Other Logging Protocols: A Quick Comparison

  • Syslog: push-based, free-form event and error logging; simple and supported on virtually every network and optical device.
  • SNMP traps: structured, MIB-based event notifications; better suited to automated state monitoring and polling, but less flexible for free-text logs.
  • Streaming telemetry: model-driven, high-frequency operational data; richer and machine-parsable, but heavier to deploy than Syslog.

Syslog Use Case for Optical Networks

Imagine a scenario where an optical transport network (OTN) link begins to degrade due to a fiber issue:

  • The OTN transponder detects a degradation in signal power.
  • The device generates a Syslog message indicating the power level is below a threshold.
  • The Syslog message is sent to a Syslog server for real-time alerting.
  • The network administrator is notified immediately, allowing them to dispatch a technician to inspect the fiber and prevent downtime.

Example Syslog Message:

<27>Oct 13 14:10:45 OTN-Transponder-1 optical-link-3 signal degraded, power level -4.8dBm, threshold -4dBm

Summary

Syslog remains one of the most widely-used protocols for logging and monitoring network and optical devices due to its simplicity, versatility, and wide adoption across vendors. Whether managing a large-scale DWDM system, monitoring OTNs, or tracking network security, Syslog provides an essential mechanism for real-time logging and event monitoring. Its limitations, such as unreliable delivery via UDP, can be mitigated by using Syslog over TCP or TLS in secure or mission-critical environments.

 

Introduction

A Digital Twin Network (DTN) represents a major innovation in networking technology, creating a virtual replica of a physical network. This advanced technology enables real-time monitoring, diagnosis, and control of physical networks by providing an interactive mapping between the physical and digital domains. The concept has been widely adopted in various industries, including aerospace, manufacturing, and smart cities, and is now being explored to meet the growing complexities of telecommunication networks.

Here we will take a deep dive into the fundamentals of Digital Twin Networks, their key requirements, architecture, and security considerations, based on the ITU-T Y.3090 Recommendation.

What is a Digital Twin Network?

A DTN is a virtual model that mirrors the physical network’s operational status, behavior, and architecture. It enables a real-time interactive relationship between the two domains, which helps in analysis, simulation, and management of the physical network. The DTN leverages technologies such as big data, machine learning (ML), artificial intelligence (AI), and cloud computing to enhance the functionality and predictability of networks.

Key Characteristics of Digital Twin Networks

According to ITU-T Y.3090, a DTN is built upon four core characteristics:

    1. Data: Data is the foundation of the DTN system. The physical network’s data is stored in a unified digital repository, providing a single source of truth for network applications.
    2. Real-time Interactive Mapping: The ability to provide a real-time, bi-directional interactive relationship between the physical network and the DTN sets DTNs apart from traditional network simulations.
    3. Modeling: The DTN contains data models representing various components and behaviors of the network, allowing for flexible simulations and predictions based on real-world data.
    4. Standardized Interfaces: Interfaces, both southbound (connecting the physical network to the DTN) and northbound (exchanging data between the DTN and network applications), are critical for ensuring scalability and compatibility.

Functional Requirements of DTN

For a DTN to function efficiently, several critical functional requirements must be met:

    1. Efficient Data Collection:
      • The DTN must support massive data collection from network infrastructure, such as physical or logical devices, network topologies, ports, and logs.
      • Data collection methods must be lightweight and efficient to avoid strain on network resources.
    2. Unified Data Repository:
      • The data collected is stored in a unified repository that allows real-time access and management of operational data. This repository must support efficient storage techniques, data compression, and backup mechanisms.
    3. Unified Data Models:
      • The DTN requires accurate and real-time models of network elements, including routers, firewalls, and network topologies. These models allow for real-time simulation, diagnosis, and optimization of network performance.
    4. Open and Standard Interfaces:
      • Southbound and northbound interfaces must support open standards to ensure interoperability and avoid vendor lock-in. These interfaces are crucial for exchanging information between the physical and digital domains.
    5. Management:
      • The DTN management function includes lifecycle management of data, topology, and models. This ensures efficient operation and adaptability to network changes.

Service Requirements

Beyond its functional capabilities, a DTN must meet several service requirements to provide reliable and scalable network solutions:

    1. Compatibility: The DTN must be compatible with various network elements and topologies from multiple vendors, ensuring that it can support diverse physical and virtual network environments.
    2. Scalability: The DTN should scale in tandem with network expansion, supporting both large-scale and small-scale networks. This includes handling an increasing volume of data, network elements, and changes without performance degradation.
    3. Reliability: The system must ensure stable and accurate data modeling, interactive feedback, and high availability (99.99% uptime). Backup mechanisms and disaster recovery plans are essential to maintain network stability.
    4. Security: A DTN must secure sensitive data, protect against cyberattacks, and ensure privacy compliance throughout the lifecycle of the network’s operations.
    5. Visualization and Synchronization: The DTN must provide user-friendly visualization of network topology, elements, and operations. It should also synchronize with the physical network, providing real-time data accuracy.
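The 99.99% availability target above translates into a concrete downtime budget; a minimal sketch to make that tangible:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per year for a given availability fraction."""
    return (1.0 - availability) * 365 * 24 * 60

# "Four nines" allows only about 52.6 minutes of downtime per year.
print(round(downtime_minutes_per_year(0.9999), 1))  # -> 52.6
```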

Architecture of a Digital Twin Network

The architecture of a DTN is designed to bridge the gap between physical networks and virtual representations. ITU-T Y.3090 proposes a “Three-layer, Three-domain, Double Closed-loop” architecture:

    1. Three-layer Structure:
      • Physical Network Layer: The bottom layer consists of all the physical network elements that provide data to the DTN via southbound interfaces.
      • Digital Twin Layer: The middle layer acts as the core of the DTN system, containing subsystems like the unified data repository and digital twin entity management.
      • Application Layer: The top layer is where network applications interact with the DTN through northbound interfaces, enabling automated network operations, predictive maintenance, and optimization.
    2. Three-domain Structure:
      • Data Domain: Collects, stores, and manages network data.
      • Model Domain: Contains the data models for network analysis, prediction, and optimization.
      • Management Domain: Manages the lifecycle and topology of the digital twin entities.
    3. Double Closed-loop:
      • Inner Loop: The virtual network model is constantly optimized using AI/ML techniques to simulate changes.
      • Outer Loop: The optimized solutions are applied to the physical network in real-time, creating a continuous feedback loop between the DTN and the physical network.

Use Cases of Digital Twin Networks

DTNs offer numerous use cases across various industries and network types:

    1. Network Operation and Maintenance: DTNs allow network operators to perform predictive maintenance by diagnosing and forecasting network issues before they impact the physical network.
    2. Network Optimization: DTNs provide a safe environment for testing and optimizing network configurations without affecting the physical network, reducing operating expenses (OPEX).
    3. Network Innovation: By simulating new network technologies and protocols in the virtual twin, DTNs reduce the risks and costs of deploying innovative solutions in real-world networks.
    4. Intent-based Networking (IBN): DTNs enable intent-based networking by simulating the effects of network changes based on high-level user intents.

Conclusion

A Digital Twin Network is a transformative concept that will redefine how networks are managed, optimized, and maintained. By providing a real-time, interactive mapping between physical and virtual networks, DTNs offer unprecedented capabilities in predictive maintenance, network optimization, and innovation.

As the complexities of networks grow, adopting a DTN architecture will be crucial for ensuring efficient, secure, and scalable network operations in the future.

Reference

ITU-T Y.3090

Power Change during add/remove of channels on filters

The power change can be quantified as the ratio between the number of channels at the reference point after the channels are added or dropped and the number of channels at that reference point previously. We consider composite power here, with each channel at the same optical power in dBm.

So whenever we add or delete a number of channels on a MUX/DEMUX/FILTER/WSS, the following equations define the new changed power.

For the case when channels are added (as illustrated on the right side of Figure 1):

ΔP (dB) = 10 log10((A + U) / U)

where:

A is the number of added channels

U is the number of undisturbed channels

For the case when channels are dropped (as illustrated on the left side of Figure 1):

ΔP (dB) = 10 log10(U / (D + U))

where:

D is the number of dropped channels

U is the number of undisturbed channels

Figure 1

For example:

  • adding 7 channels with one channel undisturbed gives a power change of +9 dB;
  • dropping 7 channels with one channel undisturbed gives a power change of –9 dB;
  • adding 31 channels with one channel undisturbed gives a power change of +15 dB;
  • dropping 31 channels with one channel undisturbed gives a power change of –15 dB.

Refer to ITU-T G.680 for further study.
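Assuming equal per-channel powers, the composite power change is 10·log10 of the channel-count ratio; a minimal sketch reproducing the examples above:

```python
import math

def power_change_add(A: int, U: int) -> float:
    """Composite power change in dB when A channels are added to U undisturbed channels."""
    return 10 * math.log10((A + U) / U)

def power_change_drop(D: int, U: int) -> float:
    """Composite power change in dB when D channels are dropped and U remain."""
    return 10 * math.log10(U / (D + U))

print(round(power_change_add(7, 1)))    # -> 9
print(round(power_change_drop(7, 1)))   # -> -9
print(round(power_change_add(31, 1)))   # -> 15
```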

HD-FEC vs SD-FEC

Definition:
  • HD-FEC: decoding based on hard bits (the output is quantized to only two levels) is called “HD (hard-decision) decoding”, where each bit is considered definitely one or zero.
  • SD-FEC: decoding based on soft bits (the output is quantized to more than two levels) is called “SD (soft-decision) decoding”, which provides not only the one/zero decision but also confidence information for that decision.

Application:
  • HD-FEC: generally for non-coherent detection optical systems, e.g., 10 Gbit/s and 40 Gbit/s, and also for some coherent detection optical systems with higher OSNR.
  • SD-FEC: coherent detection optical systems, e.g., 100 Gbit/s and 400 Gbit/s.

Electronics requirement:
  • HD-FEC: an ADC (analogue-to-digital converter) is not necessary in the receiver.
  • SD-FEC: an ADC is required in the receiver to provide soft information, e.g., in coherent detection optical systems.

Specification:
  • HD-FEC: general FEC per [ITU-T G.975]; super FEC per [ITU-T G.975.1].
  • SD-FEC: vendor specific.

Typical scheme:
  • HD-FEC: concatenated RS/BCH.
  • SD-FEC: LDPC (low-density parity check), TPC (turbo product code).

Complexity: medium for HD-FEC; high for SD-FEC.

Redundancy ratio: generally 7% for HD-FEC; around 20% for SD-FEC.

NCG: about 5.6 dB for general FEC and >8.0 dB for super FEC (HD-FEC); >10.0 dB for SD-FEC.

Analogy (if you asked a friend about the traffic jam status on the roads):
  • HD: “fully jammed” or “free” — a definite yes/no answer.
  • SD: “about 50-50, but I found the other way free with less traffic” — a decision plus confidence information.

Optical power tolerance: It refers to the tolerable limit of input optical power, which is the range from the sensitivity to the overload point.

Optical power requirement: It refers to the requirement on input optical power, realized by adjusting the system (such as an adjustable attenuator, fixed attenuator, or optical amplifier).

Optical power margin: It refers to an acceptable extra range of optical power. For example, a “–5/+3 dB” requirement is actually a margin requirement.
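The tolerance definition above amounts to a simple range check between sensitivity and overload; a minimal sketch, where the −18 dBm sensitivity and −5 dBm overload figures are made-up illustrative values, not from any standard:

```python
def within_tolerance(power_dbm: float,
                     sensitivity_dbm: float = -18.0,   # hypothetical receiver sensitivity
                     overload_dbm: float = -5.0) -> bool:
    """True when the input power sits inside the sensitivity-to-overload window."""
    return sensitivity_dbm <= power_dbm <= overload_dbm

print(within_tolerance(-10.0))  # -> True
print(within_tolerance(-20.0))  # -> False (below sensitivity)
```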

The main advantages and drawbacks of EDFAs are as follows.

Advantages

  • Commercially available in C band (1,530 to 1,565 nm) and L band (1,560 to 1,605 nm), and up to an 84-nm range at the laboratory stage;
  • Excellent coupling: the amplifier medium is an SM fiber;
  • Insensitivity to light polarization state;
  • Low sensitivity to temperature;
  • High gain: > 30 dB, with gain flatness < ±0.8 dB and < ±0.5 dB in C and L band, respectively, in the scientific literature and in manufacturer documentation;
  • Low noise figure: 4.5 to 6 dB;
  • No distortion at high bit rates;
  • Simultaneous amplification of wavelength division multiplexed signals;
  • Immunity to crosstalk among wavelength multiplexed channels (to a large extent).

Drawbacks

  • Pump laser necessary;
  • Difficult to integrate with other components;
  • Need to use a gain equalizer for multistage amplification;
  • Dropping channels can give rise to errors in surviving channels: dynamic control of amplifiers is necessary.

The ITU standards define a “suspect internal flag” which indicates whether the data contained within a register is ‘suspect’ (conditions defined in Q.822). This is more frequently referred to as the IDF (Invalid Data Flag).

PM is bounded by strict data collection rules as defined in the standards. When the collection of PM parameters is affected, the PM system labels the collected data as suspect with an Invalid Data Flag (IDF). For ease of identification, a unique flag is shown next to the corresponding counter.

The purpose of the flag is to indicate when the data in the PM bin may not be complete, or may have been affected such that it is not completely reliable. The IDF does not indicate a software fault.

Some of the common reasons for setting the IDF include:

  • a collection time period that does not start within +/- 1 second of the nominal collection window start time;
  • a time interval that is inaccurate by +/- 10 seconds (or more);
  • the current time period changes by +/- 10 seconds (or more);
  • a restart (System Controller restarts will wipe out all history data and cause time fluctuations at line/client modules; a module restart will wipe out the current counts);
  • a PM bin is cleared manually;
  • a hardware failure prevents PM from properly collecting a full period of PM data (PM clock failure);
  • a protection switch has caused a change of payload on a protection channel;
  • a payload reconfiguration has occurred (similar to the above, but not restricted to protection switches);
  • a System Controller archive failure has occurred, preventing history data from being collected from the line/client cards;
  • protection mode is switched from non-revertive to revertive (affects PSD only);
  • a protection switch clear indication is received when no raise was indicated;
  • laser device failure (affects physical PMs);
  • loss of signal (affects receive – OPRx, IQ – physical PMs only);
  • the Control Plane has been up for less than 15 minutes for a 15-min interval, or less than 24 hours for a 24-hour interval.

A suspect interval is determined by comparing nSamples to nTotalSamples on a counter PM. If nSamples is not equal to nTotalSamples, the period can be marked as suspect.

If any 15-minute interval is marked as suspect, or reporting for the day interval did not start at midnight, the corresponding 24-hour interval should be flagged as suspect.
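The suspect-marking rules above can be sketched as follows; the function and field names are illustrative, not from any NMS API:

```python
def interval_suspect(n_samples: int, n_total_samples: int) -> bool:
    """A counter PM interval is suspect when not all expected samples were collected."""
    return n_samples != n_total_samples

def day_suspect(quarter_hour_flags: list, started_at_midnight: bool) -> bool:
    """The 24-hour bin is suspect if any 15-min bin is suspect
    or the day's reporting did not start at midnight."""
    return any(quarter_hour_flags) or not started_at_midnight

print(interval_suspect(899, 900))        # -> True (one sample missing)
print(day_suspect([False] * 96, True))   # -> False (all 96 bins clean)
```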

Some of the common examples are:

  • interface type is changed to another compatible interface (a 10G SR interface replaced by a 10G DWDM interface);
  • line type is changed from SONET to SDH;
  • equipment failures are detected and those failures inhibit the accumulation of PM;
  • transitions to/from the ‘locked’ state;
  • the system shall mark a given accumulation period invalid when the facility object is created or deleted during the interval;
  • node time is changed.

A short discussion on 980nm and 1480nm pump based EDFA

Introduction

The 980 nm pump needs three energy levels for radiation, while the 1480 nm pump can excite the ions directly to the metastable level.

(a) Energy level scheme of the ground and first two excited states of Er ions in a silica matrix. The sublevel splitting and the lengths of the arrows representing absorption and emission transitions are not drawn to scale. In the case of the 4I11/2 state, τ is the lifetime for nonradiative decay to the 4I13/2 first excited state, and τsp is the spontaneous lifetime of the 4I13/2 first excited state. (b) Absorption coefficient, α, and emission coefficient, g*, spectra for a typical aluminium co-doped EDF.

The most important feature of the level scheme is that the transition energy between the 4I15/2 ground state and the 4I13/2 first excited state corresponds to photon wavelengths (approximately 1530 to 1560 nm) for which the attenuation in silica fibers is lowest. Amplification is achieved by creating an inversion by pumping atoms into the first excited state, typically using either 980 nm or 1480 nm diode lasers. Because of the superior noise figure they provide and their superior wall-plug efficiency, most EDFAs are built using 980 nm pump diodes. 1480 nm pump diodes are still often used in L-band EDFAs, although here, too, 980 nm pumps are becoming more widely used.

Though pumping at 1480 nm is used and has a higher optical power conversion efficiency than 980 nm pumping, the latter is preferred because of the following advantages over 1480 nm pumping.

  • It provides a wider separation between the laser wavelength and the pump wavelength.
  • 980 nm pumping gives less noise than 1480 nm pumping.
  • Unlike 1480 nm pumping, 980 nm pumping cannot stimulate transitions back to the ground state.
  • 980 nm pumping also gives a higher signal gain, the maximum gain coefficient being 11 dB/mW against 6.3 dB/mW for 1480 nm pumping.
  • The better performance of 980 nm pumping over 1480 nm pumping is related to the fact that the former has a narrower absorption spectrum.
  • The inversion factor almost becomes 1 in the case of 980 nm pumping, whereas for 1480 nm pumping the best one gets is about 1.6.
  • Quantum mechanics puts a lower limit of 3 dB on the optical noise figure at high optical gain. 980 nm pumping provides a value of 3.1 dB, close to the quantum limit, whereas 1480 nm pumping gives a value of 4.2 dB.
  • The 1480 nm pump needs more electrical power compared with the 980 nm pump.

Application

980 nm pumped EDFAs are widely used in terrestrial systems, while 1480 nm pumps are used for Remote Optically Pumped Amplifiers (ROPAs) in subsea links where it is difficult to place amplifiers. For submarine systems, remote pumping can be used so that the amplifiers do not have to be electrically fed, removing electronic parts from the line. Nowadays, this is used for pumping over distances of up to 200 km.

The erbium-doped fiber can be activated by a pump wavelength of 980 or 1480 nm, but only the latter is used in repeaterless systems, due to the lower fiber loss at 1480 nm with respect to the loss at 980 nm. This allows the distance between the terminal and the remote amplifier to be increased.

In a typical configuration, the ROPA comprises a simple short length of erbium-doped fiber in the transmission line, placed a few tens of kilometers before a shore terminal or a conventional in-line EDFA. The remote EDF is backward-pumped by a 1480 nm laser from the terminal or in-line EDFA, thus providing signal gain.


What Is Coherent Communication?

Definition of coherent light

Coherent light consists of two light waves that:

1) have the same oscillation direction;

2) have the same oscillation frequency;

3) have the same phase, or maintain a constant phase relationship with each other.

Two coherent light waves produce interference within the area where they meet.

                      Principles of Coherent Communication

                      Coherent communication technologies mainly include coherent modulation and coherent detection.

                      Coherent modulation uses the signals that are propagated to change the frequencies, phases, and amplitudes of optical carriers. (Intensity modulation only changes the strength of light.)

Coherent detection mixes the laser light generated by a local oscillator (LO) with the incoming signal light in an optical hybrid to produce an intermediate-frequency (IF) signal that maintains constant frequency, phase, and amplitude relationships with the signal light.

                       

                       

                      The motivation behind using the coherent communication techniques is two-fold.

                      First, the receiver sensitivity can be improved by up to 20 dB compared with that of IM/DD systems.

                      Second, the use of coherent detection may allow a more efficient use of fiber bandwidth by increasing the spectral efficiency of WDM systems
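As a rough numerical illustration of the detection principle (toy values and names, not from the article): the photodetected beat term between signal and LO oscillates at the IF and carries the transmitted phase, which is what lets a coherent receiver recover phase-modulated symbols.

```python
import numpy as np

# Toy numerical sketch (illustrative values, not from the article): the
# photodetected beat term cos(2*pi*f_IF*t + phase) preserves the transmitted
# phase, so demodulating at the IF recovers a phase-encoded symbol.
fs = 100e9                      # sample rate of the toy model, Hz
f_if = 5e9                      # intermediate frequency = f_signal - f_LO
t = np.arange(200) / fs         # 2 ns window = 10 full IF periods
tx_phase = np.pi / 4            # one phase-modulated symbol

beat = np.cos(2 * np.pi * f_if * t + tx_phase)   # detected IF beat term

# Correlate against the IF reference; the residual angle is the signal phase
ref = np.exp(-1j * 2 * np.pi * f_if * t)
est_phase = np.angle(np.sum(beat * ref))
print(round(est_phase, 3))      # 0.785 rad, i.e. pi/4
```

An intensity-modulation receiver would discard this phase entirely; the LO mixing step is what makes QPSK and similar formats detectable.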


In a non-coherent WDM system, each optical channel on the line side uses only one binary channel to carry service information. The service transmission rate on each optical channel is called the bit rate, while the binary channel rate is called the baud rate. In this sense, the baud rate was equal to the bit rate. The spectral width of an optical signal is determined by the baud rate: the spectral width is linearly proportional to the baud rate, so a higher baud rate produces a larger spectral width.

• Baud (pronounced /bɔ:d/ and abbreviated “Bd”) is the unit for representing data communication speed. It indicates the number of signal changes occurring per second on a device, for example a modulator-demodulator (modem). With modern encoding, one baud (one signal change) can represent two or more bits. Because each change in a carrier can transmit multiple bits with current high-speed modulation techniques, the baud rate differs from the bit rate.

                      In practice, the spectral width of the optical signal cannot be larger than the frequency spacing between WDM channels; otherwise, the optical spectrums of the neighboring WDM channels will overlap, causing interference among data streams on different WDM channels and thus generating bit errors and a system penalty.

For example, the spectral width of a 100G BPSK/DPSK signal is about 100 GHz, which means the simple modulation used in 40G BPSK/DPSK systems is not suitable for a 50 GHz channel-spaced 100G system because it would cause a high crosstalk penalty. When the baud rate reaches 100 Gbaud, the spectral width of the BPSK/DPSK signal is greater than 50 GHz. Thus, it is impossible to achieve 50 GHz channel spacing in a 100G BPSK/DPSK transmission system.

                      (This is one reason that BPSK cannot be used in a 100G coherent system. The other reason is that high-speed ADC devices are costly.)

                      A 100G coherent system must employ new technology. The system must employ more advanced multiplexing technologies so that an optical channel contains multiple binary channels. This reduces the baud rate while keeping the line bit rate unchanged, ensuring that the spectral width is less than 50 GHz even after the line rate is increased to 100 Gbit/s. These multiplexing technologies include quadrature phase shift keying (QPSK) modulation and polarization division multiplexing (PDM).
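The arithmetic behind this can be sketched with a hypothetical helper (payload rates only; FEC overhead ignored):

```python
# Sketch of the baud-rate arithmetic (hypothetical helper, payload rates only,
# FEC overhead ignored): line bit rate = baud rate x bits/symbol x polarizations.
def line_rate_gbps(baud, bits_per_symbol, polarizations=1):
    return baud * bits_per_symbol * polarizations

# PDM-QPSK: QPSK carries 2 bits/symbol and PDM doubles that with 2
# polarizations, so 100 Gbit/s needs only 25 Gbaud and the spectrum
# stays inside a 50 GHz DWDM slot.
print(line_rate_gbps(25, 2, 2))    # 100
# Plain BPSK (1 bit/symbol, single polarization) would need 100 Gbaud:
print(line_rate_gbps(100, 1))      # 100
```

The same relation explains why higher-order formats keep raising capacity without widening the spectrum: only the baud rate sets the spectral width.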

                      For coherent signals with wide optical spectrum, the traditional scanning method using an OSA or inband polarization method (EXFO) cannot correctly measure system OSNR. Therefore, use the integral method to measure OSNR of coherent signals.

                      Perform the following operations to measure OSNR using the integral method:

1. Position the central frequency of the wavelength under test in the middle of the OSA screen.
2. Select an appropriate bandwidth span for integration (for 40G/100G coherent signals, select 0.4 nm).
3. Read the sum of signal power and noise power within the specified bandwidth. On the OSA, enable the Trace Integ function and read the integral value. As shown in Figure 2, the integral optical power (P + N) is 9.68 µW.
4. Read the integral noise power within the specified bandwidth. Disable the related laser before testing the integral noise power, then obtain the integral noise power N within the signal bandwidth specified in step 2. The integral noise power (N) is 29.58 nW.
5. Calculate the integral noise power (n) within the reference noise bandwidth, generally 0.1 nm. Read the integral power at the central frequency within a 0.1 nm bandwidth. In this example, the integral noise power within the reference noise bandwidth is 7.395 nW.
6. Calculate OSNR: OSNR = 10 × log10{[(P + N) − N]/n}

In this example, with all powers converted to µW, OSNR = 10 × log10[(9.68 − 0.02958)/0.007395] = 31.156 dB
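The steps above can be checked with a short script (the function name and unit handling are ours; all powers are converted to µW before taking the ratio):

```python
import math

def osnr_integral(p_plus_n_uW, n_uW, n_ref_uW):
    """Integral-method OSNR = 10*log10([(P+N) - N] / n), powers in uW.

    p_plus_n_uW: integral signal+noise power over the signal bandwidth
    n_uW:        integral noise power over the same bandwidth (laser off)
    n_ref_uW:    integral noise power in the 0.1 nm reference bandwidth
    """
    return 10 * math.log10((p_plus_n_uW - n_uW) / n_ref_uW)

# Values from the worked example: 9.68 uW, 29.58 nW, 7.395 nW
print(round(osnr_integral(9.68, 0.02958, 0.007395), 3))  # 31.156
```

Note that the nW readings must be scaled to µW (29.58 nW = 0.02958 µW) before the subtraction, exactly as in the worked example.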


                       

We follow the integral method because direct OSNR scanning cannot ensure accuracy, for the following reasons:

A 40G/100G signal has a larger spectral width than a 10G signal, so the signal spectra of adjacent channels overlap. This makes it difficult to test OSNR using the traditional OSA method, which interpolates inter-channel noise as an estimate of in-band noise. The inter-channel noise power contains not only ASE noise power but also signal crosstalk power, so the OSNR obtained using the traditional OSA method is lower than the actual OSNR. The figure below shows the signal spectra in hybrid transmission of 40G and 10G signals with 50 GHz channel spacing: a severe spectrum overlap occurs, and the measured ASE power is greater than it should be. Furthermore, as ROADM and OEQ technologies mature and are widely deployed, filter devices reshape the noise spectrum. As shown in the following figure, the noise power between channels decreases markedly after signals traverse a filter; as a result, the OSNR obtained using the traditional OSA method is higher than the actual OSNR.

                       

Basic understanding of tap ratio for splitters/couplers

Fiber splitters/couplers divide optical power from one common port to two or more split ports, and combine all optical power from the split ports into one common port (1 × N coupler). They operate across an entire band or bands such as C, L, or O. The three-port 1 × 2 tap is a splitter commonly used to access a small amount of signal power in a live fiber span for measurement or OSA analysis. Splitters are referred to by their splitting ratio, which is the power output of an individual split port divided by the total power output of all split ports. Popular splitting ratios are shown in the table below; however, others are available. The equation below can be used to estimate the splitter insertion loss for a typical split port. Excess splitter loss adds to the port’s power-division loss and is signal power lost to the splitter properties. It typically varies between 0.1 and 2 dB; refer to the manufacturer’s specifications for accurate values. It should be noted that the splitter function is symmetrical.

IL = −10 × log10(SR/100) + Γe = −10 × log10(Pi/PT) + Γe

where IL = splitter insertion loss for the split port, dB

Pi = optical output power for a single split port, mW

PT = total optical power output for all split ports, mW

SR = splitting ratio for the split port, %

Γe = splitter excess loss (typical range 0.1 to 2 dB), dB

                      Common splitter applications include

                      • Permanent installation in a fiber link as a tap with 2%|98% splitting ratio. This provides for access to live fiber signal power and OSA spectrum measurement without affecting fiber traffic. Commonly installed in DWDM amplifier systems.

                      • Video and CATV networks to distribute signals.

                      • Passive optical networks (PON).

                      • Fiber protection systems.

                      Example with calculation:

If a 0 dBm signal is launched into the common port of a 25%|75% splitter, the two split ports’ output powers will be about −6.2 and −1.5 dBm (including typical excess loss). Conversely, if a 0 dBm signal is launched into the 25% split port, the common port output power will be about −6.2 dBm.

Calculation:

Launch power = 0 dBm = 1 mW

The tap is 25%|75%, so the equivalent linear powers are

0.250 mW | 0.750 mW

and after converting back to dBm:

-6.02 dBm | -1.25 dBm

(The slightly lower values quoted in the example above include the splitter excess loss.)

Some of the common split ratios and their equivalent optical powers are given below for reference.
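The tap-ratio arithmetic can be checked with a short sketch (the helper name is ours; the formula is the power-division loss −10·log10(SR/100) plus excess loss, as defined above):

```python
import math

def split_port_loss_db(split_ratio_pct, excess_loss_db=0.0):
    """Split-port insertion loss: power-division loss -10*log10(SR/100)
    plus the splitter excess loss (sketch helper, not a vendor API)."""
    return -10 * math.log10(split_ratio_pct / 100) + excess_loss_db

# 0 dBm into an ideal 25%|75% splitter (excess loss taken as 0 here):
print(round(0.0 - split_port_loss_db(25), 2))   # -6.02 dBm on the 25% port
print(round(0.0 - split_port_loss_db(75), 2))   # -1.25 dBm on the 75% port
```

For a 2%|98% monitoring tap, the same function gives about 17 dB loss on the tap port and only about 0.1 dB on the through port, which is why such taps can sit permanently in a live span.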

Q is a measure of the quality of a communication signal and is related to BER: a lower BER corresponds to a higher Q, and a higher Q means better performance. Q is primarily used for translating relatively large BER differences into manageable values.

Pre-FEC signal fail and pre-FEC signal degrade thresholds are provisionable in units of dBQ, so the user does not need to consider the FEC scheme when setting the thresholds; the software automatically converts the dBQ values to FEC corrections per time interval based on the FEC scheme and data rate.

The Q-factor is, in fact, a metric that identifies attenuation in the received signal and a potential LOS; it is an estimate of the optical signal-to-noise ratio (OSNR) at the optical receiver. As attenuation in the received signal increases, the dBQ value drops, and vice versa. Hence a drop in the dBQ value can mean an increase in the pre-FEC BER, and a possible LOS could occur if the problem is not corrected in time.

The quality of an optical Rx signal can be measured by determining the number of “bad” bits in a block of received data. The bad bits in each block are removed and replaced with “good” zeros or ones so that the network path data can still be properly switched and passed on to its destination. This strategy is referred to as Forward Error Correction (FEC) and prevents a complete loss of traffic due to small, unimportant data loss that can be resent later. The process by which the “bad” bits are replaced with “good” bits in an Rx data block is known as mapping. Pre-FEC counts are the FEC counts of “bad” bits before the mapper; the FEC counts (or post-FEC counts) are those after the mapper.

The number of pre-FEC counts over a given period of time represents the status of the optical Rx network signal: an increase in the pre-FEC count means an increase in the number of “bad” bits that need to be replaced by the mapper. Hence a change in the rate of the pre-FEC count (bit error rate, BER) can identify a potential problem upstream in the network. At some point the pre-FEC count will be too high, as there will be too many “bad” bits in the incoming data block for the mapper to replace; this then means a loss of signal (LOS).

As the normal pre-FEC BER spans a wide range (e.g., 1.35E-3 to 6.11E-16) and constantly fluctuates, it can be difficult for a network operator to determine whether there is a potential problem in the network. Hence a dBQ value, known as the Q-factor, is used as a measure of the quality of the received optical signal. It should be consistent with the pre-FEC bit error rate (BER).

The standards define the linear Q-factor as Q = (X1 − X0)/(N1 + N0), where Xj and Nj are the mean and standard deviation of the received mark bit (j = 1) and space bit (j = 0). In dB, Q is usually expressed as 20 × log10(Q); some sources use 10 × log10(Q), so the convention must be checked.

                      For example, the linear Q range 3 to 8 covers the BER range of 1.35E-3 to 6.11E-16.

                      Nortel defines dBQ as 10xlog10(Q/Qref) where Qref is the pre-FEC raw optical Q, which gives a BER of 1E-15 post-FEC assuming a particular error distribution. Some organizations define dBQ as 20xlog10(Q/Qref), so care must be taken when comparing dBQ values from different sources.

The dBQ figure represents the margin, in dBQ, from the pre-FEC BERs that are equivalent to a post-FEC BER of 1E-15. The equivalent linear Q values for these BERs are Qref in the above formula.

Pre-FEC signal degrade can be used the same way a car’s “oil light” is used: it indicates that there is still margin left, but that you are closer to the fail point than expected, so action should be taken.
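These relations can be sketched numerically. The Gaussian-noise formula BER = ½·erfc(Q/√2) is the standard one consistent with the quoted linear-Q range; the dBQ helper follows the 10·log10 convention quoted for Nortel (some vendors use 20·log10):

```python
import math

def q_to_ber(q_linear):
    """BER for Gaussian noise: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_linear / math.sqrt(2))

def dbq_margin(q_linear, q_ref):
    """dBQ margin, 10*log10 convention; some vendors use 20*log10."""
    return 10 * math.log10(q_linear / q_ref)

# The quoted linear-Q range 3..8 maps to roughly the quoted BER range:
print(f"{q_to_ber(3):.2e}")         # 1.35e-03
print(f"{q_to_ber(8):.2e}")         # ~6e-16
print(round(dbq_margin(8, 3), 2))   # 4.26 dBQ of margin
```

The compression is the point: a thirteen-decade spread in BER collapses into a few dB of Q margin, which is far easier to threshold and trend.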

The optical time-domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber and test the integrity of fiber-optic cables; it is the optical equivalent of an electronic time-domain reflectometer. It injects a series of optical pulses into the fiber under test and extracts, from the same end of the fiber, light that is scattered (Rayleigh backscatter) or reflected back from points along the fiber. The strength of the return pulses is measured, integrated as a function of time, and plotted as a function of fiber length.

                      Using an OTDR, we can:

                      1. Measure the distance to a fusion splice, mechanical splice, connector, or significant bend in the fiber.

                      2. Measure the loss across a fusion splice, mechanical splice, connector, or significant bend in the fiber.

                      3. Measure the intrinsic loss due to mode-field diameter variations between two pieces of single-mode optical fiber connected by a splice or connector.

                      4. Determine the relative amount of offset and bending loss at a splice or connector joining two single-mode fibers.

                      5. Determine the physical offset at a splice or connector joining two pieces of single-mode fiber, when bending loss is insignificant.

                      6. Measure the optical return loss of discrete components, such as mechanical splices and connectors.

                      7. Measure the integrated return loss of a complete fiber-optic system.

                      8. Measure a fiber’s linearity, monitoring for such things as local mode-field pinch-off.

                      9. Measure the fiber slope, or fiber attenuation (typically expressed in dB/km).

                      10. Measure the link loss, or end-to-end loss of the fiber network.

                      11. Measure the relative numerical apertures of two fibers.

                      12. Make rudimentary measurements of a fiber’s chromatic dispersion.

                      13. Measure polarization mode dispersion.

                      14. Estimate the impact of reflections on transmitters and receivers in a fiber-optic system.

                      15. Provide active monitoring on live fiber-optic systems.

                      16. Compare previously installed waveforms to current traces.

The maintenance signals defined in [ITU-T G.709] provide network connection status information in the form of payload missing indication (PMI), backward error and defect indication (BEI, BDI), and open connection indication (OCI); link and tandem connection status information is provided in the form of locked indication (LCK) and alarm indication signals (FDI, AIS).

                       

                       

                       

                       

                      Interaction diagrams are collected from ITU G.798 and OTN application note from IpLight

Here we will discuss the advantages of OTN (Optical Transport Network) over SDH/SONET.

The OTN architecture concept was developed by the ITU-T over a decade ago to build upon Synchronous Digital Hierarchy (SDH) and Dense Wavelength-Division Multiplexing (DWDM) experience and to provide bit-rate efficiency, resiliency, and management at high capacity. OTN therefore looks a lot like Synchronous Optical Networking (SONET)/SDH in structure, but with less overhead and more management features.

                      It is a common misconception that OTN is just SDH with a few insignificant changes. Although the multiplexing structure and terminology look the same, the changes in OTN have a great impact on its use in, for example, a multi-vendor, multi-domain environment. OTN was created to be a carrier technology, which is why emphasis was put on enhancing transparency, reach, scalability and monitoring of signals carried over large distances and through several administrative and vendor domains.

                      The advantages of OTN compared to SDH are mainly related to the introduction of the following changes:

                      Transparent Client Signals:

                      In OTN the Optical Channel Payload Unit-k (OPUk) container is defined to include the entire SONET/SDH and Ethernet signal, including associated overhead bytes, which is why no modification of the overhead is required when transporting through OTN. This allows the end user to view exactly what was transmitted at the far end and decreases the complexity of troubleshooting as transport and client protocols aren’t the same technology.

                      OTN uses asynchronous mapping and demapping of client signals, which is another reason why OTN is timing transparent.

                      Better Forward Error Correction:

                      OTN has increased the number of bytes reserved for Forward Error Correction (FEC), allowing a theoretical improvement of the Signal-to-Noise Ratio (SNR) by 6.2 dB. This improvement can be used to enhance the optical systems in the following areas:

• Increase the reach of optical systems by increasing span length or increasing the number of spans.
• Increase the number of channels in the optical system, as the theoretically required power has been lowered by 6.2 dB, thereby also reducing the non-linear effects, which depend on the total power in the system.
• Ease the introduction of transparent optical network elements, which cannot be introduced without a penalty, thanks to the increased power budget. These elements include optical add-drop multiplexers (OADMs), photonic cross-connects (PXCs), splitters, etc., which are fundamental for the evolution from point-to-point optical networks to meshed ones.
• The FEC part of OTN has been utilised on the line side of DWDM transponders for at least the last 5 years, allowing a significant increase in reach/capacity.

                      Better scalability:

The old transport technologies like SONET/SDH were created to carry voice circuits, which is why the granularity was very fine – down to 1.5 Mb/s. OTN is designed to carry bulkier payloads, which is why the granularity is coarser and the multiplexing structure less complicated.

                      Tandem Connection Monitoring:

The introduction of six additional Tandem Connection Monitoring (TCM) levels, combined with the decoupling of transport and payload protocols, allows a significant improvement in monitoring signals that are transported through several administrative domains, e.g. a meshed network topology where the signals pass through several other operators before reaching the end users.

                      In a multi-domain scenario – “a classic carrier’s carrier scenario”, where the originating domain can’t ensure performance or even monitor the signal when it passes to another domain – TCM introduces a performance monitoring layer between line and path monitoring allowing each involved network to be monitored, thus reducing the complexity of troubleshooting as performance data is accessible for each individual part of the route.

Also, a major drawback of SDH is that a lot of capacity during packet transport is wasted on overhead and stuffing, which can also create transmission delays, leading to problems for the end application, especially if it is designed for asynchronous, bursty communication behaviour. This over-complexity is probably one of the reasons why the evolution of SDH stopped at STM-256 (40 Gbit/s).

                      References: OTN and NG-OTN: Overview by GEANT